WO2019095571A1 - Human-figure emotion analysis method, apparatus, and storage medium - Google Patents

Human-figure emotion analysis method, apparatus, and storage medium

Info

Publication number
WO2019095571A1
WO2019095571A1 (PCT/CN2018/076168)
Authority
WO
WIPO (PCT)
Prior art keywords
emotion
real
image
sample
classifier
Prior art date
Application number
PCT/CN2018/076168
Other languages
French (fr)
Chinese (zh)
Inventor
陈林 (Chen Lin)
Original Assignee
平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2019095571A1 publication Critical patent/WO2019095571A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155 Bayesian classification

Definitions

  • The present application relates to the field of computer vision processing technologies, and in particular, to a character emotion analysis method, program, electronic device, and computer readable storage medium.
  • Facial emotion recognition is an important part of research on human-computer interaction and affective computing, involving psychology, sociology, anthropology, life sciences, cognitive science, computer science, and other fields, and is of great significance for making human-computer interaction intelligent and harmonious.
  • FACS is the "Facial Action Coding System" created in 1976 after years of research. According to the anatomical features of the face, the face can be divided into a number of motion units (action units, AUs) that are both independent and interrelated; the movement characteristics of these units and the main regions they control can reflect facial expressions.
  • The present application provides a character emotion analysis method, program, electronic device, and computer readable storage medium, the main purpose of which is to identify the AU features and their probabilities in a real-time facial image and, from each AU feature and probability, recognize the emotion of the person in the real-time facial image, effectively improving the efficiency of character emotion recognition.
  • The present application provides an electronic device including a memory, a processor, and a camera device, the memory storing a character emotion analysis program that, when executed by the processor, implements steps including the following:
  • The feature vector is input into a predetermined emotion classifier to obtain the probability of each emotion recognized from the real-time facial image, and the emotion with the highest probability is taken as the emotion recognized from the real-time facial image.
  • The present application further provides a character emotion analysis method, comprising steps including the following:
  • The feature vector is input into a predetermined emotion classifier to obtain the probability of each emotion recognized from the real-time facial image, and the emotion with the highest probability is taken as the emotion recognized from the real-time facial image.
  • The present application further provides a computer readable storage medium that includes a character emotion analysis program which, when executed by a processor, implements any of the steps of the character emotion analysis method described above.
  • The present application further provides a character emotion analysis program comprising an acquisition module, an AU recognition module, a feature extraction module, and an emotion recognition module; when the character emotion analysis program is executed by a processor, any of the steps of the character emotion analysis method described above is implemented.
  • In the character emotion analysis method, program, electronic device, and computer readable storage medium proposed by the present application, a real-time facial image is identified from a real-time image, each AU feature of the real-time facial image is extracted by an AU classifier, the probabilities of the AU features are combined into a feature vector, and the feature vector is input into an emotion classifier to recognize the probability of each emotion present in the real-time facial image; the emotion with the highest probability is taken as the emotion in the real-time image.
  • FIG. 1 is a schematic diagram of the application environment of a preferred embodiment of the character emotion analysis method of the present application;
  • FIG. 2 is a block diagram of a preferred embodiment of the character emotion analysis program of FIG. 1;
  • FIG. 3 is a flow chart of a preferred embodiment of the character emotion analysis method of the present application.
  • The present application provides a character emotion analysis method applied to an electronic device 1.
  • Referring to FIG. 1, it is a schematic diagram of the application environment of a preferred embodiment of the character emotion analysis method of the present application.
  • The electronic device 1 may be a terminal device having a computing function, such as a server, a smart phone, a tablet computer, a portable computer, or a desktop computer.
  • the electronic device 1 includes a processor 12, a memory 11, an imaging device 13, a network interface 14, and a communication bus 15.
  • the memory 11 includes at least one type of readable storage medium.
  • the at least one type of readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, a card type memory 11, or the like.
  • the readable storage medium may be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1.
  • The readable storage medium may also be an external memory 11 of the electronic device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the electronic device 1.
  • The readable storage medium of the memory 11 is generally used to store the character emotion analysis program 10 installed on the electronic device 1, a face image sample library, the pre-trained AU classifier and emotion classifier, and the like.
  • the memory 11 can also be used to temporarily store data that has been output or is about to be output.
  • The processor 12, in some embodiments, may be a Central Processing Unit (CPU), microprocessor, or other data processing chip for running program code or processing data stored in the memory 11, such as executing the character emotion analysis program 10.
  • the imaging device 13 may be part of the electronic device 1 or may be independent of the electronic device 1.
  • When the electronic device 1 is a terminal device having a camera, such as a smartphone, a tablet computer, or a portable computer, the camera device 13 is the camera of the electronic device 1.
  • The electronic device 1 may also be a server, with the camera device 13 connected to the electronic device 1 via a network; for example, the camera device 13 may be installed in a specific place, such as an office or a monitored area, capture real-time images of targets entering that place, and transmit the captured real-time images to the processor 12 through the network.
  • The network interface 14 may optionally include a standard wired interface or a wireless interface (such as a WI-FI interface), and is typically used to establish a communication connection between the electronic device 1 and other electronic devices.
  • Communication bus 15 is used to implement connection communication between these components.
  • FIG. 1 shows only the electronic device 1 with components 11-15, but it should be understood that not all of the illustrated components need be implemented; more or fewer components may be implemented instead.
  • Optionally, the electronic device 1 may further include a user interface. The user interface may include an input unit such as a keyboard, a voice input device such as a microphone or another device with a voice recognition function, and a voice output device such as a speaker or headphones.
  • the user interface may also include a standard wired interface and a wireless interface.
  • Optionally, the electronic device 1 may further include a display, which may also be referred to as a display screen or a display unit.
  • In some embodiments, it may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an Organic Light-Emitting Diode (OLED) touch display, or the like.
  • The display is used to display information processed in the electronic device 1 and to display a visualized user interface.
  • the electronic device 1 further comprises a touch sensor.
  • the area provided by the touch sensor for the user to perform a touch operation is referred to as a touch area.
  • the touch sensor described herein may be a resistive touch sensor, a capacitive touch sensor, or the like.
  • the touch sensor includes not only a contact type touch sensor but also a proximity type touch sensor or the like.
  • the touch sensor may be a single sensor or a plurality of sensors arranged, for example, in an array.
  • the area of the display of the electronic device 1 may be the same as or different from the area of the touch sensor.
  • Optionally, the display is stacked with the touch sensor to form a touch display screen, and the device detects user-triggered touch operations based on the touch display screen.
  • Optionally, the electronic device 1 may further include a radio frequency (RF) circuit, sensors, an audio circuit, and the like, which are not described in detail here.
  • In the apparatus embodiment shown in FIG. 1, the memory 11, as a computer storage medium, may include an operating system and the character emotion analysis program 10; when the processor 12 executes the character emotion analysis program 10 stored in the memory 11, steps including the following are implemented:
  • The feature vector is input into a predetermined emotion classifier to obtain the probability of each emotion recognized from the real-time facial image, and the emotion with the highest probability is taken as the emotion recognized from the real-time facial image.
  • According to the Facial Action Coding System (FACS) summarized by Paul Ekman, humans have 39 main facial action units (AUs). Each AU is a code for the contraction of a small group of facial muscles; for example, AU1 raises the inner corner of the eyebrow, AU2 raises the outer corner of the eyebrow, AU9 wrinkles the nose, and AU22 tightens the lips and turns them outward.
  • When the camera device 13 captures a real-time image, it sends the image to the processor 12. After receiving the real-time image, the processor 12 first obtains the size of the picture and creates a grayscale image of the same size; converts the acquired color image into a grayscale image while creating a memory space; equalizes the grayscale image's histogram to reduce the amount of grayscale image information and speed up detection; then loads a training library, detects the face in the picture, and returns an object containing face information; obtains the data describing the face's location and records the count; and finally obtains and saves the face region, completing one real-time facial image extraction.
  • Specifically, the face recognition algorithm for extracting the real-time facial image from the real-time image may also be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, or the like.
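  • As an illustration of the extraction pipeline described above, the following is a minimal sketch using OpenCV's Haar-cascade detector as the "training library"; the cascade choice and helper name are assumptions for illustration, not details specified by the application.

```python
import cv2

def extract_face(frame_bgr):
    """Return the cropped face regions detected in one captured frame."""
    # Create a grayscale image of the same size as the captured color image.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Equalize the histogram to reduce grayscale information and speed up detection.
    gray = cv2.equalizeHist(gray)
    # Load a pre-trained face detector and detect the faces in the picture.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Each (x, y, w, h) records the location of a face; save those regions.
    return [frame_bgr[y:y + h, x:x + w] for (x, y, w, h) in faces]
```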
  • Next, the real-time facial image extracted from the real-time image by the face recognition algorithm is input into a predetermined AU classifier, which identifies each AU in the real-time facial image and the probability (in the range 0-1) that the image contains each AU, wherein the training steps of the predetermined AU classifier include:
  • In the sample preparation phase for each AU, a certain number of face images are prepared. From a large number of face images, those containing the AU are selected as positive sample images of that AU, and negative sample images are prepared for each AU; the positive and negative sample images of every AU form a first sample set. The image regions corresponding to different AUs may be the same; for example, AU1 and AU2 both involve the region of the face image containing the eyebrows, eyes, and forehead, while AU9 and AU22 involve the nose and lip regions. Any image region that does not contain the AU can serve as a negative sample image for that AU.
  • A first proportion (for example, 60%) of sample images is randomly drawn from each AU's positive/negative sample images as a training set, and a second proportion of the remaining sample images of that AU is drawn as a verification set, for example 50% of the remainder, i.e., 20% of all of the AU's sample images. A convolutional neural network (CNN) is trained on each AU's training set to obtain the AU classifier. To guarantee the AU classifier's accuracy, it must be verified: the trained AU classifier's accuracy is measured on the verification set, and if the accuracy rate is greater than or equal to a preset accuracy rate (for example, 90%), training ends; otherwise, the number of sample pictures in the sample set is increased and the above training steps are re-executed.
  • The training step of the predetermined AU classifier further includes preprocessing the sample images in the first sample set, such as scaling, cropping, flipping, and/or warping; training the convolutional neural network on the preprocessed sample pictures effectively improves the authenticity and accuracy of the model training. A sketch of this per-AU training procedure is shown below.
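  • The following is a minimal sketch of the per-AU training procedure described above, under assumed details (64×64 grayscale inputs, a small Keras CNN, and a 90% accuracy gate); the actual network architecture is not specified by the application.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers, models

def train_au_classifier(images, labels, preset_accuracy=0.90):
    """Train one binary CNN that outputs P(AU present) for a single AU."""
    # First proportion (60%) as the training set; half of the remainder
    # (i.e., 20% of all samples) as the verification set.
    x_train, x_rest, y_train, y_rest = train_test_split(
        images, labels, train_size=0.60, stratify=labels)
    x_val, _, y_val, _ = train_test_split(
        x_rest, y_rest, train_size=0.50, stratify=y_rest)

    model = models.Sequential([
        layers.Input(shape=(64, 64, 1)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),  # probability that the AU is present
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=10, verbose=0)

    # Verify on the held-out set; if accuracy falls below the preset rate,
    # the caller should enlarge the sample set and re-run these steps.
    _, accuracy = model.evaluate(x_val, y_val, verbose=0)
    return model if accuracy >= preset_accuracy else None
```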
  • Suppose that, in the output of the AU classifier for the real-time facial image, the probabilities of each AU (for example, 39 of them) being recognized are P1, P2, P3, ..., P39. The probabilities of all AUs are combined into a feature vector V1 = [P1, P2, P3, ..., P39], which serves as the feature vector of the real-time facial image.
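  • Composing V1 could look like the following sketch, assuming `au_classifiers` is a list of the 39 trained binary classifiers from the previous sketch.

```python
import numpy as np

def au_feature_vector(face_image, au_classifiers):
    """V1 = [P1, P2, ..., P39]: one probability per action unit."""
    probs = [float(clf.predict(face_image[np.newaxis], verbose=0)[0, 0])
             for clf in au_classifiers]
    return np.array(probs)
```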
  • The feature vector V1 is then input into a predetermined emotion classifier to identify each emotion that may be present in the real-time facial image, together with the probability of each emotion (i.e., the likelihood that it is present).
  • the training steps of the predetermined emotion classifier include:
  • Reusing the sample images of the first sample set, each sample image is input into the AU classifier to obtain the probability of each AU being recognized in it, and those per-AU probabilities are combined into one feature vector per sample image.
  • At the same time, each sample image is classified and labeled according to the emotion it presents, i.e., assigned an emotion label (for example, "happy"), yielding a second sample set containing the feature vectors and emotion labels.
  • From the second sample set, a first proportion (for example, 60%) of sample images is randomly drawn as a training set, and a second proportion of the remaining images, e.g., 50% (i.e., 20% of the second sample set), is drawn as a verification set. The training set is trained with the naive Bayes algorithm to obtain the emotion classifier. To guarantee the emotion classifier's accuracy, it must be verified: the trained emotion classifier's accuracy is measured on the verification set, and if the accuracy rate is greater than or equal to a preset accuracy rate (for example, 90%), training ends; otherwise, the number of sample pictures in the sample set is increased and the above training steps are re-executed.
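  • A sketch of this emotion-classifier training under the same 60/20/20 split, using scikit-learn's GaussianNB (the naive Bayes variant chosen below); the helper name and threshold default are illustrative.

```python
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

def train_emotion_classifier(feature_vectors, emotion_labels, preset_accuracy=0.90):
    """Train naive Bayes on AU-probability vectors labelled with emotions."""
    x_train, x_rest, y_train, y_rest = train_test_split(
        feature_vectors, emotion_labels, train_size=0.60)
    # Half of the remaining 40%, i.e., 20% of the whole set, as verification set.
    x_val, _, y_val, _ = train_test_split(x_rest, y_rest, train_size=0.50)

    clf = GaussianNB().fit(x_train, y_train)
    accuracy = clf.score(x_val, y_val)
    # If accuracy is below the preset rate, enlarge the sample set and retrain.
    return clf if accuracy >= preset_accuracy else None
```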
  • It should be noted that there are three naive Bayes variants: GaussianNB, MultinomialNB, and BernoulliNB.
  • GaussianNB is naive Bayes with a Gaussian prior.
  • MultinomialNB is naive Bayes with a multinomial prior.
  • BernoulliNB is naive Bayes with a Bernoulli prior.
  • The three suit different classification scenarios: MultinomialNB and BernoulliNB are used for discrete-valued models, whereas GaussianNB handles continuous-valued features; since the features here are continuous AU probabilities, GaussianNB is chosen.
  • GaussianNB assumes that the prior probability of each feature follows a normal distribution, giving the following formula:

$$P(X_j = x_j \mid Y = C_k) = \frac{1}{\sqrt{2\pi\sigma_k^2}} \exp\left(-\frac{(x_j - \mu_k)^2}{2\sigma_k^2}\right)$$

  • Here C_k is the kth class of Y, and μ_k and σ_k² are values to be estimated from the training set: GaussianNB computes them from the training data, where μ_k is the mean of all X_j over the samples of class C_k and σ_k² is the variance of all X_j over the samples of class C_k.
  • Suppose that after the feature vector V1 of the real-time facial image is input into the emotion classifier, several emotions are recognized from the image, each with a different probability (in the range 0-1), for example, happy: 0.6, surprised: 0.3, sad: 0.1. In that output, "happy" has the highest probability, so the emotion recognized from the real-time facial image is determined to be "happy".
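  • For reference, scikit-learn's GaussianNB exposes the estimates described above after fitting: `theta_[k, j]` holds μ_k for feature X_j in class C_k and `var_[k, j]` holds σ_k² (named `sigma_` before scikit-learn 1.0), while `predict_proba` yields per-emotion probabilities like those in this example. The data below are toy values for illustration only.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

X = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])  # AU probabilities
y = np.array(["happy", "happy", "sad", "sad"])                  # emotion labels

clf = GaussianNB().fit(X, y)
print(clf.theta_)                         # per-class feature means (mu_k)
print(clf.var_)                           # per-class feature variances (sigma_k^2)
print(clf.predict_proba([[0.85, 0.15]]))  # probability of each emotion
```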
  • The parameters that must be preset in the above embodiments, such as the first proportion, the second proportion, and the preset accuracy rate, can be adjusted according to user requirements.
  • The electronic device 1 proposed in the above embodiment identifies a real-time facial image from a real-time image, extracts each AU feature of the real-time facial image through the AU classifier, combines the probabilities of the AU features into a feature vector, and inputs the feature vector into the emotion classifier to recognize the probability of each emotion present in the real-time facial image, taking the emotion with the highest probability as the emotion in the real-time image.
  • the character sentiment analysis program 10 can be partitioned into one or more modules, one or more modules being stored in the memory 11 and executed by the processor 12 to complete the application.
  • a module as referred to in this application refers to a series of computer program instructions that are capable of performing a particular function.
  • Referring to FIG. 2, it is a functional block diagram of a preferred embodiment of the character emotion analysis program 10 of FIG. 1.
  • the character emotion analysis program 10 can be divided into: an acquisition module 110, an AU recognition module 120, a feature extraction module 130, and an emotion recognition module 140.
  • The functions or operations implemented by the modules 110-140 are similar to those described above and are not detailed again here; by way of example:
  • the acquiring module 110 is configured to acquire a real-time image captured by the camera device 13 and extract a real-time facial image from the real-time image by using a face recognition algorithm;
  • the AU identification module 120 is configured to input the real-time facial image into a predetermined AU classifier to obtain a probability of each AU identified from the real-time facial image;
  • a feature extraction module 130, configured to combine the probabilities of all AUs in the real-time facial image into the feature vector of the real-time facial image; and
  • an emotion recognition module 140, configured to input the feature vector into a predetermined emotion classifier, obtain the probability of each emotion recognized from the real-time facial image, and take the emotion with the highest probability as the emotion recognized from the real-time facial image.
  • the present application also provides a method for character emotion analysis.
  • Referring to FIG. 3, it is a flow chart of a preferred embodiment of the character emotion analysis method of the present application. The method may be performed by an apparatus, and the apparatus may be implemented by software and/or hardware.
  • The character emotion analysis method includes steps S10 to S40.
  • Step S10: acquire a real-time image captured by the camera device, and extract a real-time facial image from the real-time image by using a face recognition algorithm.
  • When the camera captures a real-time image, it sends the image to the processor. After receiving the real-time image, the processor first obtains the size of the picture and creates a grayscale image of the same size; converts the acquired color image into a grayscale image while creating a memory space; equalizes the grayscale image's histogram to reduce the amount of grayscale image information and speed up detection; then loads a training library, detects the face in the picture, and returns an object containing face information; obtains the data describing the face's location and records the count; and finally obtains and saves the face region, completing one real-time facial image extraction.
  • Specifically, the face recognition algorithm for extracting the real-time facial image from the real-time image may also be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, or the like.
  • Step S20: input the real-time facial image into a predetermined AU classifier to obtain the probability of each AU recognized from the real-time facial image.
  • The real-time facial image extracted from the real-time image by the face recognition algorithm is input into a predetermined AU classifier, which identifies each AU in the real-time facial image and the probability (in the range 0-1) that the image contains each AU, wherein the training steps of the predetermined AU classifier include:
  • In the sample preparation phase for each AU, a certain number of face images are prepared. From a large number of face images, those containing the AU are selected as positive sample images of that AU, and negative sample images are prepared for each AU; the positive and negative sample images of every AU form a first sample set. The image regions corresponding to different AUs may be the same; for example, AU1 and AU2 both involve the region of the face image containing the eyebrows, eyes, and forehead, while AU9 and AU22 involve the nose and lip regions. Any image region that does not contain the AU can serve as a negative sample image for that AU.
  • A first proportion (for example, 60%) of sample images is randomly drawn from each AU's positive/negative sample images as a training set, and a second proportion of the remaining sample images of that AU is drawn as a verification set, for example 50% of the remainder, i.e., 20% of all of the AU's sample images. A convolutional neural network is trained on each AU's training set to obtain the AU classifier. To guarantee the AU classifier's accuracy, it must be verified: the trained AU classifier's accuracy is measured on the verification set, and if the accuracy rate is greater than or equal to a preset accuracy rate (for example, 90%), training ends; otherwise, the number of sample pictures in the sample set is increased and the above training steps are re-executed.
  • The training step of the predetermined AU classifier further includes preprocessing the sample images in the first sample set, such as scaling, cropping, flipping, and/or warping; training the convolutional neural network on the preprocessed sample pictures effectively improves the authenticity and accuracy of the model training.
  • Step S30: combine the probabilities of all AUs in the real-time facial image into the feature vector of the real-time facial image.
  • Suppose that, in the output of the AU classifier for the real-time facial image, the probabilities of each AU (for example, 39 of them) being recognized are P1, P2, P3, ..., P39. The probabilities of all AUs are combined into a feature vector V1 = [P1, P2, P3, ..., P39], which serves as the feature vector of the real-time facial image.
  • Step S40: input the feature vector into a predetermined emotion classifier to obtain the probability of each emotion recognized from the real-time facial image, and take the emotion with the highest probability as the emotion recognized from the real-time facial image.
  • The feature vector V1 is input into a predetermined emotion classifier to identify each emotion that may be present in the real-time facial image, together with the probability of each emotion (i.e., the likelihood that it is present).
  • the training steps of the predetermined emotion classifier include:
  • Reusing the sample images of the first sample set, each sample image is input into the AU classifier to obtain the probability of each AU being recognized in it, and those per-AU probabilities are combined into one feature vector per sample image. At the same time, each sample image is classified and labeled according to the emotion it presents, i.e., assigned an emotion label (for example, "happy"), yielding a second sample set containing the feature vectors and emotion labels.
  • From the second sample set, a first proportion (for example, 60%) of sample images is randomly drawn as a training set, and a second proportion of the remaining images, e.g., 50% (i.e., 20% of the second sample set), is drawn as a verification set. The training set is trained with the naive Bayes algorithm to obtain the emotion classifier. To guarantee the emotion classifier's accuracy, it must be verified: the trained emotion classifier's accuracy is measured on the verification set, and if the accuracy rate is greater than or equal to a preset accuracy rate (for example, 90%), training ends; otherwise, the number of sample pictures in the sample set is increased and the above training steps are re-executed.
  • It should be noted that there are three naive Bayes variants: GaussianNB, MultinomialNB, and BernoulliNB.
  • GaussianNB is naive Bayes with a Gaussian prior.
  • MultinomialNB is naive Bayes with a multinomial prior.
  • BernoulliNB is naive Bayes with a Bernoulli prior.
  • The three suit different classification scenarios: MultinomialNB and BernoulliNB are used for discrete-valued models, whereas GaussianNB handles continuous-valued features; since the features here are continuous AU probabilities, GaussianNB is chosen.
  • GaussianNB assumes that the prior probability of each feature follows a normal distribution, giving the following formula:

$$P(X_j = x_j \mid Y = C_k) = \frac{1}{\sqrt{2\pi\sigma_k^2}} \exp\left(-\frac{(x_j - \mu_k)^2}{2\sigma_k^2}\right)$$

  • Here C_k is the kth class of Y, and μ_k and σ_k² are values to be estimated from the training set: GaussianNB computes them from the training data, where μ_k is the mean of all X_j over the samples of class C_k and σ_k² is the variance of all X_j over the samples of class C_k.
  • Suppose that after the feature vector V1 of the real-time facial image is input into the emotion classifier, several emotions are recognized from the image, each with a different probability (in the range 0-1), for example, happy: 0.6, surprised: 0.3, sad: 0.1. In that output, "happy" has the highest probability, so the emotion recognized from the real-time facial image is determined to be "happy".
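  • Tying steps S10-S40 together, an end-to-end inference pass could look like this sketch; it reuses the hypothetical helpers from the earlier sketches and assumes the face crop is resized and normalized the same way as the training images (omitted here).

```python
import numpy as np

def analyze_emotion(frame_bgr, au_classifiers, emotion_clf):
    faces = extract_face(frame_bgr)                   # S10: real-time facial image
    if not faces:
        return None
    v1 = au_feature_vector(faces[0], au_classifiers)  # S20/S30: AU probabilities -> V1
    probs = emotion_clf.predict_proba([v1])[0]        # S40: P(each emotion)
    best = int(np.argmax(probs))
    return emotion_clf.classes_[best], probs[best]    # e.g., ("happy", 0.6)
```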
  • In the character emotion analysis method proposed by the above embodiment, a real-time facial image is identified from a real-time image, each AU feature of the real-time facial image is extracted by the AU classifier, the probabilities of the AU features are combined into a feature vector, and the feature vector is input into the emotion classifier to recognize the probability of each emotion present in the real-time facial image; the emotion with the highest probability is taken as the emotion in the real-time image.
  • In addition, an embodiment of the present application further provides a computer readable storage medium, the computer readable storage medium including a character emotion analysis program which, when executed by a processor, implements operations including the following:
  • The feature vector is input into a predetermined emotion classifier to obtain the probability of each emotion recognized from the real-time facial image, and the emotion with the highest probability is taken as the emotion recognized from the real-time facial image.
  • the training step of the predetermined AU classifier comprises:
  • preparing a first sample set containing a certain number of face sample images, extracting from the face sample images the image region matched to each AU as a positive sample image of that AU, and preparing a negative sample image for each AU;
  • the positive/negative sample images of each AU are divided into a training set of a first ratio and a verification set of a second ratio;
  • if the accuracy rate is greater than or equal to the preset accuracy rate, the training ends; or, if the accuracy rate is less than the preset accuracy rate, the sample size is increased and the training steps are re-executed.
  • the training step of the predetermined AU classifier further includes:
  • the pre-processing operations are performed on the sample images in the first sample set, including: scaling, cropping, flipping, and/or warping.
  • the training step of the predetermined emotion classifier comprises:
  • if the accuracy rate is greater than or equal to the preset accuracy rate, the training ends; or, if the accuracy rate is less than the preset accuracy rate, the number of samples in the sample set is increased and the above training steps are re-executed.
  • the naive Bayes algorithm is a naive Bayes algorithm with a Gaussian prior (GaussianNB).
  • the face recognition algorithm may be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, or a neural network method.
  • a disk including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device) to perform the methods described in the various embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

Provided are a human-figure emotion analysis method, analysis program, electronic apparatus, and storage medium, said method comprising: obtaining a real-time image captured by a camera apparatus, and using a face recognition algorithm to extract a real-time face image from said real-time image (S10); inputting the real-time face image into a predetermined AU classifier to obtain the probability of each AU recognized from the real-time face image (S20); combining the probabilities of all AUs in the real-time face image into a feature vector of the real-time face image (S30); and inputting said feature vector into a predetermined emotion classifier to obtain the probability of each emotion recognized from the real-time face image, taking the emotion having the greatest probability as the emotion recognized from the real-time face image (S40). After the method identifies the AU features and probabilities in a real-time face image, the human-figure emotion in the real-time face image is recognized according to each AU feature and probability, improving the efficiency of human-figure emotion recognition.

Description

Character emotion analysis method, apparatus, and storage medium
Priority claim
This application claims, under the Paris Convention, the priority of the Chinese patent application No. CN201711126632.2, filed on November 15, 2017 and entitled "Character Emotion Analysis Method, Apparatus and Storage Medium", the entire content of which is incorporated herein by reference.
Technical field
The present application relates to the field of computer vision processing technologies, and in particular, to a character emotion analysis method, program, electronic device, and computer readable storage medium.
Background
Facial emotion recognition is an important part of research on human-computer interaction and affective computing, involving psychology, sociology, anthropology, life sciences, cognitive science, computer science, and other fields, and is of great significance for making human-computer interaction intelligent and harmonious.
The internationally renowned psychologist Paul Ekman and his research partner W. V. Friesen conducted in-depth research, using observation and biofeedback to map the correspondence between different facial muscle movements and different expressions. FACS is the "Facial Action Coding System" they created in 1976 after years of research. According to the anatomical features of the face, it divides the face into a number of motion units (action units, AUs) that are both independent and interrelated; the movement characteristics of these units and the main regions they control can reflect facial expressions.
At present, it is common to identify the AU features in a facial image and combine them to judge facial emotion. However, this method does not take the weight of each AU into account: for example, some people's eyebrows are naturally slightly raised, yet once that AU is detected it is directly counted as a member of the AU combination in the emotion judgment, which can produce emotion misjudgments and result in low emotion recognition accuracy.
Summary
The present application provides a character emotion analysis method, program, electronic device, and computer readable storage medium, the main purpose of which is to identify the AU features and their probabilities in a real-time facial image and, from each AU feature and probability, recognize the emotion of the person in the real-time facial image, effectively improving the efficiency of character emotion recognition.
To achieve the above objective, the present application provides an electronic device, including a memory, a processor, and a camera device, the memory storing a character emotion analysis program that, when executed by the processor, implements the following steps:
acquiring a real-time image captured by the camera device, and extracting a real-time facial image from the real-time image by using a face recognition algorithm;
inputting the real-time facial image into a predetermined AU classifier to obtain the probability of each AU recognized from the real-time facial image;
combining the probabilities of all AUs in the real-time facial image into a feature vector of the real-time facial image; and
inputting the feature vector into a predetermined emotion classifier to obtain the probability of each emotion recognized from the real-time facial image, and taking the emotion with the highest probability as the emotion recognized from the real-time facial image.
In addition, to achieve the above objective, the present application further provides a character emotion analysis method, the method comprising:
acquiring a real-time image captured by the camera device, and extracting a real-time facial image from the real-time image by using a face recognition algorithm;
inputting the real-time facial image into a predetermined AU classifier to obtain the probability of each AU recognized from the real-time facial image;
combining the probabilities of all AUs in the real-time facial image into a feature vector of the real-time facial image; and
inputting the feature vector into a predetermined emotion classifier to obtain the probability of each emotion recognized from the real-time facial image, and taking the emotion with the highest probability as the emotion recognized from the real-time facial image.
In addition, to achieve the above objective, the present application further provides a computer readable storage medium that includes a character emotion analysis program which, when executed by a processor, implements any of the steps of the character emotion analysis method described above.
In addition, to achieve the above objective, the present application further provides a character emotion analysis program, comprising an acquisition module, an AU recognition module, a feature extraction module, and an emotion recognition module; when the character emotion analysis program is executed by a processor, any of the steps of the character emotion analysis method described above is implemented.
In the character emotion analysis method, program, electronic device, and computer readable storage medium proposed by the present application, a real-time facial image is identified from a real-time image, each AU feature of the real-time facial image is extracted by an AU classifier, the probabilities of the AU features are combined into a feature vector, and the feature vector is input into an emotion classifier to recognize the probability of each emotion present in the real-time facial image; the emotion with the highest probability is taken as the emotion in the real-time image. By combining the AU classifier and the emotion classifier to recognize emotions in real-time facial images, the recognition efficiency of character emotions is effectively improved.
Drawings
FIG. 1 is a schematic diagram of the application environment of a preferred embodiment of the character emotion analysis method of the present application;
FIG. 2 is a block diagram of a preferred embodiment of the character emotion analysis program of FIG. 1;
FIG. 3 is a flow chart of a preferred embodiment of the character emotion analysis method of the present application.
The implementation, functional features, and advantages of the objectives of the present application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description
It should be understood that the specific embodiments described here are merely illustrative of the application and are not intended to limit it.
The present application provides a character emotion analysis method applied to an electronic device 1. Referring to FIG. 1, it is a schematic diagram of the application environment of a preferred embodiment of the character emotion analysis method of the present application.
In this embodiment, the electronic device 1 may be a terminal device having a computing function, such as a server, a smart phone, a tablet computer, a portable computer, or a desktop computer.
The electronic device 1 includes a processor 12, a memory 11, a camera device 13, a network interface 14, and a communication bus 15.
The memory 11 includes at least one type of readable storage medium. The at least one type of readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, or a card-type memory 11. In some embodiments, the readable storage medium may be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1. In other embodiments, the readable storage medium may also be an external memory 11 of the electronic device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the electronic device 1.
In this embodiment, the readable storage medium of the memory 11 is generally used to store the character emotion analysis program 10 installed on the electronic device 1, a face image sample library, the pre-trained AU classifier and emotion classifier, and the like. The memory 11 may also be used to temporarily store data that has been output or is about to be output.
The processor 12, in some embodiments, may be a Central Processing Unit (CPU), microprocessor, or other data processing chip for running the program code or processing the data stored in the memory 11, for example executing the character emotion analysis program 10.
The camera device 13 may be part of the electronic device 1 or may be independent of it. In some embodiments, the electronic device 1 is a terminal device having a camera, such as a smartphone, a tablet computer, or a portable computer, in which case the camera device 13 is the camera of the electronic device 1. In other embodiments, the electronic device 1 may be a server, with the camera device 13 independent of the electronic device 1 and connected to it via a network; for example, the camera device 13 may be installed in a specific place, such as an office or a monitored area, capture real-time images of targets entering that place, and transmit the captured real-time images to the processor 12 through the network.
The network interface 14 may optionally include a standard wired interface or a wireless interface (such as a WI-FI interface), and is typically used to establish a communication connection between the electronic device 1 and other electronic devices.
The communication bus 15 is used to implement connection communication between these components.
FIG. 1 shows only the electronic device 1 with components 11-15, but it should be understood that not all of the illustrated components need be implemented; more or fewer components may be implemented instead.
Optionally, the electronic device 1 may further include a user interface. The user interface may include an input unit such as a keyboard, a voice input device such as a microphone or another device with a voice recognition function, and a voice output device such as a speaker or headphones. Optionally, the user interface may also include a standard wired interface and a wireless interface.
Optionally, the electronic device 1 may further include a display, which may also be referred to as a display screen or a display unit. In some embodiments, it may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an Organic Light-Emitting Diode (OLED) touch display, or the like. The display is used to display information processed in the electronic device 1 and to display a visualized user interface.
Optionally, the electronic device 1 further includes a touch sensor. The area provided by the touch sensor for the user's touch operations is referred to as the touch area. The touch sensor described here may be a resistive touch sensor, a capacitive touch sensor, or the like, and includes not only contact-type touch sensors but also proximity-type touch sensors. Furthermore, the touch sensor may be a single sensor or a plurality of sensors arranged, for example, in an array.
In addition, the area of the display of the electronic device 1 may be the same as or different from the area of the touch sensor. Optionally, the display is stacked with the touch sensor to form a touch display screen, and the device detects user-triggered touch operations based on the touch display screen.
Optionally, the electronic device 1 may further include a radio frequency (RF) circuit, sensors, an audio circuit, and the like, which are not described in detail here.
In the apparatus embodiment shown in FIG. 1, the memory 11, as a computer storage medium, may include an operating system and the character emotion analysis program 10; when the processor 12 executes the character emotion analysis program 10 stored in the memory 11, the following steps are implemented:
acquiring a real-time image captured by the camera device 13, and extracting a real-time facial image from the real-time image by using a face recognition algorithm;
inputting the real-time facial image into a predetermined AU classifier to obtain the probability of each AU recognized from the real-time facial image;
combining the probabilities of all AUs in the real-time facial image into a feature vector of the real-time facial image; and
inputting the feature vector into a predetermined emotion classifier to obtain the probability of each emotion recognized from the real-time facial image, and taking the emotion with the highest probability as the emotion recognized from the real-time facial image.
According to the Facial Action Coding System (FACS) summarized by Paul Ekman, humans have 39 main facial action units (AUs). Each AU is a code for the contraction of a small group of facial muscles; for example, AU1 raises the inner corner of the eyebrow, AU2 raises the outer corner of the eyebrow, AU9 wrinkles the nose, and AU22 tightens the lips and turns them outward.
When the camera device 13 captures a real-time image, it sends the image to the processor 12. After receiving the real-time image, the processor 12 first obtains the size of the picture and creates a grayscale image of the same size; converts the acquired color image into a grayscale image while creating a memory space; equalizes the grayscale image's histogram to reduce the amount of grayscale image information and speed up detection; then loads a training library, detects the face in the picture, and returns an object containing face information; obtains the data describing the face's location and records the count; and finally obtains and saves the face region, completing one real-time facial image extraction.
Specifically, the face recognition algorithm for extracting the real-time facial image from the real-time image may also be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, or the like.
Next, the real-time facial image extracted from the real-time image by the face recognition algorithm is input into a predetermined AU classifier, which identifies each AU in the real-time facial image and the probability (in the range 0-1) that the image contains each AU, wherein the training steps of the predetermined AU classifier include:
In the sample preparation phase for each AU, a certain number of face images are prepared. From a large number of face images, those containing the AU are selected as positive sample images of that AU, and negative sample images are prepared for each AU; the positive and negative sample images of every AU form a first sample set. The image regions corresponding to different AUs may be the same; for example, AU1 and AU2 both involve the region of the face image containing the eyebrows, eyes, and forehead, while AU9 and AU22 involve the nose and lip regions. Any image region that does not contain the AU can serve as a negative sample image for that AU.
A first proportion (for example, 60%) of sample images is randomly drawn from each AU's positive/negative sample images as a training set, and a second proportion of the remaining sample images of that AU is drawn as a verification set, for example 50% of the remainder, i.e., 20% of all of the AU's sample images. A convolutional neural network (CNN) is trained on each AU's training set to obtain the AU classifier. To guarantee the AU classifier's accuracy, it must be verified: the trained AU classifier's accuracy is measured on the verification set, and if the accuracy rate is greater than or equal to a preset accuracy rate (for example, 90%), training ends; otherwise, the number of sample pictures in the sample set is increased and the above training steps are re-executed.
It should be noted that the training step of the predetermined AU classifier further includes preprocessing the sample images in the first sample set, such as scaling, cropping, flipping, and/or warping; training the convolutional neural network on the preprocessed sample pictures effectively improves the authenticity and accuracy of the model training.
Suppose that, in the output of the AU classifier for the real-time facial image, the probabilities of each AU (for example, 39 of them) being recognized are P1, P2, P3, ..., P39. The probabilities of all AUs are combined into a feature vector V1 = [P1, P2, P3, ..., P39], which serves as the feature vector of the real-time facial image.
The feature vector V1 is then input into a predetermined emotion classifier to identify each emotion that may be present in the real-time facial image, together with the probability of each emotion (i.e., the likelihood that it is present). The training steps of the predetermined emotion classifier include:
Reusing the sample images of the first sample set, each sample image is input into the AU classifier to obtain the probability of each AU being recognized in it, and those per-AU probabilities are combined into one feature vector per sample image. At the same time, each sample image is classified and labeled according to the emotion it presents, i.e., assigned an emotion label (for example, "happy"), yielding a second sample set containing the feature vectors and emotion labels. From the second sample set, a first proportion (for example, 60%) of sample images is randomly drawn as a training set, and a second proportion of the remaining images, e.g., 50% (i.e., 20% of the second sample set), is drawn as a verification set. The training set is trained with the naive Bayes algorithm to obtain the emotion classifier. To guarantee the emotion classifier's accuracy, it must be verified: the trained emotion classifier's accuracy is measured on the verification set, and if the accuracy rate is greater than or equal to a preset accuracy rate (for example, 90%), training ends; otherwise, the number of sample pictures in the sample set is increased and the above training steps are re-executed.
It should be noted that there are three naive Bayes variants: GaussianNB, MultinomialNB, and BernoulliNB. GaussianNB is naive Bayes with a Gaussian prior, MultinomialNB is naive Bayes with a multinomial prior, and BernoulliNB is naive Bayes with a Bernoulli prior. The three suit different classification scenarios: MultinomialNB and BernoulliNB are used for discrete-valued models, whereas GaussianNB handles continuous-valued features; since the features here are continuous AU probabilities, GaussianNB is chosen.
GaussianNB assumes that the class-conditional probability of each feature follows a normal (Gaussian) distribution, which gives the following formula:
$$P(X_j = x_j \mid Y = C_k) = \frac{1}{\sqrt{2\pi\sigma_k^2}}\exp\left(-\frac{(x_j - \mu_k)^2}{2\sigma_k^2}\right)$$

where $C_k$ is the $k$-th class of $Y$, and $\mu_k$ and $\sigma_k^2$ are values that need to be estimated from the training set. GaussianNB computes $\mu_k$ and $\sigma_k^2$ from the training set: $\mu_k$ is the mean of all $X_j$ in the samples of class $C_k$, and $\sigma_k^2$ is the variance of all $X_j$ in the samples of class $C_k$.
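To connect the formula with the training set, the estimation and evaluation can be sketched in plain NumPy; all names and the data layout are assumptions made for illustration:

```python
# Illustrative NumPy sketch of the GaussianNB quantities above.
# X is an (n_samples, n_features) matrix of AU probabilities and
# y an array of class labels; both names are assumptions.
import numpy as np

def gaussian_params(X, y, k):
    """Estimate the mean and variance of each feature X_j within class C_k."""
    Xk = X[y == k]
    return Xk.mean(axis=0), Xk.var(axis=0)

def gaussian_likelihood(x, mu, var):
    """Per-feature P(X_j = x_j | Y = C_k), matching the formula above."""
    return np.exp(-(x - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
```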
Suppose that after the feature vector V1 of the real-time facial image is input into the emotion classifier, several emotions are recognized from the real-time facial image, each with a different recognition probability (value range: 0-1), for example, happy: 0.6, surprised: 0.3, sad: 0.1.

In the above output of the emotion classifier, the probability of recognizing "happy" from the real-time facial image is the largest, so the emotion recognized from the real-time facial image is determined to be "happy".
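This selection is a single maximization step; for example, in Python (the emotion names and probabilities follow the example above and are purely illustrative):

```python
# Take the emotion with the largest probability as the recognized emotion.
emotion_probs = {"happy": 0.6, "surprised": 0.3, "sad": 0.1}
recognized_emotion = max(emotion_probs, key=emotion_probs.get)  # -> "happy"
```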
It should be noted that the parameters requiring presetting described in the above embodiments, such as the first proportion, the second proportion, and the preset accuracy, can be adjusted according to user requirements.
The electronic device 1 proposed in the above embodiment recognizes a real-time facial image from a real-time image, extracts each AU feature of the real-time facial image through the AU classifier, combines the probabilities of the AU features into a feature vector, inputs the feature vector into the emotion classifier to identify the probability of each emotion present in the real-time facial image, and takes the emotion with the largest probability as the emotion in the real-time image. By combining the AU classifier and the emotion classifier to recognize emotions in real-time facial images, the efficiency of character emotion recognition is effectively improved.
Specifically, the character emotion analysis program 10 may be divided into one or more modules, which are stored in the memory 11 and executed by the processor 12 to complete the present application. A module referred to in the present application is a series of computer program instruction segments capable of performing a particular function. Referring to FIG. 2, which is a functional block diagram of a preferred embodiment of the character emotion analysis program 10 of FIG. 1, the character emotion analysis program 10 may be divided into: an acquisition module 110, an AU recognition module 120, a feature extraction module 130, and an emotion recognition module 140. The functions or operational steps implemented by the modules 110-140 are similar to those described above and are not detailed again here; exemplarily:
The acquisition module 110 is configured to acquire a real-time image captured by the camera device 13 and to extract a real-time facial image from the real-time image by using a face recognition algorithm;

the AU recognition module 120 is configured to input the real-time facial image into the predetermined AU classifier to obtain the probability of each AU recognized from the real-time facial image;

the feature extraction module 130 is configured to assemble the probabilities of all AUs in the real-time facial image into the feature vector of the real-time facial image; and

the emotion recognition module 140 is configured to input the feature vector into the predetermined emotion classifier to obtain the probability of recognizing each emotion from the real-time face image, and to take the emotion with the largest probability as the emotion recognized from the real-time face image.
In addition, the present application also provides a character emotion analysis method. Referring to FIG. 3, which is a flowchart of a preferred embodiment of the character emotion analysis method of the present application, the method may be performed by an apparatus, and the apparatus may be implemented by software and/or hardware.

In this embodiment, the character emotion analysis method includes steps S10 to S40.

Step S10: acquire a real-time image captured by the camera device, and extract a real-time facial image from the real-time image by using a face recognition algorithm.
When the camera device captures a real-time image, it sends the image to the processor. After receiving the real-time image, the processor first obtains the size of the picture and creates a grayscale image of the same size; converts the acquired color image into that grayscale image while allocating a memory space; equalizes the histogram of the grayscale image to reduce the amount of grayscale image information and speed up detection; then loads the training library, detects the face in the picture, and returns an object containing face information; obtains the data of the position of the face and records the count; and finally obtains the face region and saves it, thereby completing one round of real-time facial image extraction.
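These operations correspond closely to a classical cascade-based detection pipeline; the following is a hedged sketch assuming Python with OpenCV, where the Haar-cascade file and all names are assumptions, not specified by the application:

```python
# Hedged sketch of the described pipeline, assuming Python with OpenCV.
# The Haar-cascade file and all names are illustrative assumptions.
import cv2

def extract_face(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)  # grayscale image
    gray = cv2.equalizeHist(gray)                       # histogram equalization
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"{len(faces)} face(s) detected")             # record the count
    for (x, y, w, h) in faces:
        return frame_bgr[y:y + h, x:x + w]              # save the face region
    return None
```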
Specifically, the face recognition algorithm used to extract the real-time facial image from the real-time image may also be: a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, and so on.
Step S20: input the real-time facial image into the predetermined AU classifier to obtain the probability of each AU recognized from the real-time facial image.

Next, the real-time facial image extracted from the real-time image by the face recognition algorithm is input into the predetermined AU classifier, which recognizes each AU from the real-time facial image as well as the probability (value range: 0-1) that the real-time facial image contains each AU. The training step of the predetermined AU classifier includes:
In the sample preparation stage for each AU, a certain number of face images are prepared; from the large set of face images, the face images containing the AU are found and used as positive sample images of that AU, and negative sample images are prepared for each AU, yielding the positive and negative sample images of every AU, which form the first sample set. The image regions corresponding to different AUs may be the same; for example, AU1 and AU2 both involve the region of a face image containing the eyebrows, eyes, and forehead, while AU9 and AU22 involve the nose and lip regions. Any image region that does not contain the AU can serve as a negative sample image of that AU.
A first proportion (for example, 60%) of sample images is randomly drawn from the positive/negative sample images of each AU as a training set, and a second proportion of the remaining sample images of that AU is drawn as a verification set, for example 50%, that is, 20% of all sample images of the AU are taken as the verification set. The convolutional neural network is trained with the training set of each AU to obtain the AU classifier. To guarantee the accuracy of the AU classifier, its accuracy must be verified: the verification set is used to verify the accuracy of the trained AU classifier; if the accuracy is greater than or equal to a preset accuracy (for example, 90%), training ends, or, if the accuracy is less than the preset accuracy, the number of sample images in the sample set is increased and the above training steps are re-executed.
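As one possible realization, a per-AU binary convolutional network could be sketched as follows, assuming Python with Keras; the architecture, input size, and hyperparameters are assumptions, since the application does not fix them:

```python
# Minimal per-AU binary CNN sketch, assuming Python with Keras.
# Architecture, 64x64 grayscale input, and hyperparameters are
# assumptions; one such model is trained per AU.
from tensorflow import keras
from tensorflow.keras import layers

def build_au_classifier():
    model = keras.Sequential([
        keras.Input(shape=(64, 64, 1)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),  # probability that the AU is present
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```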
It should be noted that the training step of the predetermined AU classifier further includes: performing preprocessing operations, such as scaling, cropping, flipping, and/or distortion, on the sample images in the first sample set, and training the convolutional neural network with the preprocessed sample images, which effectively improves the realism of the training data and the accuracy of model training.
Step S30: assemble the probabilities of all AUs in the real-time facial image into the feature vector of the real-time facial image.

Suppose that, in the results output after the AU classifier detects the real-time facial image, the probabilities of recognizing each AU (for example, 39 AUs) from the real-time facial image are P1, P2, P3, ..., P39 respectively. The probabilities of all the AUs are assembled into a feature vector V1 = [P1, P2, P3, ..., P39], which serves as the feature vector of the real-time facial image.
Step S40: input the feature vector into the predetermined emotion classifier, obtain the probability of recognizing each emotion from the real-time face image, and take the emotion with the largest probability as the emotion recognized from the real-time face image.

The feature vector V1 is input into the predetermined emotion classifier to identify each emotion possibly present in the real-time face image, together with the probability of each emotion (i.e., the likelihood that it is present). The training step of the predetermined emotion classifier includes:
In this embodiment, the sample images of the first sample set described above are reused: each sample image is input into the AU classifier to obtain the probability of recognizing each AU from that image, and the per-AU probabilities of each sample image are assembled into a feature vector. At the same time, each sample image is labeled according to the emotion it presents, that is, assigned an emotion label (for example, "happy"), yielding a second sample set containing feature vectors and emotion labels. A first proportion (for example, 60%) of the sample images is randomly drawn from the second sample set as a training set, and a second proportion of the remaining sample images is drawn as a verification set, for example 50%, that is, 20% of the second sample set is taken as the verification set. The training set is trained with the naive Bayes algorithm to obtain the emotion classifier. To guarantee the accuracy of the emotion classifier, its accuracy must be verified: the verification set is used to verify the accuracy of the trained emotion classifier; if the accuracy is greater than or equal to a preset accuracy (for example, 90%), training ends, or, if the accuracy is less than the preset accuracy, the number of sample images in the sample set is increased and the above training steps are re-executed.
It should be noted that the naive Bayes algorithm comes in three variants: GaussianNB, MultinomialNB, and BernoulliNB. GaussianNB is naive Bayes with a Gaussian prior, MultinomialNB is naive Bayes with a multinomial prior, and BernoulliNB is naive Bayes with a Bernoulli prior. The three variants suit different classification scenarios: GaussianNB suits continuous-valued features, while MultinomialNB and BernoulliNB suit discrete-valued features. Since the AU probabilities that make up the feature vectors are continuous values, GaussianNB is chosen here.
GaussianNB assumes that the class-conditional probability of each feature follows a normal (Gaussian) distribution, which gives the following formula:
$$P(X_j = x_j \mid Y = C_k) = \frac{1}{\sqrt{2\pi\sigma_k^2}}\exp\left(-\frac{(x_j - \mu_k)^2}{2\sigma_k^2}\right)$$

where $C_k$ is the $k$-th class of $Y$, and $\mu_k$ and $\sigma_k^2$ are values that need to be estimated from the training set. GaussianNB computes $\mu_k$ and $\sigma_k^2$ from the training set: $\mu_k$ is the mean of all $X_j$ in the samples of class $C_k$, and $\sigma_k^2$ is the variance of all $X_j$ in the samples of class $C_k$.
Suppose that after the feature vector V1 of the real-time facial image is input into the emotion classifier, several emotions are recognized from the real-time facial image, each with a different recognition probability (value range: 0-1), for example, happy: 0.6, surprised: 0.3, sad: 0.1.

In the above output of the emotion classifier, the probability of recognizing "happy" from the real-time facial image is the largest, so the emotion recognized from the real-time facial image is determined to be "happy".
It should be noted that the parameters requiring presetting described in the above embodiments, such as the first proportion, the second proportion, and the preset accuracy, can be adjusted according to user requirements. The character emotion analysis method proposed in the above embodiment recognizes a real-time facial image from a real-time image, extracts each AU feature of the real-time facial image through the AU classifier, combines the probabilities of the AU features into a feature vector, inputs the feature vector into the emotion classifier to identify the probability of each emotion present in the real-time facial image, and takes the emotion with the largest probability as the emotion in the real-time image. By combining the AU classifier and the emotion classifier to recognize emotions in real-time facial images, the efficiency of character emotion recognition is effectively improved.
In addition, an embodiment of the present application further provides a computer-readable storage medium. The computer-readable storage medium includes a character emotion analysis program, and when the character emotion analysis program is executed by a processor, the following operations are implemented:

acquiring a real-time image captured by a camera device, and extracting a real-time facial image from the real-time image by using a face recognition algorithm;

inputting the real-time facial image into a predetermined AU classifier to obtain the probability of each AU recognized from the real-time facial image;

assembling the probabilities of all AUs in the real-time facial image into a feature vector of the real-time facial image; and

inputting the feature vector into a predetermined emotion classifier to obtain the probability of recognizing each emotion from the real-time face image, and taking the emotion with the largest probability as the emotion recognized from the real-time face image.
Preferably, the training step of the predetermined AU classifier includes:

preparing a first sample set containing a certain number of face sample images, cutting out from the face sample images the image region matching each AU as a positive sample image of that AU, and preparing negative sample images for each AU;

dividing the positive/negative sample images of each AU into a training set of a first proportion and a verification set of a second proportion;

training a convolutional neural network with the training set to obtain the AU classifier; and

verifying the accuracy of the trained AU classifier with the verification set; if the accuracy is greater than or equal to a preset accuracy, the training ends, or, if the accuracy is less than the preset accuracy, the number of samples is increased and the training steps are re-executed.

Preferably, the training step of the predetermined AU classifier further includes:

performing preprocessing operations on the sample images in the first sample set, including scaling, cropping, flipping, and/or distortion.
Preferably, the training step of the predetermined emotion classifier includes:

assigning an emotion label to each sample image according to the emotion of each sample image in the first sample set, to obtain a second sample set containing feature vectors and emotion labels;

dividing the sample images in the second sample set into a training set of a first proportion and a verification set of a second proportion;

training the training set with a naive Bayes algorithm to obtain the emotion classifier; and

verifying the accuracy of the trained emotion classifier with the verification set; if the accuracy is greater than or equal to a preset accuracy, the training ends, or, if the accuracy is less than the preset accuracy, the number of samples in the sample set is increased and the above training steps are re-executed.

Preferably, the naive Bayes algorithm is a naive Bayes algorithm with a Gaussian prior.

Preferably, the face recognition algorithm may be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, or a neural network method.
The specific implementation of the computer-readable storage medium of the present application is substantially the same as the specific implementations of the character emotion analysis method and the electronic device described above, and is not repeated here.
It should be noted that, in this document, the terms "comprise" and "include", or any other variant thereof, are intended to cover non-exclusive inclusion, so that a process, apparatus, article, or method that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of additional identical elements in the process, apparatus, article, or method that includes that element.

The serial numbers of the above embodiments of the present application are for description only and do not represent the relative merits of the embodiments. Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product stored in a storage medium as described above (such as a ROM/RAM, magnetic disk, or optical disc), including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.

The above are only preferred embodiments of the present application and are not intended to limit the scope of the patent of the present application. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present application, or any direct or indirect application in other related technical fields, is likewise included within the scope of patent protection of the present application.

Claims (20)

  1. A character emotion analysis method applied to an electronic device, wherein the method comprises:
    acquiring a real-time image captured by a camera device, and extracting a real-time facial image from the real-time image by using a face recognition algorithm;
    inputting the real-time facial image into a predetermined AU classifier to obtain the probability of each AU recognized from the real-time facial image;
    assembling the probabilities of all AUs in the real-time facial image into a feature vector of the real-time facial image; and
    inputting the feature vector into a predetermined emotion classifier to obtain the probability of recognizing each emotion from the real-time face image, and taking the emotion with the largest probability as the emotion recognized from the real-time face image.
  2. The character emotion analysis method according to claim 1, wherein the training step of the predetermined AU classifier comprises:
    preparing a first sample set containing a certain number of face sample images, cutting out from the face sample images the image region matching each AU as a positive sample image of that AU, and preparing negative sample images for each AU;
    dividing the positive/negative sample images of each AU into a training set of a first proportion and a verification set of a second proportion;
    training a convolutional neural network with the training set to obtain the AU classifier; and
    verifying the accuracy of the trained AU classifier with the verification set; if the accuracy is greater than or equal to a preset accuracy, the training ends, or, if the accuracy is less than the preset accuracy, the number of samples is increased and the training steps are re-executed.
  3. The character emotion analysis method according to claim 2, wherein the training step of the predetermined AU classifier further comprises:
    performing preprocessing operations on the sample images in the first sample set, including scaling, cropping, flipping, and/or distortion.
  4. The character emotion analysis method according to claim 1 or 3, wherein the training step of the predetermined emotion classifier comprises:
    assigning an emotion label to each sample image according to the emotion of each sample image in the first sample set, to obtain a second sample set containing feature vectors and emotion labels;
    dividing the sample images in the second sample set into a training set of a first proportion and a verification set of a second proportion;
    training the training set with a naive Bayes algorithm to obtain the emotion classifier; and
    verifying the accuracy of the trained emotion classifier with the verification set; if the accuracy is greater than or equal to a preset accuracy, the training ends, or, if the accuracy is less than the preset accuracy, the number of samples in the sample set is increased and the above training steps are re-executed.
  5. The character emotion analysis method according to claim 4, wherein the naive Bayes algorithm is a naive Bayes algorithm with a Gaussian prior.
  6. The character emotion analysis method according to claim 1, wherein the face recognition algorithm may be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, or a neural network method.
  7. An electronic device, comprising a memory, a processor, and a camera device, wherein the memory includes a character emotion analysis program, and when the character emotion analysis program is executed by the processor, the following steps are implemented:
    acquiring a real-time image captured by the camera device, and extracting a real-time facial image from the real-time image by using a face recognition algorithm;
    inputting the real-time facial image into a predetermined AU classifier to obtain the probability of each AU recognized from the real-time facial image;
    assembling the probabilities of all AUs in the real-time facial image into a feature vector of the real-time facial image; and
    inputting the feature vector into a predetermined emotion classifier to obtain the probability of recognizing each emotion from the real-time face image, and taking the emotion with the largest probability as the emotion recognized from the real-time face image.
  8. The electronic device according to claim 7, wherein the training step of the predetermined AU classifier comprises:
    preparing a first sample set containing a certain number of face sample images, cutting out from the face sample images the image region matching each AU as a positive sample image of that AU, and preparing negative sample images for each AU;
    dividing the positive/negative sample images of each AU into a training set of a first proportion and a verification set of a second proportion;
    training a convolutional neural network with the training set to obtain the AU classifier; and
    verifying the accuracy of the trained AU classifier with the verification set; if the accuracy is greater than or equal to a preset accuracy, the training ends, or, if the accuracy is less than the preset accuracy, the number of samples is increased and the training steps are re-executed.
  9. The electronic device according to claim 8, wherein the training step of the predetermined AU classifier further comprises:
    performing preprocessing operations on the sample images in the first sample set, including scaling, cropping, flipping, and/or distortion.
  10. The electronic device according to claim 7, wherein the training step of the predetermined emotion classifier comprises:
    assigning an emotion label to each sample image according to the emotion of each sample image in the first sample set, to obtain a second sample set containing feature vectors and emotion labels;
    dividing the sample images in the second sample set into a training set of a first proportion and a verification set of a second proportion;
    training the training set with a naive Bayes algorithm to obtain the emotion classifier; and
    verifying the accuracy of the trained emotion classifier with the verification set; if the accuracy is greater than or equal to a preset accuracy, the training ends, or, if the accuracy is less than the preset accuracy, the number of samples in the sample set is increased and the above training steps are re-executed.
  11. The electronic device according to claim 10, wherein the naive Bayes algorithm is a naive Bayes algorithm with a Gaussian prior.
  12. The electronic device according to claim 7, wherein the face recognition algorithm may be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, or a neural network method.
  13. A computer-readable storage medium, wherein the computer-readable storage medium includes a character emotion analysis program, and when the character emotion analysis program is executed by a processor, the following steps are implemented:
    acquiring a real-time image captured by a camera device, and extracting a real-time facial image from the real-time image by using a face recognition algorithm;
    inputting the real-time facial image into a predetermined AU classifier to obtain the probability of each AU recognized from the real-time facial image;
    assembling the probabilities of all AUs in the real-time facial image into a feature vector of the real-time facial image; and
    inputting the feature vector into a predetermined emotion classifier to obtain the probability of recognizing each emotion from the real-time face image, and taking the emotion with the largest probability as the emotion recognized from the real-time face image.
  14. The computer-readable storage medium according to claim 13, wherein the training step of the predetermined AU classifier comprises:
    preparing a first sample set containing a certain number of face sample images, cutting out from the face sample images the image region matching each AU as a positive sample image of that AU, and preparing negative sample images for each AU;
    dividing the positive/negative sample images of each AU into a training set of a first proportion and a verification set of a second proportion;
    training a convolutional neural network with the training set to obtain the AU classifier; and
    verifying the accuracy of the trained AU classifier with the verification set; if the accuracy is greater than or equal to a preset accuracy, the training ends, or, if the accuracy is less than the preset accuracy, the number of samples is increased and the training steps are re-executed.
  15. The computer-readable storage medium according to claim 14, wherein the training step of the predetermined AU classifier further comprises:
    performing preprocessing operations on the sample images in the first sample set, including scaling, cropping, flipping, and/or distortion.
  16. The computer-readable storage medium according to claim 13, wherein the training step of the predetermined emotion classifier comprises:
    assigning an emotion label to each sample image according to the emotion of each sample image in the first sample set, to obtain a second sample set containing feature vectors and emotion labels;
    dividing the sample images in the second sample set into a training set of a first proportion and a verification set of a second proportion;
    training the training set with a naive Bayes algorithm to obtain the emotion classifier; and
    verifying the accuracy of the trained emotion classifier with the verification set; if the accuracy is greater than or equal to a preset accuracy, the training ends, or, if the accuracy is less than the preset accuracy, the number of samples in the sample set is increased and the above training steps are re-executed.
  17. The computer-readable storage medium according to claim 16, wherein the naive Bayes algorithm is a naive Bayes algorithm with a Gaussian prior.
  18. The computer-readable storage medium according to claim 13, wherein the face recognition algorithm may be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, or a neural network method.
  19. A character emotion analysis program, wherein the program comprises: an acquisition module, an AU recognition module, a feature extraction module, and an emotion recognition module.
  20. The character emotion analysis program according to claim 19, wherein, when the character emotion analysis program is executed by a processor, the steps of the character emotion analysis method according to any one of claims 1-6 can be implemented.
PCT/CN2018/076168 2017-11-15 2018-02-10 Human-figure emotion analysis method, apparatus, and storage medium WO2019095571A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711126632.2A CN107862292B (en) 2017-11-15 2017-11-15 Personage's mood analysis method, device and storage medium
CN201711126632.2 2017-11-15

Publications (1)

Publication Number Publication Date
WO2019095571A1 (en)

Family

ID=61701889

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/076168 WO2019095571A1 (en) 2017-11-15 2018-02-10 Human-figure emotion analysis method, apparatus, and storage medium

Country Status (2)

Country Link
CN (1) CN107862292B (en)
WO (1) WO2019095571A1 (en)

Also Published As

Publication number Publication date
CN107862292B (en) 2019-04-12
CN107862292A (en) 2018-03-30

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18878905

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC DATED 14-08-2020

122 Ep: pct application non-entry in european phase

Ref document number: 18878905

Country of ref document: EP

Kind code of ref document: A1