WO2019095571A1 - Character emotion analysis method, apparatus, and storage medium - Google Patents

Character emotion analysis method, apparatus, and storage medium

Info

Publication number
WO2019095571A1
WO2019095571A1 (PCT/CN2018/076168)
Authority
WO
WIPO (PCT)
Prior art keywords
emotion
real
image
sample
classifier
Prior art date
Application number
PCT/CN2018/076168
Other languages
English (en)
Chinese (zh)
Inventor
陈林
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2019095571A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155 Bayesian classification

Definitions

  • the present application relates to the field of computer vision processing technologies, and in particular, to a character emotion analysis method, a program, an electronic device, and a computer readable storage medium.
  • Face emotion recognition is an important part of human-computer interaction and affective computing research, involving psychology, sociology, anthropology, life sciences, cognitive science, computer science, and other research fields; it is of great significance for making human-computer interaction more intelligent.
  • FACS (Facial Action Coding System) is a facial expression coding system created in 1976 after years of research. According to the anatomical features of the face, it divides the face into a number of independent yet interrelated action units (AUs); the motion characteristics of these units and the main facial regions they control can reflect facial expressions.
  • The present application provides a character emotion analysis method, a program, an electronic device, and a computer readable storage medium, the main purpose of which is to identify the AU features and their probabilities in a real-time face image and, according to each AU feature and probability, recognize the emotion of the person in the real-time face image, thereby effectively improving the efficiency of character emotion recognition.
  • The present application provides an electronic device, including a memory, a processor, and an imaging device, wherein the memory includes a character emotion analysis program which, when executed by the processor, implements the following steps: acquiring a real-time image captured by the imaging device and extracting a real-time facial image from the real-time image by using a face recognition algorithm; inputting the real-time facial image into a predetermined AU classifier to obtain the probability of each AU identified from the real-time facial image; composing the probabilities of all AUs in the real-time facial image into a feature vector of the real-time facial image; and inputting the feature vector into a predetermined emotion classifier to obtain the probability of identifying each emotion from the real-time facial image, the emotion with the highest probability being taken as the emotion recognized from the real-time facial image.
  • the present application further provides a method for character emotion analysis, the method comprising:
  • the feature vector is input to a predetermined emotion classifier to obtain a probability of identifying each emotion from the real-time face image, and the emotion with the highest probability is taken as the emotion recognized from the real-time face image.
  • The present application further provides a computer readable storage medium including a character emotion analysis program which, when executed by a processor, implements any of the steps in the character emotion analysis method described above.
  • The present application further provides a character emotion analysis program, which includes an acquisition module, an AU recognition module, a feature extraction module, and an emotion recognition module; when the character emotion analysis program is executed by a processor, any of the steps in the character emotion analysis method described above is implemented.
  • The character emotion analysis method, program, electronic device, and computer readable storage medium proposed by the present application identify a real-time facial image from a real-time image, extract each AU feature in the real-time facial image through an AU classifier, combine the probabilities of the AU features into a feature vector, and input the feature vector into the emotion classifier to recognize the probability of each emotion present in the real-time facial image; the emotion with the highest probability is taken as the emotion in the real-time image.
  • FIG. 1 is a schematic diagram of an application environment of a preferred embodiment of the character emotion analysis method of the present application;
  • FIG. 2 is a block diagram showing a preferred embodiment of the character emotion analysis program of FIG. 1;
  • FIG. 3 is a flow chart of a preferred embodiment of the character emotion analysis method of the present application.
  • the present application provides a character emotion analysis method applied to an electronic device 1.
  • Referring to FIG. 1, it is a schematic diagram of an application environment of a preferred embodiment of the character emotion analysis method of the present application.
  • the electronic device 1 may be a terminal device having a computing function, such as a server, a smart phone, a tablet computer, a portable computer, or a desktop computer.
  • the electronic device 1 includes a processor 12, a memory 11, an imaging device 13, a network interface 14, and a communication bus 15.
  • the memory 11 includes at least one type of readable storage medium.
  • the at least one type of readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, a card type memory 11, or the like.
  • the readable storage medium may be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1.
  • The readable storage medium may also be an external memory 11 of the electronic device 1, such as a plug-in hard disk equipped on the electronic device 1, a smart memory card (SMC), a Secure Digital (SD) card, a flash card, etc.
  • The readable storage medium of the memory 11 is generally used to store the character emotion analysis program 10 installed on the electronic device 1, a face image sample library, a pre-trained AU classifier and emotion classifier, and the like.
  • the memory 11 can also be used to temporarily store data that has been output or is about to be output.
  • The processor 12, in some embodiments, may be a Central Processing Unit (CPU), microprocessor, or other data processing chip for running program code or processing data stored in the memory 11, for example, executing the character emotion analysis program 10.
  • the imaging device 13 may be part of the electronic device 1 or may be independent of the electronic device 1.
  • When the electronic device 1 is a terminal device having a camera, such as a smartphone, a tablet computer, or a portable computer, the imaging device 13 is the camera of the electronic device 1.
  • The electronic device 1 may also be a server, and the imaging device 13 is connected to the electronic device 1 via a network; for example, the imaging device 13 is installed in a specific place, such as an office or a monitored area, captures real-time images in real time of targets entering the specific place, and transmits the captured real-time images to the processor 12 through the network.
  • the network interface 14 can optionally include a standard wired interface, a wireless interface (such as a WI-FI interface), and is typically used to establish a communication connection between the electronic device 1 and other electronic devices.
  • Communication bus 15 is used to implement connection communication between these components.
  • Figure 1 shows only the electronic device 1 with components 11-15, but it should be understood that not all illustrated components may be implemented, and more or fewer components may be implemented instead.
  • the electronic device 1 may further include a user interface
  • The user interface may include an input unit such as a keyboard, a voice input device such as a microphone or a device with a voice recognition function, and a voice output device such as a speaker or headphones.
  • the user interface may also include a standard wired interface and a wireless interface.
  • the electronic device 1 may further include a display, which may also be referred to as a display screen or a display unit.
  • It may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an Organic Light-Emitting Diode (OLED) touch screen, or the like.
  • the display is used to display information processed in the electronic device 1 and a user interface for displaying visualizations.
  • the electronic device 1 further comprises a touch sensor.
  • the area provided by the touch sensor for the user to perform a touch operation is referred to as a touch area.
  • the touch sensor described herein may be a resistive touch sensor, a capacitive touch sensor, or the like.
  • the touch sensor includes not only a contact type touch sensor but also a proximity type touch sensor or the like.
  • the touch sensor may be a single sensor or a plurality of sensors arranged, for example, in an array.
  • the area of the display of the electronic device 1 may be the same as or different from the area of the touch sensor.
  • A display may be stacked with the touch sensor to form a touch display; the device detects a user-triggered touch operation based on the touch display.
  • the electronic device 1 may further include a radio frequency (RF) circuit, a sensor, an audio circuit, and the like, and details are not described herein.
  • An operating system and a character emotion analysis program 10 may be included in the memory 11 as a computer storage medium; when the processor 12 executes the character emotion analysis program 10 stored in the memory 11, the following steps are implemented:
  • the feature vector is input to a predetermined emotion classifier to obtain a probability of identifying each emotion from the real-time face image, and the emotion with the highest probability is taken as the emotion recognized from the real-time face image.
  • AUs (action units) encode the contractions of small groups of facial muscles; for example, AU1 raises the inner corner of the eyebrow, AU2 raises the outer corner of the eyebrow, AU9 wrinkles the nose, and AU22 tightens the lips and turns them outwards.
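Purely as an illustration (the patent itself only names these few AUs in prose), such AU codes can be held in a small lookup table; the Python dict below uses hypothetical identifiers and covers only the four AUs mentioned above.

```python
# Hypothetical lookup table for the AUs named in the text; the full FACS set is larger.
AU_DESCRIPTIONS = {
    "AU1": "raises the inner corner of the eyebrow",
    "AU2": "raises the outer corner of the eyebrow",
    "AU9": "wrinkles the nose",
    "AU22": "tightens the lips and turns them outwards",
}
```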
  • When the imaging device 13 captures a real-time image, the imaging device 13 transmits the real-time image to the processor 12.
  • After the processor 12 receives the real-time image, it first creates a grayscale image of the same size: the acquired color image is converted into a grayscale image and a memory space is created for it; the grayscale image histogram is equalized to reduce the amount of grayscale image information and speed up detection; a training library is then loaded to detect the face in the image and return an object containing face information, from which the position data of the face is obtained and the number of faces is recorded; finally, the face region is obtained and saved, which completes one real-time facial image extraction (a sketch of this pipeline is given below).
  • In other embodiments, the face recognition algorithm used to extract the real-time facial image from the real-time image may also be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, or the like.
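The following is a minimal sketch of the extraction pipeline described above, assuming OpenCV and a Haar cascade as the "training library"; the patent does not prescribe a particular detector, so the cascade file and parameters are illustrative.

```python
import cv2

def extract_face(frame):
    """Sketch of the real-time facial image extraction described above:
    grayscale conversion, histogram equalization, cascade-based face
    detection, and cropping of the face region."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)     # grayscale image of the same size
    gray = cv2.equalizeHist(gray)                      # equalize the histogram to speed up detection
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                    # no face detected in this frame
    x, y, w, h = faces[0]                              # position data of the first detected face
    return frame[y:y + h, x:x + w]                     # the saved real-time facial image
```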
  • The real-time facial image extracted from the real-time image using the face recognition algorithm is input into a predetermined AU classifier, which identifies each AU in the real-time facial image and outputs, for each AU, the probability (in the range 0-1) that the real-time facial image contains that AU. The training steps of the predetermined AU classifier include:
  • In the sample preparation phase, a certain number of face images are prepared for each AU: from a large number of face images, the face images containing a given AU are selected as positive sample images of that AU, and a negative sample image is prepared for each AU; the positive and negative sample images of each AU form a first sample set. The image regions corresponding to different AUs may be the same; for example, AU1 and AU2 relate to the region of the face image containing the eyebrows, eyes, and forehead, while AU9 and AU22 relate to the nose and lip regions of the face image. Image regions that do not contain a given AU can be used as negative sample images of that AU.
  • A first proportion (for example, 60%) of sample images is randomly extracted from the positive/negative sample images of each AU as a training set, and a second proportion of the remaining sample images of that AU is extracted as a verification set (for example, 50% of the remainder, i.e. 20% of all sample images of the AU). A convolutional neural network (CNN) is trained with the training set of each AU to obtain the AU classifier. To ensure the accuracy of the AU classifier, its accuracy needs to be verified: the accuracy of the trained AU classifier is verified with the verification set, and if the accuracy is greater than or equal to a preset accuracy (for example, 90%), the training ends; if the accuracy is less than the preset accuracy, the number of sample images in the sample set is increased and the above training steps are re-executed.
  • In other embodiments, the training step of the predetermined AU classifier further includes: performing preprocessing such as scaling, cropping, flipping, and/or warping on the sample images in the first sample set, and using the preprocessed sample images to train the convolutional neural network. A sketch of such a per-AU training loop follows.
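A minimal training sketch for one binary AU classifier, assuming Keras; the network architecture, input size, and training schedule are not specified in the patent, so the values below are illustrative only.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def train_au_classifier(images, labels, preset_accuracy=0.9):
    """Train one AU classifier on that AU's positive/negative samples and verify it.
    `images` is assumed to be a float array of shape (N, 64, 64, 1) in [0, 1] and
    `labels` a 0/1 array marking whether the AU is present."""
    n = len(images)
    idx = np.random.permutation(n)
    train_idx = idx[:int(0.6 * n)]                 # 60% training set
    val_idx = idx[int(0.6 * n):int(0.8 * n)]       # 50% of the remainder = 20% verification set

    model = keras.Sequential([
        layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),     # probability that the AU is present
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(images[train_idx], labels[train_idx], epochs=10, verbose=0)

    _, accuracy = model.evaluate(images[val_idx], labels[val_idx], verbose=0)
    if accuracy < preset_accuracy:
        raise RuntimeError("accuracy below preset; enlarge the sample set and retrain")
    return model
```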
  • For example, if the probabilities of identifying each AU (for example, 39 AUs) from the real-time facial image are P1, P2, P3, ..., P39, respectively, the probabilities of all the AUs are composed into a feature vector V1 = (P1, P2, P3, ..., P39) of the real-time facial image.
  • The feature vector V1 is then input into a predetermined emotion classifier to identify each emotion that may exist in the real-time facial image, as well as the probability of each emotion (i.e., its likelihood of being present).
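A small hypothetical helper showing how the AU probabilities could be composed into V1; `au_models` stands for the list of trained per-AU classifiers sketched above and is not a name used by the patent.

```python
import numpy as np

def compose_feature_vector(face_image, au_models):
    """Compose the AU probabilities P1..Pn into the feature vector V1."""
    probs = [float(m.predict(face_image[np.newaxis], verbose=0)[0, 0]) for m in au_models]
    return np.array(probs).reshape(1, -1)   # e.g. shape (1, 39) for 39 AUs
```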
  • the training steps of the predetermined emotion classifier include:
  • The sample images are input into the AU classifier to obtain the probability of identifying each AU from each sample image, and the probabilities of the AUs identified from each sample image are combined into one feature vector. At the same time, according to the emotion presented by each sample image, the sample images are classified and labeled, that is, an emotion label (for example, "happy") is assigned, so that a second sample set including the feature vectors and the emotion labels is obtained.
  • From the second sample set, a training set and a verification set are extracted, and the training set is trained using a naive Bayes algorithm to obtain the emotion classifier. To ensure the accuracy of the emotion classifier, its accuracy needs to be verified: the verification set is used to verify the accuracy of the trained emotion classifier, and if the accuracy is greater than or equal to the preset accuracy (e.g., 90%), the training ends; if the accuracy is less than the preset accuracy, the number of sample images in the training sample set is increased and the above training steps are re-executed.
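A minimal sketch of this training step, assuming scikit-learn's GaussianNB (the Gaussian naive Bayes variant discussed just below); the 60%/20% split and the API are assumptions, since the patent only names the naive Bayes algorithm and the preset accuracy check.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

def train_emotion_classifier(feature_vectors, emotion_labels, preset_accuracy=0.9):
    """Fit a Gaussian naive Bayes emotion classifier on the AU-probability feature
    vectors (the "second sample set") and verify it against the preset accuracy."""
    X = np.asarray(feature_vectors)    # shape (N, number_of_AUs), values in [0, 1]
    y = np.asarray(emotion_labels)     # e.g. "happy", "surprised", "sad"

    # 60% training set; half of the remaining 40% (i.e. 20%) as verification set
    X_train, X_rest, y_train, y_rest = train_test_split(X, y, train_size=0.6, random_state=0)
    X_val, _, y_val, _ = train_test_split(X_rest, y_rest, train_size=0.5, random_state=0)

    clf = GaussianNB()
    clf.fit(X_train, y_train)

    accuracy = clf.score(X_val, y_val)
    if accuracy < preset_accuracy:
        raise RuntimeError("accuracy below preset; enlarge the sample set and retrain")
    return clf
```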
  • GaussianNB is naive Bayes with a Gaussian (normal) prior distribution on the features;
  • MultinomialNB is naive Bayes with a multinomial prior distribution;
  • BernoulliNB is naive Bayes with a Bernoulli prior distribution.
  • The classification scenarios to which these three apply are different: MultinomialNB and BernoulliNB are used for discrete-valued models, whereas GaussianNB is suited to continuous-valued features.
  • Because the feature vector here consists of continuous AU probabilities, GaussianNB is chosen.
  • GaussianNB assumes that the conditional probability of each feature given a class follows a normal distribution, so there is the following formula:

    $$P(X_j = x_j \mid Y = C_k) = \frac{1}{\sqrt{2\pi\sigma_k^{2}}}\exp\!\left(-\frac{(x_j-\mu_k)^{2}}{2\sigma_k^{2}}\right)$$

  • where C_k is the k-th class of Y, and μ_k and σ_k² are values that need to be estimated from the training set: GaussianNB finds μ_k and σ_k² based on the training set, where μ_k is the mean of all X_j in the samples of class C_k and σ_k² is the variance of all X_j in the samples of class C_k.
  • When the feature vector V1 of the real-time facial image is input into the emotion classifier, several emotions may be recognized from the real-time facial image, and the probability of each recognized emotion (ranging from 0 to 1) differs, for example, happy: 0.6, surprised: 0.3, sad: 0.1; the emotion with the highest probability is taken as the emotion recognized from the real-time facial image, as in the sketch below.
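An illustrative inference step, assuming the GaussianNB classifier and helper sketched above; `predict_proba` and `classes_` are standard scikit-learn attributes, while the surrounding names are hypothetical.

```python
import numpy as np

def recognize_emotion(clf, feature_vector_v1):
    """Return each emotion's probability and the most probable emotion for V1."""
    probabilities = clf.predict_proba(feature_vector_v1)[0]   # values in the range 0-1
    per_emotion = dict(zip(clf.classes_, probabilities))      # e.g. {"happy": 0.6, "surprised": 0.3, "sad": 0.1}
    best = clf.classes_[int(np.argmax(probabilities))]        # emotion with the highest probability
    return per_emotion, best
```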
  • The parameters that need to be preset in the foregoing embodiments, such as the first proportion, the second proportion, and the preset accuracy, may be adjusted according to user requirements.
  • The electronic device 1 proposed in the above embodiment extracts the real-time facial image from the real-time image, extracts each AU feature in the real-time facial image through the AU classifier, combines the probabilities of the AU features into a feature vector, inputs the feature vector into the emotion classifier to identify the probability of each emotion present in the real-time facial image, and takes the emotion with the highest probability as the emotion in the real-time image.
  • the character sentiment analysis program 10 can be partitioned into one or more modules, one or more modules being stored in the memory 11 and executed by the processor 12 to complete the application.
  • a module as referred to in this application refers to a series of computer program instructions that are capable of performing a particular function.
  • Referring to FIG. 2, it is a functional block diagram of a preferred embodiment of the character emotion analysis program 10 of FIG. 1.
  • the character emotion analysis program 10 can be divided into: an acquisition module 110, an AU recognition module 120, a feature extraction module 130, and an emotion recognition module 140.
  • The functions or operational steps implemented by the modules 110-140 are similar to those described above and are not detailed again here; by way of example (a minimal skeleton of this module split is sketched after the list):
  • the acquiring module 110 is configured to acquire a real-time image captured by the camera device 13 and extract a real-time facial image from the real-time image by using a face recognition algorithm;
  • the AU identification module 120 is configured to input the real-time facial image into a predetermined AU classifier to obtain a probability of each AU identified from the real-time facial image;
  • The feature extraction module 130 is configured to compose the probabilities of all AUs in the real-time facial image into a feature vector of the real-time facial image;
  • the emotion recognition module 140 is configured to input the feature vector into a predetermined emotion classifier, obtain a probability of identifying each emotion from the real-time face image, and take the emotion with the highest probability as the recognition from the real-time face image Emotions.
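The class below is a hypothetical skeleton of the four-module split listed above; the patent defines the modules functionally and does not prescribe any particular class layout or method names.

```python
import numpy as np


class CharacterEmotionAnalysisProgram:
    """Hypothetical skeleton of the acquisition / AU recognition /
    feature extraction / emotion recognition modules."""

    def __init__(self, camera, face_detector, au_models, emotion_classifier):
        self.camera = camera
        self.face_detector = face_detector            # e.g. the extract_face sketch above
        self.au_models = au_models                    # predetermined AU classifiers
        self.emotion_classifier = emotion_classifier  # predetermined emotion classifier

    def acquisition_module(self):
        """Acquire a real-time image and extract the real-time facial image."""
        frame = self.camera.read()
        return self.face_detector(frame)

    def au_recognition_module(self, face_image):
        """Obtain the probability of each AU identified from the facial image."""
        return [float(m.predict(face_image[np.newaxis], verbose=0)[0, 0])
                for m in self.au_models]

    def feature_extraction_module(self, au_probabilities):
        """Compose all AU probabilities into the feature vector of the facial image."""
        return np.array(au_probabilities).reshape(1, -1)   # single-row feature vector V1

    def emotion_recognition_module(self, feature_vector):
        """Input the feature vector into the emotion classifier and return the
        emotion with the highest probability."""
        probs = self.emotion_classifier.predict_proba(feature_vector)[0]
        return self.emotion_classifier.classes_[int(np.argmax(probs))]
```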
  • the present application also provides a method for character emotion analysis.
  • Referring to FIG. 3, it is a flow chart of a preferred embodiment of the character emotion analysis method of the present application. The method can be performed by an apparatus, and the apparatus can be implemented by software and/or hardware.
  • the character emotion analysis method includes: step S10 - step S40.
  • Step S10 Acquire a real-time image captured by the camera device, and extract a real-time face image from the real-time image by using a face recognition algorithm.
  • When the camera captures a real-time image, it sends the real-time image to the processor.
  • After the processor receives the real-time image, it first creates a grayscale image of the same size: the acquired color image is converted into a grayscale image and a memory space is created for it; the grayscale image histogram is equalized to reduce the amount of grayscale image information and speed up detection; a training library is then loaded to detect the face in the picture and return an object containing face information, from which the position data of the face is obtained and the number of faces is recorded; finally, the face region is obtained and saved, which completes one real-time facial image extraction.
  • In other embodiments, the face recognition algorithm used to extract the real-time facial image from the real-time image may also be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, or the like.
  • In step S20, the real-time facial image is input into a predetermined AU classifier to obtain the probability of each AU identified from the real-time facial image.
  • The real-time facial image extracted from the real-time image using the face recognition algorithm is input into a predetermined AU classifier, which identifies each AU in the real-time facial image and outputs, for each AU, the probability (in the range 0-1) that the real-time facial image contains that AU. The training steps of the predetermined AU classifier include:
  • In the sample preparation phase, a certain number of face images are prepared for each AU: from a large number of face images, the face images containing a given AU are selected as positive sample images of that AU, and a negative sample image is prepared for each AU; the positive and negative sample images of each AU form a first sample set. The image regions corresponding to different AUs may be the same; for example, AU1 and AU2 relate to the region of the face image containing the eyebrows, eyes, and forehead, while AU9 and AU22 relate to the nose and lip regions of the face image. Image regions that do not contain a given AU can be used as negative sample images of that AU.
  • A first proportion (for example, 60%) of sample images is randomly extracted from the positive/negative sample images of each AU as a training set, and a second proportion of the remaining sample images of that AU is extracted as a verification set (for example, 50% of the remainder, i.e. 20% of all sample images of the AU). A convolutional neural network is trained with the training set of each AU to obtain the AU classifier. To ensure the accuracy of the AU classifier, its accuracy needs to be verified: the accuracy of the trained AU classifier is verified with the verification set, and if the accuracy is greater than or equal to a preset accuracy (for example, 90%), the training ends; if the accuracy is less than the preset accuracy, the number of sample images in the sample set is increased and the above training steps are re-executed.
  • In other embodiments, the training step of the predetermined AU classifier further includes: performing preprocessing such as scaling, cropping, flipping, and/or warping on the sample images in the first sample set, and using the preprocessed sample images to train the convolutional neural network.
  • In step S30, the probabilities of all AUs in the real-time facial image are composed into the feature vector of the real-time facial image.
  • For example, if the probabilities of identifying each AU (for example, 39 AUs) from the real-time facial image are P1, P2, P3, ..., P39, respectively, the probabilities of all the AUs are composed into a feature vector V1 = (P1, P2, P3, ..., P39) of the real-time facial image.
  • Step S40 Input the feature vector into a predetermined emotion classifier to obtain a probability of identifying each emotion from the real-time face image, and take the emotion with the highest probability as the emotion recognized from the real-time face image.
  • the feature vector V1 is input to a predetermined emotion classifier to identify each emotion that may exist from the real-time face image, and the probability of each emotion (ie, the possibility of existence).
  • the training steps of the predetermined emotion classifier include:
  • The sample images are input into the AU classifier to obtain the probability of identifying each AU from each sample image, and the probabilities of the AUs identified from each sample image are combined into one feature vector; at the same time, according to the emotion presented by each sample image, the sample images are classified and labeled, that is, an emotion label (for example, "happy") is assigned, so that a second sample set including the feature vectors and the emotion labels is obtained.
  • A first proportion (for example, 60%) of samples is randomly extracted from the second sample set as a training set, and a second proportion of the remaining samples is extracted as a verification set (for example, 50% of the remainder, i.e. 20% of the second sample set). The training set is trained with the naive Bayes algorithm to obtain the emotion classifier. To ensure the accuracy of the emotion classifier, its accuracy needs to be verified: the verification set is used to verify the accuracy of the trained emotion classifier, and if the accuracy is greater than or equal to the preset accuracy (e.g., 90%), the training ends; if the accuracy is less than the preset accuracy, the number of samples in the training sample set is increased and the above training steps are re-executed.
  • GaussianNB is naive Bayes with a Gaussian (normal) prior distribution on the features;
  • MultinomialNB is naive Bayes with a multinomial prior distribution;
  • BernoulliNB is naive Bayes with a Bernoulli prior distribution.
  • The classification scenarios to which these three apply are different: MultinomialNB and BernoulliNB are used for discrete-valued models, whereas GaussianNB is suited to continuous-valued features.
  • Because the feature vector here consists of continuous AU probabilities, GaussianNB is chosen.
  • GaussianNB assumes that the conditional probability of each feature given a class follows a normal distribution, so there is the following formula:

    $$P(X_j = x_j \mid Y = C_k) = \frac{1}{\sqrt{2\pi\sigma_k^{2}}}\exp\!\left(-\frac{(x_j-\mu_k)^{2}}{2\sigma_k^{2}}\right)$$

  • where C_k is the k-th class of Y, and μ_k and σ_k² are values that need to be estimated from the training set: GaussianNB finds μ_k and σ_k² based on the training set, where μ_k is the mean of all X_j in the samples of class C_k and σ_k² is the variance of all X_j in the samples of class C_k.
  • When the feature vector V1 of the real-time facial image is input into the emotion classifier, several emotions may be recognized from the real-time facial image, and the probability of each recognized emotion (ranging from 0 to 1) differs, for example, happy: 0.6, surprised: 0.3, sad: 0.1; the emotion with the highest probability is taken as the emotion recognized from the real-time facial image.
  • The character emotion analysis method proposed in the above embodiment identifies a real-time facial image from a real-time image, extracts each AU feature in the real-time facial image through the AU classifier, combines the probabilities of the AU features into a feature vector, inputs the feature vector into the emotion classifier to recognize the probability of each emotion present in the real-time facial image, and takes the emotion with the highest probability as the emotion in the real-time image.
  • The embodiment of the present application further provides a computer readable storage medium, the computer readable storage medium including a character emotion analysis program; when the character emotion analysis program is executed by the processor, the following operations are implemented:
  • the feature vector is input to a predetermined emotion classifier to obtain a probability of identifying each emotion from the real-time face image, and the emotion with the highest probability is taken as the emotion recognized from the real-time face image.
  • the training step of the predetermined AU classifier comprises:
  • Preparing a first sample set containing a certain number of face sample images, extracting from the face sample images the image region matching each AU as a positive sample image of that AU, and preparing a negative sample image for each AU;
  • the positive/negative sample images of each AU are divided into a training set of a first ratio and a verification set of a second ratio;
  • If the accuracy rate is greater than or equal to the preset accuracy rate, the training ends, or if the accuracy rate is less than the preset accuracy rate, the sample size is increased and the above training steps are re-executed.
  • the training step of the predetermined AU classifier further includes:
  • Preprocessing operations are performed on the sample images in the first sample set, including scaling, cropping, flipping, and/or warping, as sketched below.
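A minimal augmentation sketch for these preprocessing operations, assuming OpenCV; the scale factor, crop margin, and warp offsets are illustrative values, not values taken from the patent.

```python
import cv2
import numpy as np

def preprocess_sample(image):
    """Apply the scaling, cropping, flipping, and warping operations to one
    sample image and return the preprocessed variants."""
    scaled = cv2.resize(image, None, fx=1.1, fy=1.1)               # scaling
    h, w = scaled.shape[:2]
    cropped = scaled[h // 20: h - h // 20, w // 20: w - w // 20]   # cropping the borders
    flipped = cv2.flip(cropped, 1)                                 # horizontal flipping
    ch, cw = flipped.shape[:2]
    src = np.float32([[0, 0], [cw - 1, 0], [0, ch - 1]])           # mild affine warp ("twisting")
    dst = np.float32([[0, 5], [cw - 6, 0], [5, ch - 11]])
    warped = cv2.warpAffine(flipped, cv2.getAffineTransform(src, dst), (cw, ch))
    return [scaled, cropped, flipped, warped]
```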
  • the training step of the predetermined emotion classifier comprises:
  • If the accuracy rate is greater than or equal to the preset accuracy rate, the training ends, or if the accuracy rate is less than the preset accuracy rate, the number of samples in the sample set is increased and the above training steps are re-executed.
  • The naive Bayes algorithm is a naive Bayes algorithm whose prior is a Gaussian distribution.
  • The face recognition algorithm may be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, or a neural network method.
  • The foregoing program may be stored in a storage medium, such as a disk, including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the various embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The present application relates to a character emotion analysis method, an analysis program, an electronic apparatus, and a storage medium, the method comprising: acquiring a real-time image captured by a camera, and using a face recognition algorithm to extract a real-time face image from the real-time image (S10); inputting the real-time face image into a predetermined AU classifier to obtain a probability of each AU recognized from the real-time face image (S20); forming the probabilities of all the AUs in the real-time face image into a feature vector of the real-time face image (S30); and inputting the feature vector into a predetermined emotion classifier to obtain the probability of each emotion recognized from the real-time face image, and taking the emotion with the highest probability as the emotion recognized from the real-time face image (S40). After the method is used to identify the AU features and probabilities in a real-time face image, the emotion of the person in the real-time face image is recognized according to each AU feature and its probability, which improves the efficiency of emotion recognition.
PCT/CN2018/076168 2017-11-15 2018-02-10 Character emotion analysis method, apparatus, and storage medium WO2019095571A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711126632.2 2017-11-15
CN201711126632.2A CN107862292B (zh) 2017-11-15 2017-11-15 Character emotion analysis method, apparatus, and storage medium

Publications (1)

Publication Number Publication Date
WO2019095571A1 true WO2019095571A1 (fr) 2019-05-23

Family

ID=61701889

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/076168 WO2019095571A1 (fr) 2017-11-15 2018-02-10 Character emotion analysis method, apparatus, and storage medium

Country Status (2)

Country Link
CN (1) CN107862292B (fr)
WO (1) WO2019095571A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263673A (zh) * 2019-05-31 2019-09-20 合肥工业大学 面部表情识别方法、装置、计算机设备及存储介质
CN110427802A (zh) * 2019-06-18 2019-11-08 平安科技(深圳)有限公司 Au检测方法、装置、电子设备及存储介质

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110395260B (zh) * 2018-04-20 2021-12-07 比亚迪股份有限公司 车辆、安全驾驶方法和装置
CN108717464A (zh) * 2018-05-31 2018-10-30 中国联合网络通信集团有限公司 照片处理方法、装置及终端设备
CN108810624A (zh) * 2018-06-08 2018-11-13 广州视源电子科技股份有限公司 节目反馈方法和装置、播放设备
CN108875704B (zh) * 2018-07-17 2021-04-02 北京字节跳动网络技术有限公司 用于处理图像的方法和装置
CN109635838B (zh) * 2018-11-12 2023-07-11 平安科技(深圳)有限公司 人脸样本图片标注方法、装置、计算机设备及存储介质
CN109493403A (zh) * 2018-11-13 2019-03-19 北京中科嘉宁科技有限公司 一种基于运动单元表情映射实现人脸动画的方法
CN109635727A (zh) * 2018-12-11 2019-04-16 昆山优尼电能运动科技有限公司 一种人脸表情识别方法及装置
CN109584050A (zh) * 2018-12-14 2019-04-05 深圳壹账通智能科技有限公司 基于微表情识别的用户风险程度分析方法及装置
CN109829996A (zh) * 2018-12-15 2019-05-31 深圳壹账通智能科技有限公司 应用签到方法、装置、计算机装置及存储介质
CN109829363A (zh) * 2018-12-18 2019-05-31 深圳壹账通智能科技有限公司 表情识别方法、装置、计算机设备和存储介质
CN109766765A (zh) * 2018-12-18 2019-05-17 深圳壹账通智能科技有限公司 音频数据推送方法、装置、计算机设备和存储介质
CN109583431A (zh) * 2019-01-02 2019-04-05 上海极链网络科技有限公司 一种人脸情绪识别模型、方法及其电子装置
CN109840513B (zh) * 2019-02-28 2020-12-01 北京科技大学 一种人脸微表情识别方法及识别装置
CN109840512A (zh) * 2019-02-28 2019-06-04 北京科技大学 一种面部动作单元识别方法及识别装置
CN110166836B (zh) * 2019-04-12 2022-08-02 深圳壹账通智能科技有限公司 一种电视节目切换方法、装置、可读存储介质及终端设备
CN110210194A (zh) * 2019-04-18 2019-09-06 深圳壹账通智能科技有限公司 电子合同显示方法、装置、电子设备及存储介质
CN110177205A (zh) * 2019-05-20 2019-08-27 深圳壹账通智能科技有限公司 终端设备、基于微表情的拍照方法及计算机可读存储介质
CN112016368A (zh) * 2019-05-31 2020-12-01 沈阳新松机器人自动化股份有限公司 一种基于面部表情编码系统的表情识别方法、系统及电子设备
CN110399836A (zh) * 2019-07-25 2019-11-01 深圳智慧林网络科技有限公司 用户情绪识别方法、装置以及计算机可读存储介质
CN110598546B (zh) * 2019-08-06 2024-06-28 平安科技(深圳)有限公司 基于图像的目标物生成方法及相关设备
CN110705419A (zh) * 2019-09-24 2020-01-17 新华三大数据技术有限公司 情绪识别方法、预警方法、模型训练方法和相关装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140050408A1 (en) * 2012-08-14 2014-02-20 Samsung Electronics Co., Ltd. Method for on-the-fly learning of facial artifacts for facial emotion recognition
CN104376333A (zh) * 2014-09-25 2015-02-25 电子科技大学 基于随机森林的人脸表情识别方法
CN104680141A (zh) * 2015-02-13 2015-06-03 华中师范大学 基于运动单元分层的人脸表情识别方法及系统
CN105844221A (zh) * 2016-03-18 2016-08-10 常州大学 一种基于Vadaboost筛选特征块的人脸表情识别方法
CN107194347A (zh) * 2017-05-19 2017-09-22 深圳市唯特视科技有限公司 一种基于面部动作编码系统进行微表情检测的方法

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007043712A1 (fr) * 2005-10-14 2007-04-19 Nagasaki University Procédé d’analyse et d’indication d’émotion, programme, support d’enregistrement et système de ces procédés
AU2011318719B2 (en) * 2010-10-21 2015-07-02 Samsung Electronics Co., Ltd. Method and apparatus for recognizing an emotion of an individual based on facial action units
KR102094723B1 (ko) * 2012-07-17 2020-04-14 삼성전자주식회사 견고한 얼굴 표정 인식을 위한 특징 기술자
CN103065122A (zh) * 2012-12-21 2013-04-24 西北工业大学 基于面部动作单元组合特征的人脸表情识别方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140050408A1 (en) * 2012-08-14 2014-02-20 Samsung Electronics Co., Ltd. Method for on-the-fly learning of facial artifacts for facial emotion recognition
CN104376333A (zh) * 2014-09-25 2015-02-25 电子科技大学 基于随机森林的人脸表情识别方法
CN104680141A (zh) * 2015-02-13 2015-06-03 华中师范大学 基于运动单元分层的人脸表情识别方法及系统
CN105844221A (zh) * 2016-03-18 2016-08-10 常州大学 一种基于Vadaboost筛选特征块的人脸表情识别方法
CN107194347A (zh) * 2017-05-19 2017-09-22 深圳市唯特视科技有限公司 一种基于面部动作编码系统进行微表情检测的方法

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263673A (zh) * 2019-05-31 2019-09-20 合肥工业大学 面部表情识别方法、装置、计算机设备及存储介质
CN110263673B (zh) * 2019-05-31 2022-10-14 合肥工业大学 面部表情识别方法、装置、计算机设备及存储介质
CN110427802A (zh) * 2019-06-18 2019-11-08 平安科技(深圳)有限公司 Au检测方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
CN107862292A (zh) 2018-03-30
CN107862292B (zh) 2019-04-12

Similar Documents

Publication Publication Date Title
WO2019095571A1 (fr) Procédé d'analyse d'émotion de figure humaine, appareil, et support d'informations
WO2019033525A1 (fr) Procédé de reconnaissance de caractéristiques d'unité d'action, dispositif et support d'informations
WO2019109526A1 (fr) Procédé et dispositif de reconnaissance de l'âge de l'image d'un visage, et support de stockage
WO2019033573A1 (fr) Procédé d'identification d'émotion faciale, appareil et support d'informations
WO2019033571A1 (fr) Procédé de détection de point de caractéristique faciale, appareil et support de stockage
KR102174595B1 (ko) 비제약형 매체에 있어서 얼굴을 식별하는 시스템 및 방법
EP2630635B1 (fr) Procédé et appareil destinés à reconnaître une émotion d'un individu sur la base d'unités d'actions faciales
WO2017088432A1 (fr) Procédé et dispositif de reconnaissance d'image
WO2019033572A1 (fr) Procédé de détection de situation de visage bloqué, dispositif et support d'informations
WO2019061658A1 (fr) Procédé et dispositif de localisation de lunettes, et support d'informations
WO2021051547A1 (fr) Procédé et système de détection de comportement violent
Anwar et al. Learned features are better for ethnicity classification
EP3685288B1 (fr) Appareil, procédé et produit de programme informatique pour la reconnaissance biométrique
CN113255557B (zh) 一种基于深度学习的视频人群情绪分析方法及系统
Khatri et al. Facial expression recognition: A survey
WO2021127916A1 (fr) Procédé de reconnaissance d'émotion faciale, dispositif intelligent et support de stockage lisible par ordinateur
Gudipati et al. Efficient facial expression recognition using adaboost and haar cascade classifiers
Chowdhury et al. Lip as biometric and beyond: a survey
Singh et al. Feature based method for human facial emotion detection using optical flow based analysis
Nahar et al. Twins and similar faces recognition using geometric and photometric features with transfer learning
Aslam et al. Gender classification based on isolated facial features and foggy faces using jointly trained deep convolutional neural network
Ali et al. Facial action units detection under pose variations using deep regions learning
US20220335752A1 (en) Emotion recognition and notification system
WO2024000233A1 (fr) Procédé et appareil de reconnaissance d'expression faciale, et dispositif et support de stockage lisible
Praneel et al. Malayalam Sign Language Character Recognition System

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18878905

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC DATED 14-08-2020

122 Ep: pct application non-entry in european phase

Ref document number: 18878905

Country of ref document: EP

Kind code of ref document: A1