WO2019033525A1 - AU feature recognition method and device, and storage medium - Google Patents

AU feature recognition method and device, and storage medium

Info

Publication number
WO2019033525A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature
image
facial
real-time
Prior art date
Application number
PCT/CN2017/104819
Other languages
English (en)
French (fr)
Inventor
陈林
张国辉
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Priority to US 16/337,931 (granted as US 10,445,562 B2)
Publication of WO2019033525A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 - Classification, e.g. identification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 - Facial expression recognition

Definitions

  • the present invention relates to the field of computer vision processing technologies, and in particular, to an AU feature recognition method and apparatus, and a computer readable storage medium.
  • Facial emotion recognition is an important part of human-computer interaction and affective computing research, involving psychology, sociology, anthropology, the life sciences, cognitive science, computer science and other research fields, and it is of great significance for making human-computer interaction more intelligent and natural.
  • FACS, the Facial Action Coding System, was created in 1976 after years of research. According to the anatomical characteristics of the face, facial activity can be divided into a number of action units (AUs) that are both mutually independent and interrelated; the motion characteristics of these action units and the main regions they control can reflect facial expressions.
  • at present, judging facial expressions by recognizing AU features in facial images is a common approach with relatively high accuracy.
  • in industry, however, AU features are mostly recognized by collecting a large number of AU samples, sorting the samples into several categories, and training an AU feature recognition model with a convolutional neural network; the accuracy of this method is not high.
  • the invention provides an AU feature recognition method, device and computer readable storage medium, the main purpose of which is to recognize the AU features in the feature regions of a real-time facial image with different AU classifiers, thereby effectively improving the efficiency of AU feature recognition.
  • the present invention provides an electronic device comprising a memory, a processor and a camera device, wherein the memory stores an AU feature recognition program which, when executed by the processor, implements the following steps:
  • Real-time image capture step: acquiring a real-time image captured by the camera device, and extracting a real-time facial image from the real-time image with a face recognition algorithm;
  • Facial feature point recognition step: inputting the real-time facial image into a pre-trained facial average model, and using the facial average model to identify t facial feature points in the real-time facial image;
  • Local feature extraction step: determining, according to the positions of the t facial feature points, the feature region in the real-time facial image that matches each AU, extracting local features from the feature regions, and generating a plurality of feature vectors; and
  • AU feature prediction step: inputting the feature vectors into the pre-trained AU classifiers matched to the feature regions, respectively, to obtain prediction results indicating that the corresponding AU features are recognized from the feature regions.
  • the facial average model is obtained by training a facial feature recognition model, where the facial feature recognition model is the ERT algorithm, expressed by the formula $S^{(t+1)} = S^{(t)} + \tau_t(I, S^{(t)})$, in which t denotes the cascade index, $\tau_t(\cdot,\cdot)$ denotes the regressor of the current stage, and $S^{(t)}$ is the shape estimate of the current model; each regressor $\tau_t(\cdot,\cdot)$ predicts an increment $\tau_t(I, S^{(t)})$ from the input current image I and $S^{(t)}$, and this increment is added to the current shape estimate to refine the model.
  • during model training, a subset of the feature points of all sample images is taken to train the first regression tree; the residual between the predictions of the first regression tree and the true values of those feature points is used to train the second tree, and so on, until the predictions of the Nth tree are close to the true values (residual close to 0); all regression trees of the ERT algorithm are thus obtained, and the facial average model is derived from these regression trees.
  • preferably, when the AU feature recognition program is executed by the processor, the following steps are further implemented:
  • Determining step: determining whether the probability of each AU feature in the prediction result is greater than a preset threshold.
  • the determining step further includes:
  • Prompting step: when it is determined that the prediction result contains an AU feature with a probability greater than the preset threshold, prompting that the AU feature has been recognized from the real-time facial image.
  • the training step of the predetermined AU classifier comprises:
  • Sample preparation step: collecting face sample images, cropping from the face sample images the image region matched to each AU as the positive sample image of that AU, and preparing negative sample images for each AU;
  • Local feature extraction step: extracting local features of the positive and negative sample images of each AU and generating corresponding feature vectors; and
  • Model training step: training a support vector classifier with the local features of the positive/negative sample images of each AU to obtain the corresponding AU classifier.
  • the face recognition algorithm comprises: geometric-feature-based methods, local feature analysis methods, eigenface methods, elastic-model-based methods and neural network methods.
  • the present invention further provides an AU feature recognition method, the method comprising:
  • Real-time image capture step: acquiring a real-time image captured by the camera device, and extracting a real-time facial image from the real-time image with a face recognition algorithm;
  • Facial feature point recognition step: inputting the real-time facial image into a pre-trained facial average model, and using the facial average model to identify t facial feature points in the real-time facial image;
  • Local feature extraction step: determining, according to the positions of the t facial feature points, the feature region in the real-time facial image that matches each AU, extracting local features from the feature regions, and generating a plurality of feature vectors; and
  • AU feature prediction step: inputting the feature vectors into the pre-trained AU classifiers matched to the feature regions, respectively, to obtain prediction results indicating that the corresponding AU features are recognized from the feature regions.
  • the facial average model is obtained by training a facial feature recognition model, where the facial feature recognition model is the ERT algorithm, expressed by the formula $S^{(t+1)} = S^{(t)} + \tau_t(I, S^{(t)})$, in which t denotes the cascade index, $\tau_t(\cdot,\cdot)$ denotes the regressor of the current stage, and $S^{(t)}$ is the shape estimate of the current model; each regressor $\tau_t(\cdot,\cdot)$ predicts an increment $\tau_t(I, S^{(t)})$ from the input current image I and $S^{(t)}$, and this increment is added to the current shape estimate to refine the model.
  • during model training, a subset of the feature points of all sample images is taken to train the first regression tree; the residual between the predictions of the first regression tree and the true values of those feature points is used to train the second tree, and so on, until the predictions of the Nth tree are close to the true values (residual close to 0); all regression trees of the ERT algorithm are thus obtained, and the facial average model is derived from these regression trees.
  • the method further comprises:
  • Determining step: determining whether the probability of each AU feature in the prediction result is greater than a preset threshold.
  • the determining step further includes:
  • Prompting step: when it is determined that the prediction result contains an AU feature with a probability greater than the preset threshold, prompting that the AU feature has been recognized from the real-time facial image.
  • the training step of the predetermined AU classifier comprises:
  • Sample preparation step: collecting face sample images, cropping from the face sample images the image region matched to each AU as the positive sample image of that AU, and preparing negative sample images for each AU;
  • Local feature extraction step: extracting local features of the positive and negative sample images of each AU and generating corresponding feature vectors; and
  • Model training step: training a support vector classifier with the local features of the positive/negative sample images of each AU to obtain the corresponding AU classifier.
  • the face recognition algorithm comprises: geometric-feature-based methods, local feature analysis methods, eigenface methods, elastic-model-based methods and neural network methods.
  • the present invention further provides a computer readable storage medium having an AU feature recognition program stored therein; when the AU feature recognition program is executed by a processor, any step of the AU feature recognition method described above is implemented.
  • the AU feature recognition method, electronic device and computer readable storage medium provided by the present invention crop from the real-time facial image the feature region corresponding to each AU feature, input each such feature region into the corresponding AU classifier, and obtain a prediction result for each AU feature, which improves the accuracy of AU feature recognition.
  • FIG. 1 is a schematic view of a preferred embodiment of an electronic device of the present invention
  • FIG. 2 is a block diagram of the AU feature recognition program of FIG. 1;
  • FIG. 3 is a flowchart of a first embodiment of an AU feature identification method according to the present invention.
  • FIG. 4 is a flow chart of a second embodiment of the AU feature recognition method of the present invention.
  • the present invention provides an electronic device 1.
  • FIG. 1 a schematic diagram of a preferred embodiment of an electronic device 1 of the present invention is shown.
  • the electronic device 1 may be a terminal device having a computing function, such as a server, a smart phone, a tablet computer, a portable computer, or a desktop computer.
  • the electronic device 1 includes a processor 12, a memory 11, an imaging device 13, a network interface 14, and a communication bus 15.
  • the camera device 13 is installed in a particular place, such as an office or a monitored area, captures real-time images of targets entering that place, and transmits the captured real-time images to the processor 12 over the network.
  • Network interface 14 may optionally include a standard wired interface, a wireless interface (such as a WI-FI interface).
  • Communication bus 15 is used to implement connection communication between these components.
  • the memory 11 includes at least one type of readable storage medium.
  • the at least one type of readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, a card type memory 11, or the like.
  • the readable storage medium may be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1.
  • the readable storage medium may also be an external memory 11 of the electronic device 1, such as a plug-in hard disk equipped on the electronic device 1, a smart memory card (SMC). , Secure Digital (SD) card, Flash Card, etc.
  • the readable storage medium of the memory 11 is generally used to store an action unit (AU) feature recognition program 10 installed on the electronic device 1, a face image sample library, and pre-trained. Face average model, AU classifier, etc.
  • the memory 11 can also be used to temporarily store data that has been output or is about to be output.
  • the processor 12 in some embodiments, may be a Central Processing Unit (CPU), microprocessor or other data processing chip for running program code or processing data stored in the memory 11, such as performing AU feature recognition. Program 10 and so on.
  • Figure 1 shows only the electronic device 1 with components 11-15, but it should be understood that not all illustrated components may be implemented, and more or fewer components may be implemented instead.
  • the electronic device 1 may further include a user interface
  • the user interface may include an input unit such as a keyboard, a voice input device such as a microphone or another device with voice recognition capability, and a voice output device such as a speaker or headphones.
  • the user interface may also include a standard wired interface and a wireless interface.
  • the electronic device 1 may further include a display, which may also be appropriately referred to as a display screen or a display unit.
  • it may be an LED display, a liquid crystal display, a touch liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like.
  • the display is used to display information processed in the electronic device 1 and a user interface for displaying visualizations.
  • the electronic device 1 further comprises a touch sensor.
  • the area provided by the touch sensor for the user to perform a touch operation is referred to as a touch area.
  • the touch sensor described herein may be a resistive touch sensor, a capacitive touch sensor, or the like.
  • the touch sensor includes not only a contact type touch sensor but also a proximity type touch sensor or the like.
  • the touch sensor may be a single sensor or a plurality of sensors arranged, for example, in an array.
  • the area of the display of the electronic device 1 may be the same as or different from the area of the touch sensor.
  • the display may be stacked with the touch sensor to form a touch display screen, and the device detects user-triggered touch operations through the touch display screen.
  • the electronic device 1 may further include an RF (Radio Frequency) circuit, a sensor, an audio circuit, and the like, and details are not described herein.
  • an operating system and the AU feature recognition program 10 may be included in the memory 11, which serves as a computer storage medium; when the processor 12 executes the AU feature recognition program 10 stored in the memory 11, the following steps are implemented:
  • a real-time image captured by the camera device 13 is acquired, a real-time facial image is extracted from it with a face recognition algorithm, and the feature region corresponding to each AU feature is cropped from the real-time facial image; the processor 12 calls the pre-trained AU classifiers from the memory 11 and inputs each feature region into the corresponding AU classifier, obtaining a prediction result for each AU feature, which facilitates subsequent judgment of the emotion in the current facial image.
  • AU feature recognition program 10 may also be partitioned into one or more modules, one or more modules being stored in memory 11 and executed by processor 12 to complete the present invention.
  • a module as referred to in the present invention refers to a series of computer program instructions that are capable of performing a particular function.
  • FIG. 2 it is a block diagram of the AU feature recognition program 10 of FIG.
  • the AU feature recognition program 10 can be divided into: an acquisition module 110, a recognition module 120, a feature extraction module 130, a prediction module 140, a determining module 150, and a prompting module 160.
  • the obtaining module 110 is configured to acquire a real-time image captured by the camera device 13 and extract a real-time face image from the real-time image by using a face recognition algorithm.
  • when the camera device 13 captures a real-time image, it sends the image to the processor 12; after the processor 12 receives the real-time image, the acquisition module 110 first obtains the size of the picture and creates a grayscale image of the same size; converts the acquired color image into a grayscale image and allocates a memory space; equalizes the histogram of the grayscale image to reduce the amount of grayscale information and speed up detection; then loads the training library, detects the face in the picture and returns an object containing the face information; obtains the data on where the face is located and records the number of faces; and finally obtains and saves the face region, completing one round of real-time facial image extraction.
  • the face recognition algorithm for extracting the real-time facial image from the real-time image may also be: a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, or the like.
  • the identification module 120 is configured to input the real-time facial image into a pre-trained facial average model, and use the facial average model to identify t facial feature points from the real-time facial image.
  • after the real-time facial image has been extracted, the recognition module 120 calls the trained facial average model of facial feature points from the memory 11, aligns the real-time facial image with the facial average model, and then uses a feature extraction algorithm to search the real-time facial image for 76 facial feature points that match the 76 facial feature points of the facial average model.
  • the facial average model of the facial feature points is pre-built and trained, and the specific embodiment will be described in the following AU feature recognition method.
  • the feature extraction algorithm is a SIFT (scale-invariant feature transform) algorithm.
  • the SIFT algorithm extracts a local feature for each facial feature point of the facial average model, selects an eye feature point or a lip feature point as a reference feature point, and searches the real-time facial image for feature points whose local features are the same as or similar to that of the reference feature point, for example by checking whether the difference between the local features of the two points falls within a preset range; if so, the point is taken as a facial feature point. This is repeated until all facial feature points are found in the real-time facial image.
  • the feature extraction algorithm may also be the SURF (Speeded Up Robust Features) algorithm, the LBP (Local Binary Patterns) algorithm, the HOG (Histogram of Oriented Gradients) algorithm, or the like.
  • the feature extraction module 130 is configured to determine, according to the location of the t facial feature points, a feature region that matches each AU in the real-time facial image, extract local features from the feature region, and generate a plurality of feature vectors.
  • according to the Facial Action Coding System (FACS) summarized by Paul Ekman, humans have a total of 39 main facial action units.
  • each AU encodes the contraction of a small group of facial muscles; for example, AU1 raises the inner brow, AU2 raises the outer brow, AU9 wrinkles the nose, and AU22 tightens the lips and funnels them outward.
  • for AU1 and AU2, the matching feature region, namely the eyebrows, must be determined: based on the 76 facial feature points identified by the recognition module 120 from the real-time facial image, the feature extraction module 130 determines the forehead, eyebrow and eye regions of the real-time facial image as the feature region matching AU1 and AU2, extracts the HOG features of the inner and outer brow corners from that region, and forms the feature vectors V1 and V2 of the AU1 and AU2 feature regions, respectively. For AU9 and AU22, the matching feature region, namely the nose and lips, must be determined: the feature extraction module 130 likewise determines the nose and lip regions of the real-time facial image as the feature region matching AU9 and AU22, extracts HOG features from the nose and lip regions, and forms the feature vectors V9 and V22 of the AU9 and AU22 feature regions, respectively.
  • the prediction module 140 is configured to input the plurality of feature vectors into the AU classifier that is pre-trained and matched with the feature region, to obtain a prediction result that identifies the corresponding AU feature from the feature region.
  • there are 39 pre-trained AU classifiers, one for each AU. The prediction module 140 inputs the feature vectors V1, V2, V9 and V22 into the AU classifiers of AU1, AU2, AU9 and AU22, respectively, and the classifiers output the probabilities of recognizing AU1, AU2, AU9 and AU22 from the corresponding feature regions.
  • the determining module 150 is configured to determine whether the probability, in the prediction result, of recognizing the corresponding AU feature from a feature region is greater than a preset threshold. Suppose the probabilities with which the AU classifiers recognize AU1, AU2, AU9 and AU22 from the current real-time facial image are 0.45, 0.51, 0.60 and 0.65, respectively, and the preset threshold is 0.50; the determining module 150 compares the probability of recognizing each AU feature from the real-time facial image with the preset threshold (0.50).
  • the prompting module 160 is configured to prompt, if the prediction result contains an AU feature whose probability of being recognized from its feature region is greater than the preset threshold, that the AU feature has been recognized from the real-time facial image.
  • the probability of recognizing AU1 from the current real-time facial image is below the preset threshold, while the probabilities of recognizing AU2, AU9 and AU22 are above it, so the prompting module 160 prompts that AU2, AU9 and AU22 have been recognized from the current real-time facial image.
  • the electronic device 1 proposed in this embodiment extracts feature regions matching each AU from real-time images, and respectively identifies corresponding AU features from the feature regions, thereby improving the accuracy of AU feature recognition.
  • the present invention also provides an AU feature recognition method.
  • FIG. 3 it is a flowchart of the first embodiment of the AU feature identification method of the present invention. The method can be performed by a device that can be implemented by software and/or hardware.
  • the AU feature recognition method includes: Step S10 - Step S40.
  • Step S10: acquire a real-time image captured by the camera device, and extract a real-time facial image from the real-time image with a face recognition algorithm.
  • the camera 13 captures a real-time image
  • the camera 13 transmits the real-time image to the processor 12.
  • after the processor 12 receives the real-time image, the size of the picture is first obtained and a grayscale image of the same size is created; the face is then detected and the facial region saved, as described above for the acquisition module 110.
  • the face recognition algorithm for extracting the real-time facial image from the real-time image may also be: a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, or the like.
  • Step S20: input the real-time facial image into a pre-trained facial average model, and use the facial average model to identify t facial feature points in the real-time facial image.
  • the facial average model is obtained by the following method:
  • a first sample library with n face images is created, and 76 feature points are manually marked at the positions of the eyes, eyebrows, nose, mouth, and facial contours in each face image.
  • the 76 feature points in each face image constitute a shape feature vector S, and n shape feature vectors S of the face are obtained.
  • the face feature recognition model is trained by using the t facial feature points to obtain a face average model.
  • the facial feature recognition model is the Ensemble of Regression Trees (ERT) algorithm.
  • t denotes the cascade index and $\tau_t(\cdot,\cdot)$ denotes the regressor of the current stage. Each regressor is composed of many regression trees, and the purpose of training is to obtain these regression trees. $S^{(t)}$ is the shape estimate of the current model; each regressor $\tau_t(\cdot,\cdot)$ predicts an increment $\tau_t(I, S^{(t)})$ from the input image I and $S^{(t)}$, and this increment is added to the current shape estimate to refine the model: $S^{(t+1)} = S^{(t)} + \tau_t(I, S^{(t)})$. Each stage of regression makes its prediction from the feature points.
  • the training data set is (I1, S1), ..., (In, Sn), where I is an input sample image and S is the shape feature vector formed by the feature points of that sample image.
  • during model training, the number of face images in the first sample library is n, and each sample image is assumed to have 76 feature points, so the feature vector of each sample is formed from its 76 feature point coordinates. A subset of the feature points of all sample images is taken (for example, 50 of the 76 feature points of each sample picture, chosen at random) to train the first regression tree; the residual between the predictions of the first regression tree and the true values of those feature points (the weighted mean of the 50 feature points taken from each sample picture) is used to train the second tree, and so on, until the predictions of the Nth tree are close to the true values of those feature points (residual close to 0). All regression trees of the ERT algorithm are thus obtained, the facial average model is derived from these regression trees, and the model file and sample library are saved to the memory.
  • the facial average model of the trained facial feature points is called from the memory, and the real-time facial image is aligned with the facial average model, and then the feature extraction algorithm is used in the real-time facial image. 76 facial feature points matching the 76 facial feature points of the facial average model are searched.
  • the feature extraction algorithm may be a SIFT algorithm, a SURF algorithm, an LBP algorithm, an HOG algorithm, or the like.
  • Step S30: determine, according to the positions of the t facial feature points, the feature region in the real-time facial image that matches each AU, extract local features from the feature regions, and generate a plurality of feature vectors.
  • for example, AU1 raises the inner brow, AU2 raises the outer brow, AU9 wrinkles the nose, and AU22 tightens the lips and funnels them outward.
  • for AU1 and AU2, the matching feature region, namely the eyebrows, must be determined: based on the 76 facial feature points identified from the real-time facial image, the forehead, eyebrow and eye regions of the real-time facial image are taken as the feature region matching AU1 and AU2, the HOG features of the inner and outer brow corners are extracted from that region, and the feature vectors V1 and V2 of the AU1 and AU2 feature regions are formed. For AU9 and AU22, the matching feature region, namely the nose and lips, must be determined: the nose and lip regions of the real-time facial image are taken as the feature region matching AU9 and AU22, HOG features are extracted from the nose and lip regions, and the feature vectors V9 and V22 of the AU9 and AU22 feature regions are formed.
  • Step S40: input the plurality of feature vectors into the pre-trained AU classifiers matched to the feature regions, respectively, to obtain prediction results indicating that the corresponding AU features are recognized from the feature regions.
  • the number of the pre-trained AU classifiers is 39, corresponding to AU1, AU2, AU3, ..., AU39, respectively, obtained by:
  • in the first sample library described above, the image region matching each AU (that is, the region of the face image containing that AU) is cropped from each face sample image as a positive sample image of that AU, and negative sample images are prepared for each AU, yielding positive and negative sample images for every AU.
  • the image areas corresponding to different AUs may be the same, for example, AU1, AU2, AU4 all relate to the area including the eyebrows, eyes and forehead in the face image, and AU9, AU22 relate to the nose and lip areas in the face image.
  • the area of the image that does not contain the AU can be used as the negative sample image of the AU.
  • the AU positive sample image and the negative sample image are normalized to the same size.
  • local features, such as HOG features, are extracted from each AU's positive and negative sample images and saved as feature vectors.
  • the local features of each AU's positive/negative sample images are used to train a Support Vector Machine (SVM) classifier, yielding a classifier for each AU.
  • the feature vectors V1, V2, V9 and V22 are input into the AU classifiers of AU1, AU2, AU9 and AU22, respectively, and the classifiers output the probabilities of recognizing AU1, AU2, AU9 and AU22 from the corresponding feature regions.
  • the AU feature recognition method proposed in this embodiment crops from the real-time image the feature region matching each AU and uses the corresponding AU classifier to determine the probability of recognizing that AU feature from the region.
  • the AU features in the feature regions in the real-time facial images are identified by different AU classifiers, which effectively improves the efficiency of AU feature recognition.
  • FIG. 4 it is a flowchart of a second embodiment of the AU feature recognition method of the present invention.
  • the method can be performed by a device that can be implemented by software and/or hardware.
  • the AU feature recognition method includes: Step S10 - Step S70.
  • the steps S10 to S40 are substantially the same as those in the first embodiment, and are not described herein again.
  • Step S50: determine whether the probability of each AU feature in the prediction result is greater than a preset threshold.
  • suppose the probabilities with which the AU classifiers recognize AU1, AU2, AU9 and AU22 from the current real-time facial image are 0.45, 0.51, 0.60 and 0.65, respectively, and the preset threshold is 0.50; the probability of recognizing each AU feature is compared with the preset threshold.
  • Step S60: when it is determined that the prediction result contains an AU feature with a probability greater than the preset threshold, prompt that the AU feature has been recognized from the real-time facial image.
  • the probability of recognizing AU1 from the current real-time facial image is below the preset threshold, while the probabilities of recognizing AU2, AU9 and AU22 are above it, so it is determined that AU2, AU9 and AU22 have been recognized from the current real-time facial image and AU1 has not.
  • compared with the first embodiment, the AU feature recognition method proposed in this embodiment crops from the real-time image the feature region matching each AU, uses the corresponding AU classifier to determine the probability of recognizing the AU feature from that region, recognizes the AU features in the feature regions of the real-time facial image with different AU classifiers, and sets a threshold to filter the probabilities with which the AU classifiers recognize the corresponding AUs, which effectively improves the accuracy of AU feature recognition.
  • an embodiment of the present invention further provides a computer readable storage medium, where the computer readable storage medium includes an AU feature recognition program, and when the AU feature recognition program is executed by the processor, the following operations are implemented:
  • Real-time image capture step: acquiring a real-time image captured by the camera device, and extracting a real-time facial image from the real-time image with a face recognition algorithm;
  • Facial feature point recognition step: inputting the real-time facial image into a pre-trained facial average model, and using the facial average model to identify t facial feature points in the real-time facial image;
  • Local feature extraction step: determining, according to the positions of the t facial feature points, the feature region in the real-time facial image that matches each AU, extracting local features from the feature regions, and generating a plurality of feature vectors; and
  • AU feature prediction step: inputting the feature vectors into the pre-trained AU classifiers matched to the feature regions, respectively, to obtain prediction results indicating that the corresponding AU features are recognized from the feature regions.
  • Determining step: determining whether the probability of each AU feature in the prediction result is greater than a preset threshold.
  • the determining step further includes:
  • Prompting step: when it is determined that the prediction result contains an AU feature with a probability greater than the preset threshold, prompting that the AU feature has been recognized from the real-time facial image.
  • the specific implementation manner of the computer readable storage medium of the present invention is substantially the same as the specific implementation manner of the AU feature identification method described above, and details are not described herein again.

Abstract

An AU feature recognition method, an electronic device and a computer readable storage medium. The method includes: acquiring a real-time image captured by a camera device and extracting a real-time facial image from the real-time image with a face recognition algorithm (S10); inputting the real-time facial image into a pre-trained facial average model and using the facial average model to identify t facial feature points in the real-time facial image (S20); determining, according to the positions of the t facial feature points, the feature region in the real-time facial image that matches each AU, extracting local features from the feature regions and generating a plurality of feature vectors (S30); and inputting the feature vectors into pre-trained AU classifiers matched to the feature regions, respectively, to obtain prediction results indicating that the corresponding AU features are recognized from the feature regions (S40). The method recognizes the AU features in the feature regions of the real-time facial image with different AU classifiers, which effectively improves the efficiency of AU feature recognition.

Description

AU feature recognition method and device, and storage medium
Priority Claim
This application claims, under the Paris Convention, priority to the Chinese patent application No. CN201710709113.2, filed on August 17, 2017 and entitled "AU feature recognition method and device, and storage medium", the entire content of which is incorporated herein by reference.
Technical Field
The present invention relates to the field of computer vision processing technologies, and in particular to an AU feature recognition method and device, and a computer readable storage medium.
Background
Facial emotion recognition is an important part of human-computer interaction and affective computing research, involving psychology, sociology, anthropology, the life sciences, cognitive science, computer science and other research fields, and it is of great significance for making human-computer interaction more intelligent and natural.
The internationally renowned psychologist Paul Ekman and his research partner W. V. Friesen carried out in-depth studies and, through observation and biofeedback, mapped the correspondence between different facial muscle movements and different expressions. FACS, the Facial Action Coding System, is the system they created in 1976 after years of research. According to the anatomical characteristics of the human face, the face can be divided into a number of action units (AUs) that are both mutually independent and interrelated; the motion characteristics of these action units and the main regions they control can reflect facial expressions.
At present, judging facial expressions by recognizing AU features in facial images is a common approach with relatively high accuracy. In industry, however, AU features are mostly recognized by collecting a large number of AU samples, sorting the samples into several categories, and training an AU feature recognition model with a convolutional neural network; the accuracy of this method is not high.
Summary of the Invention
The present invention provides an AU feature recognition method, device and computer readable storage medium, the main purpose of which is to recognize the AU features in the feature regions of a real-time facial image with different AU classifiers, thereby effectively improving the efficiency of AU feature recognition.
To achieve the above object, the present invention provides an electronic device comprising a memory, a processor and a camera device, wherein the memory stores an AU feature recognition program which, when executed by the processor, implements the following steps:
a real-time image capture step: acquiring a real-time image captured by the camera device, and extracting a real-time facial image from the real-time image with a face recognition algorithm;
a facial feature point recognition step: inputting the real-time facial image into a pre-trained facial average model, and using the facial average model to identify t facial feature points in the real-time facial image;
a local feature extraction step: determining, according to the positions of the t facial feature points, the feature region in the real-time facial image that matches each AU, extracting local features from the feature regions, and generating a plurality of feature vectors; and
an AU feature prediction step: inputting the feature vectors into pre-trained AU classifiers matched to the feature regions, respectively, to obtain prediction results indicating that the corresponding AU features are recognized from the feature regions.
Preferably, the facial average model is obtained by training a facial feature recognition model, where the facial feature recognition model is the ERT algorithm, expressed by the following formula:
$S^{(t+1)} = S^{(t)} + \tau_t(I, S^{(t)})$
where t denotes the cascade index, $\tau_t(\cdot,\cdot)$ denotes the regressor of the current stage, and $S^{(t)}$ is the shape estimate of the current model; each regressor $\tau_t(\cdot,\cdot)$ predicts an increment $\tau_t(I, S^{(t)})$ from the input current image I and $S^{(t)}$, and this increment is added to the current shape estimate to refine the model. During model training, a subset of the feature points of all sample pictures is taken to train the first regression tree; the residual between the predictions of the first regression tree and the true values of those feature points is used to train the second tree, and so on, until the predictions of the Nth tree are close to the true values of those feature points (residual close to 0); all regression trees of the ERT algorithm are thus obtained, and the facial average model is derived from these regression trees.
Preferably, when the AU feature recognition program is executed by the processor, the following step is further implemented:
a determining step: determining whether the probability of each AU feature in the prediction result is greater than a preset threshold.
Preferably, the determining step further includes:
a prompting step: when it is determined that the prediction result contains an AU feature with a probability greater than the preset threshold, prompting that the AU feature has been recognized from the real-time facial image.
Preferably, the training of the predetermined AU classifiers includes:
a sample preparation step: collecting face sample images, cropping from the face sample images the image region matched to each AU as the positive sample image of that AU, and preparing negative sample images for each AU;
a local feature extraction step: extracting local features of the positive and negative sample images of each AU and generating corresponding feature vectors; and
a model training step: training a support vector classifier with the local features of the positive/negative sample images of each AU to obtain the corresponding AU classifier.
Preferably, the face recognition algorithm includes: geometric-feature-based methods, local feature analysis methods, eigenface methods, elastic-model-based methods and neural network methods.
In addition, to achieve the above object, the present invention further provides an AU feature recognition method, the method comprising:
a real-time image capture step: acquiring a real-time image captured by the camera device, and extracting a real-time facial image from the real-time image with a face recognition algorithm;
a facial feature point recognition step: inputting the real-time facial image into a pre-trained facial average model, and using the facial average model to identify t facial feature points in the real-time facial image;
a local feature extraction step: determining, according to the positions of the t facial feature points, the feature region in the real-time facial image that matches each AU, extracting local features from the feature regions, and generating a plurality of feature vectors; and
an AU feature prediction step: inputting the feature vectors into pre-trained AU classifiers matched to the feature regions, respectively, to obtain prediction results indicating that the corresponding AU features are recognized from the feature regions.
Preferably, the facial average model is obtained by training a facial feature recognition model, where the facial feature recognition model is the ERT algorithm, expressed by the following formula:
$S^{(t+1)} = S^{(t)} + \tau_t(I, S^{(t)})$
where t denotes the cascade index, $\tau_t(\cdot,\cdot)$ denotes the regressor of the current stage, and $S^{(t)}$ is the shape estimate of the current model; each regressor $\tau_t(\cdot,\cdot)$ predicts an increment $\tau_t(I, S^{(t)})$ from the input current image I and $S^{(t)}$, and this increment is added to the current shape estimate to refine the model. During model training, a subset of the feature points of all sample pictures is taken to train the first regression tree; the residual between the predictions of the first regression tree and the true values of those feature points is used to train the second tree, and so on, until the predictions of the Nth tree are close to the true values of those feature points (residual close to 0); all regression trees of the ERT algorithm are thus obtained, and the facial average model is derived from these regression trees.
Preferably, the method further includes:
a determining step: determining whether the probability of each AU feature in the prediction result is greater than a preset threshold.
Preferably, the determining step further includes:
a prompting step: when it is determined that the prediction result contains an AU feature with a probability greater than the preset threshold, prompting that the AU feature has been recognized from the real-time facial image.
Preferably, the training of the predetermined AU classifiers includes:
a sample preparation step: collecting face sample images, cropping from the face sample images the image region matched to each AU as the positive sample image of that AU, and preparing negative sample images for each AU;
a local feature extraction step: extracting local features of the positive and negative sample images of each AU and generating corresponding feature vectors; and
a model training step: training a support vector classifier with the local features of the positive/negative sample images of each AU to obtain the corresponding AU classifier.
Preferably, the face recognition algorithm includes: geometric-feature-based methods, local feature analysis methods, eigenface methods, elastic-model-based methods and neural network methods.
In addition, to achieve the above object, the present invention further provides a computer readable storage medium storing an AU feature recognition program; when the AU feature recognition program is executed by a processor, any step of the AU feature recognition method described above is implemented.
The AU feature recognition method, electronic device and computer readable storage medium proposed by the present invention crop from the real-time facial image the feature region corresponding to each AU feature and input each such feature region into the corresponding AU classifier to obtain a prediction result for each AU feature, which improves the accuracy of AU feature recognition.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a preferred embodiment of the electronic device of the present invention;
FIG. 2 is a block diagram of the AU feature recognition program of FIG. 1;
FIG. 3 is a flowchart of a first embodiment of the AU feature recognition method of the present invention;
FIG. 4 is a flowchart of a second embodiment of the AU feature recognition method of the present invention.
The realization of the objects, the functional features and the advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
The present invention provides an electronic device 1. Referring to FIG. 1, a schematic diagram of a preferred embodiment of the electronic device 1 of the present invention is shown.
In this embodiment, the electronic device 1 may be a terminal device with computing capability, such as a server, a smartphone, a tablet computer, a portable computer or a desktop computer.
The electronic device 1 includes a processor 12, a memory 11, a camera device 13, a network interface 14 and a communication bus 15. The camera device 13 is installed in a particular place, such as an office or a monitored area, captures real-time images of targets entering that place, and transmits the captured real-time images to the processor 12 over the network. The network interface 14 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface). The communication bus 15 is used to implement connection and communication between these components.
The memory 11 includes at least one type of readable storage medium. The at least one type of readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card or a card-type memory. In some embodiments, the readable storage medium may be an internal storage unit of the electronic device 1, for example a hard disk of the electronic device 1. In other embodiments, the readable storage medium may also be an external memory of the electronic device 1, for example a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash card equipped on the electronic device 1.
In this embodiment, the readable storage medium of the memory 11 is generally used to store the action unit (AU) feature recognition program 10 installed on the electronic device 1, a face image sample library, the pre-trained facial average model, the AU classifiers, and so on. The memory 11 may also be used to temporarily store data that has been output or is to be output.
In some embodiments, the processor 12 may be a Central Processing Unit (CPU), a microprocessor or another data processing chip, and is used to run the program code stored in the memory 11 or to process data, for example to execute the AU feature recognition program 10.
FIG. 1 shows only the electronic device 1 with components 11-15, but it should be understood that not all of the illustrated components are required, and more or fewer components may be implemented instead.
Optionally, the electronic device 1 may further include a user interface. The user interface may include an input unit such as a keyboard, a voice input device such as a microphone or another device with voice recognition capability, and a voice output device such as a speaker or headphones. Optionally, the user interface may also include a standard wired interface and a wireless interface.
Optionally, the electronic device 1 may further include a display, which may also be referred to as a display screen or display unit. In some embodiments it may be an LED display, a liquid crystal display, a touch liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display is used to show the information processed in the electronic device 1 and to display a visual user interface.
Optionally, the electronic device 1 further includes a touch sensor. The area provided by the touch sensor for the user's touch operations is called the touch region. The touch sensor may be a resistive touch sensor, a capacitive touch sensor, or the like; it includes not only contact-type touch sensors but also proximity-type touch sensors, and it may be a single sensor or a plurality of sensors arranged, for example, in an array.
In addition, the area of the display of the electronic device 1 may be the same as or different from the area of the touch sensor. Optionally, the display is stacked with the touch sensor to form a touch display screen, and the device detects user-triggered touch operations through the touch display screen.
Optionally, the electronic device 1 may further include an RF (Radio Frequency) circuit, sensors, an audio circuit and the like, which are not described here.
In the device embodiment shown in FIG. 1, the memory 11, as a computer storage medium, may include an operating system and the AU feature recognition program 10; when the processor 12 executes the AU feature recognition program 10 stored in the memory 11, the following steps are implemented:
a real-time image captured by the camera device 13 is acquired, a real-time facial image is extracted from the real-time image with a face recognition algorithm, and the feature region corresponding to each AU feature is cropped from the real-time facial image; the processor 12 calls the pre-trained AU classifiers from the memory 11 and inputs the feature region corresponding to each AU feature into the corresponding AU classifier, obtaining, for each AU feature, a prediction result indicating whether that AU feature is recognized from the real-time facial image, which facilitates subsequent judgment of the emotion in the current facial image.
In other embodiments, the AU feature recognition program 10 may also be divided into one or more modules, which are stored in the memory 11 and executed by the processor 12 to carry out the present invention. A module referred to in the present invention is a series of computer program instruction segments capable of performing a particular function.
Referring to FIG. 2, a block diagram of the AU feature recognition program 10 of FIG. 1 is shown.
The AU feature recognition program 10 may be divided into: an acquisition module 110, a recognition module 120, a feature extraction module 130, a prediction module 140, a determining module 150 and a prompting module 160.
The acquisition module 110 is configured to acquire a real-time image captured by the camera device 13 and to extract a real-time facial image from the real-time image with a face recognition algorithm. When the camera device 13 captures a real-time image, it sends the image to the processor 12. After the processor 12 receives the real-time image, the acquisition module 110 first obtains the size of the picture and creates a grayscale image of the same size; converts the acquired color image into a grayscale image and allocates a memory space; equalizes the histogram of the grayscale image to reduce the amount of grayscale information and speed up detection; then loads the training library, detects the face in the picture and returns an object containing the face information; obtains the data on where the face is located and records the number of faces; and finally obtains and saves the face region, completing one round of real-time facial image extraction, as sketched below.
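The patent gives no source code; the following is a minimal OpenCV sketch of the grayscale-equalize-detect-crop pipeline just described, offered for illustration only. The Haar cascade file and the helper name extract_face are assumptions, not part of the original disclosure.

```python
import cv2

# Illustrative only: a generic frontal-face cascade stands in for the
# "training library" mentioned in the text.
FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_face(frame):
    """Return the cropped face region of a BGR frame, or None if no face is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # grayscale image of the same size
    gray = cv2.equalizeHist(gray)                    # histogram equalization
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]                            # location data of the first face
    return frame[y:y + h, x:x + w]                   # saved face region
```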
Specifically, the face recognition algorithm for extracting the real-time facial image from the real-time image may also be: a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, or the like.
The recognition module 120 is configured to input the real-time facial image into the pre-trained facial average model and to use the facial average model to identify t facial feature points in the real-time facial image. Assume t = 76, corresponding to the 76 facial feature points of the facial average model. After the acquisition module 110 has extracted the real-time facial image, the recognition module 120 calls the trained facial average model of facial feature points from the memory 11, aligns the real-time facial image with the facial average model, and then uses a feature extraction algorithm to search the real-time facial image for 76 facial feature points that match the 76 facial feature points of the facial average model. The facial average model of facial feature points is built and trained in advance; its specific implementation is described in the AU feature recognition method below.
In this embodiment, the feature extraction algorithm is the SIFT (scale-invariant feature transform) algorithm. The SIFT algorithm extracts a local feature for each facial feature point of the facial average model, selects an eye feature point or a lip feature point as a reference feature point, and searches the real-time facial image for feature points whose local features are the same as or similar to that of the reference feature point, for example by checking whether the difference between the local features of the two points falls within a preset range; if so, the point is taken as a facial feature point. This principle is applied until all facial feature points are found in the real-time facial image. In other embodiments, the feature extraction algorithm may also be the SURF (Speeded Up Robust Features) algorithm, the LBP (Local Binary Patterns) algorithm, the HOG (Histogram of Oriented Gradients) algorithm, or the like. The descriptor-matching idea is sketched below.
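As an illustration only (not the patent's own code), the descriptor comparison described above can be sketched with OpenCV's SIFT implementation: compute a descriptor at a reference landmark of the facial average model, then test candidate points in the live image against a distance threshold. The patch size, threshold value and function names are assumptions.

```python
import cv2
import numpy as np

def sift_descriptor(gray, x, y, patch_size=16.0):
    """Compute a single SIFT descriptor at location (x, y) of a grayscale image."""
    sift = cv2.SIFT_create()
    keypoint = cv2.KeyPoint(float(x), float(y), patch_size)
    _, desc = sift.compute(gray, [keypoint])
    return desc[0]

def match_landmark(mean_gray, mean_pt, live_gray, candidates, max_dist=250.0):
    """Return the candidate point in the live image whose descriptor is closest
    to the reference landmark's descriptor, provided the distance stays below max_dist."""
    ref = sift_descriptor(mean_gray, *mean_pt)
    best, best_dist = None, max_dist
    for (x, y) in candidates:
        dist = np.linalg.norm(sift_descriptor(live_gray, x, y) - ref)
        if dist < best_dist:
            best, best_dist = (x, y), dist
    return best
```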
The feature extraction module 130 is configured to determine, according to the positions of the t facial feature points, the feature region in the real-time facial image that matches each AU, to extract local features from the feature regions, and to generate a plurality of feature vectors. In one embodiment, according to the Facial Action Coding System (FACS) summarized by Paul Ekman, humans have a total of 39 main facial action units. Each AU encodes the contraction of a small group of facial muscles; for example, AU1 raises the inner brow, AU2 raises the outer brow, AU9 wrinkles the nose, and AU22 tightens the lips and funnels them outward. For AU1 and AU2, the matching feature region, namely the eyebrows, must be determined: based on the 76 facial feature points identified by the recognition module 120 from the real-time facial image, the feature extraction module 130 determines the forehead, eyebrow and eye regions of the real-time facial image as the feature region matching AU1 and AU2, extracts the HOG features of the inner and outer brow corners from that region, and forms the feature vectors V1 and V2 of the AU1 and AU2 feature regions, respectively. For AU9 and AU22, the matching feature region, namely the nose and lips, must be determined: the feature extraction module 130 determines the nose and lip regions of the real-time facial image as the feature region matching AU9 and AU22, extracts HOG features from the nose and lip regions, and forms the feature vectors V9 and V22 of the AU9 and AU22 feature regions, respectively. A sketch of such region cropping and HOG extraction follows.
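Since the patent does not specify which of its 76 landmarks bound each region, the sketch below uses placeholder index sets and a fixed crop size; only the general crop-then-HOG idea is taken from the text.

```python
import cv2
import numpy as np
from skimage.feature import hog

# Placeholder landmark index sets; the real 76-point layout is not given here.
AU_REGION_LANDMARKS = {
    "AU1": range(17, 48),   # forehead / brows / eyes (assumed indices)
    "AU2": range(17, 48),
    "AU9": range(27, 36),   # nose (assumed indices)
    "AU22": range(48, 68),  # lips (assumed indices)
}

def au_feature_vector(gray_face, landmarks, au, margin=10, size=(64, 64)):
    """Crop the landmark-bounded region for one AU and return its HOG feature vector."""
    pts = np.array([landmarks[i] for i in AU_REGION_LANDMARKS[au]], dtype=int)
    x0, y0 = np.maximum(pts.min(axis=0) - margin, 0)
    x1, y1 = pts.max(axis=0) + margin
    region = cv2.resize(gray_face[y0:y1, x0:x1], size)
    return hog(region, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)
```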
The prediction module 140 is configured to input the plurality of feature vectors into the pre-trained AU classifiers matched to the feature regions, respectively, to obtain prediction results indicating that the corresponding AU features are recognized from the feature regions. There are 39 pre-trained AU classifiers, one for each AU. The prediction module inputs the feature vectors V1, V2, V9 and V22 into the AU classifiers of AU1, AU2, AU9 and AU22, respectively, and the classifiers output the probabilities of recognizing AU1, AU2, AU9 and AU22 from the corresponding feature regions.
The determining module 150 is configured to determine whether the probability, in the prediction result, of recognizing the corresponding AU feature from a feature region is greater than a preset threshold. Suppose the probabilities with which the AU classifiers recognize AU1, AU2, AU9 and AU22 from the current real-time facial image are 0.45, 0.51, 0.60 and 0.65, respectively, and the preset threshold is 0.50; the determining module 150 compares the probability of recognizing each AU feature from the real-time facial image with the preset threshold (0.50).
The prompting module 160 is configured to prompt, if the prediction result contains an AU feature whose probability of being recognized from its feature region is greater than the preset threshold, that the AU feature has been recognized from the real-time facial image. The probability of recognizing AU1 from the current real-time facial image is below the preset threshold, while the probabilities of recognizing AU2, AU9 and AU22 are above it, so the prompting module 160 prompts that AU2, AU9 and AU22 have been recognized from the current real-time facial image. A sketch of this prediction-and-threshold step follows.
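A minimal sketch of the prediction and thresholding just described, assuming one probability-calibrated scikit-learn SVM per AU (see the training sketch further below); the dictionary keys and the 0.50 threshold mirror the example in the text but are otherwise illustrative.

```python
def predict_aus(feature_vectors, au_classifiers, threshold=0.50):
    """feature_vectors and au_classifiers are dicts keyed by AU id, e.g. "AU1".
    Returns the AUs whose predicted probability exceeds the preset threshold."""
    recognized = []
    for au, vec in feature_vectors.items():
        prob = au_classifiers[au].predict_proba([vec])[0][1]  # P(AU present)
        if prob > threshold:
            recognized.append((au, prob))
    return recognized
```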
The electronic device 1 proposed in this embodiment extracts from the real-time image the feature region matching each AU and recognizes the corresponding AU feature from each feature region, which improves the accuracy of AU feature recognition.
In addition, the present invention also provides an AU feature recognition method. Referring to FIG. 3, a flowchart of a first embodiment of the AU feature recognition method of the present invention is shown. The method may be performed by an apparatus, and the apparatus may be implemented in software and/or hardware.
In this embodiment, the AU feature recognition method includes steps S10 to S40.
Step S10: acquire a real-time image captured by the camera device and extract a real-time facial image from the real-time image with a face recognition algorithm. When the camera device 13 captures a real-time image, it sends the image to the processor 12. After the processor 12 receives the real-time image, the size of the picture is first obtained and a grayscale image of the same size is created; the acquired color image is converted into a grayscale image and a memory space is allocated; the histogram of the grayscale image is equalized to reduce the amount of grayscale information and speed up detection; the training library is then loaded, the face in the picture is detected and an object containing the face information is returned; the data on where the face is located is obtained and the number of faces is recorded; finally the face region is obtained and saved, completing one round of real-time facial image extraction.
Specifically, the face recognition algorithm for extracting the real-time facial image from the real-time image may also be: a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, or the like.
Step S20: input the real-time facial image into the pre-trained facial average model and use the facial average model to identify t facial feature points in the real-time facial image.
The facial average model is obtained as follows:
A first sample library containing n face images is built, and 76 feature points are manually marked at the positions of the eyes, eyebrows, nose, mouth and facial contour in each face image. The 76 feature points of each face image form a shape feature vector S, giving n facial shape feature vectors S.
The facial feature recognition model is trained with the t facial feature points to obtain the facial average model. The facial feature recognition model is the Ensemble of Regression Trees (ERT) algorithm, expressed by the following formula:
$S^{(t+1)} = S^{(t)} + \tau_t(I, S^{(t)})$
where t denotes the cascade index and $\tau_t(\cdot,\cdot)$ denotes the regressor of the current stage. Each regressor is composed of many regression trees, and the purpose of training is to obtain these regression trees. $S^{(t)}$ is the shape estimate of the current model; each regressor $\tau_t(\cdot,\cdot)$ predicts an increment $\tau_t(I, S^{(t)})$ from the input image I and $S^{(t)}$, and this increment is added to the current shape estimate to refine the model. Each stage of regression makes its prediction from the feature points. The training data set is (I1, S1), ..., (In, Sn), where I is an input sample image and S is the shape feature vector formed by the feature points of that sample image.
During model training, the number of face images in the first sample library is n, and each sample picture is assumed to have 76 feature points, so its shape feature vector is formed from the coordinates of those 76 points, $S = (x_1, y_1, \ldots, x_{76}, y_{76})$. A subset of the feature points of all sample pictures is taken (for example, 50 of the 76 feature points of each sample picture, chosen at random) to train the first regression tree; the residual between the predictions of the first regression tree and the true values of those feature points (the weighted mean of the 50 feature points taken from each sample picture) is used to train the second tree, and so on, until the predictions of the Nth tree are close to the true values of those feature points (residual close to 0). All regression trees of the ERT algorithm are thus obtained, the facial average model (mean shape) is derived from these regression trees, and the model file and sample library are saved to the memory. A rough illustration of this residual-fitting idea is sketched below.
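The patent's own training code is not given. As a rough, generic illustration of fitting successive regression trees to residuals (the boosting idea behind the ERT cascade), here is a scikit-learn sketch; it deliberately omits the shape-indexed pixel features and cascade structure of the full ERT algorithm, and all parameter values are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_residual_trees(X, y, n_trees=10, learning_rate=0.1, max_depth=4):
    """Fit a sequence of regression trees, each trained on the residuals left by
    the previous ones. X: (n_samples, n_features); y: target landmark offsets."""
    prediction = np.zeros_like(y, dtype=float)
    trees = []
    for _ in range(n_trees):
        residual = y - prediction                        # what is still unexplained
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residual)
        prediction += learning_rate * tree.predict(X)
        trees.append(tree)
    return trees

def predict_with_trees(trees, X, learning_rate=0.1):
    """Sum the (scaled) contributions of all trained trees."""
    return learning_rate * sum(tree.predict(X) for tree in trees)
```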
After the real-time facial image has been extracted, the trained facial average model of facial feature points is called from the memory, the real-time facial image is aligned with the facial average model, and a feature extraction algorithm is then used to search the real-time facial image for 76 facial feature points that match the 76 facial feature points of the facial average model.
The feature extraction algorithm may be the SIFT algorithm, the SURF algorithm, the LBP algorithm, the HOG algorithm, or the like.
Step S30: determine, according to the positions of the t facial feature points, the feature region in the real-time facial image that matches each AU, extract local features from the feature regions, and generate a plurality of feature vectors.
For example, AU1 raises the inner brow, AU2 raises the outer brow, AU9 wrinkles the nose, and AU22 tightens the lips and funnels them outward. For AU1 and AU2, the matching feature region, namely the eyebrows, must be determined: based on the 76 facial feature points identified from the real-time facial image, the forehead, eyebrow and eye regions of the real-time facial image are taken as the feature region matching AU1 and AU2, the HOG features of the inner and outer brow corners are extracted from that region, and the feature vectors V1 and V2 of the AU1 and AU2 feature regions are formed. For AU9 and AU22, the matching feature region, namely the nose and lips, must be determined: the nose and lip regions of the real-time facial image are taken as the feature region matching AU9 and AU22, HOG features are extracted from the nose and lip regions, and the feature vectors V9 and V22 of the AU9 and AU22 feature regions are formed.
Step S40: input the plurality of feature vectors into the pre-trained AU classifiers matched to the feature regions, respectively, to obtain prediction results indicating that the corresponding AU features are recognized from the feature regions.
There are 39 pre-trained AU classifiers, corresponding to AU1, AU2, AU3, ..., AU39; they are obtained as follows:
In the first sample library described above, the image region matching each AU (that is, the face image region containing that AU) is cropped from each face sample image as a positive sample image of that AU, and negative sample images are prepared for each AU, yielding positive and negative sample images for every AU. The image regions corresponding to different AUs may be the same; for example, AU1, AU2 and AU4 all involve the region of the face image containing the eyebrows, eyes and forehead, while AU9 and AU22 involve the nose and lip regions. Any image region that does not contain an AU can serve as a negative sample image of that AU. The positive and negative sample images of each AU are normalized to the same size. Local features, such as HOG features, are extracted from each AU's positive and negative sample images and saved as feature vectors; the local features of each AU's positive/negative sample images are then used to train a Support Vector Machine (SVM) classifier, yielding one classifier per AU. A minimal training sketch is given below.
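A minimal sketch of the classifier training step, assuming scikit-learn's SVC stands in for the support vector classifier mentioned in the text; probability calibration is enabled so that the prediction step can compare against the 0.50 threshold. Function and variable names are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def train_au_classifier(positive_vectors, negative_vectors):
    """Train one AU classifier from the HOG feature vectors of its positive and
    negative sample images."""
    X = np.vstack([positive_vectors, negative_vectors])
    y = np.concatenate([np.ones(len(positive_vectors)),
                        np.zeros(len(negative_vectors))])
    clf = SVC(kernel="linear", probability=True)
    return clf.fit(X, y)

# Usage sketch: one classifier per AU, keyed like the prediction sketch above.
# au_classifiers = {au: train_au_classifier(pos[au], neg[au]) for au in pos}
```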
The feature vectors V1, V2, V9 and V22 are input into the AU classifiers of AU1, AU2, AU9 and AU22, respectively, and the classifiers output the probabilities of recognizing AU1, AU2, AU9 and AU22 from the corresponding feature regions.
The AU feature recognition method proposed in this embodiment crops from the real-time image the feature region matching each AU and uses the corresponding AU classifier to determine the probability of recognizing that AU feature from the region. Recognizing the AU features in the feature regions of the real-time facial image with different AU classifiers effectively improves the efficiency of AU feature recognition.
A second embodiment of the AU feature recognition method is proposed on the basis of the first embodiment. Referring to FIG. 4, a flowchart of the second embodiment of the AU feature recognition method of the present invention is shown. The method may be performed by an apparatus, and the apparatus may be implemented in software and/or hardware.
In this embodiment, the AU feature recognition method includes steps S10 to S70, of which steps S10 to S40 are substantially the same as in the first embodiment and are not repeated here.
Step S50: determine whether the probability of each AU feature in the prediction result is greater than a preset threshold.
Suppose the probabilities with which the AU classifiers recognize AU1, AU2, AU9 and AU22 from the current real-time facial image are 0.45, 0.51, 0.60 and 0.65, respectively, and the preset threshold is 0.50; the probability of recognizing each AU feature is compared with the preset threshold.
Step S60: when it is determined that the prediction result contains an AU feature with a probability greater than the preset threshold, prompt that the AU feature has been recognized from the real-time facial image. The probability of recognizing AU1 from the current real-time facial image is below the preset threshold, while the probabilities of recognizing AU2, AU9 and AU22 are above it, so it is determined that AU2, AU9 and AU22 have been recognized from the current real-time facial image and AU1 has not.
Compared with the first embodiment, the AU feature recognition method proposed in this embodiment crops from the real-time image the feature region matching each AU, uses the corresponding AU classifier to determine the probability of recognizing the AU feature from that region, recognizes the AU features in the feature regions of the real-time facial image with different AU classifiers, and sets a threshold to filter the probabilities with which the AU classifiers recognize the corresponding AUs, which effectively improves the accuracy of AU feature recognition.
In addition, an embodiment of the present invention further proposes a computer readable storage medium that contains an AU feature recognition program; when the AU feature recognition program is executed by a processor, the following operations are implemented:
a real-time image capture step: acquiring a real-time image captured by the camera device, and extracting a real-time facial image from the real-time image with a face recognition algorithm;
a facial feature point recognition step: inputting the real-time facial image into a pre-trained facial average model, and using the facial average model to identify t facial feature points in the real-time facial image;
a local feature extraction step: determining, according to the positions of the t facial feature points, the feature region in the real-time facial image that matches each AU, extracting local features from the feature regions, and generating a plurality of feature vectors; and
an AU feature prediction step: inputting the feature vectors into pre-trained AU classifiers matched to the feature regions, respectively, to obtain prediction results indicating that the corresponding AU features are recognized from the feature regions.
Optionally, when the AU feature recognition program is executed by the processor, the following step is further implemented:
a determining step: determining whether the probability of each AU feature in the prediction result is greater than a preset threshold.
Optionally, the determining step further includes:
a prompting step: when it is determined that the prediction result contains an AU feature with a probability greater than the preset threshold, prompting that the AU feature has been recognized from the real-time facial image.
The specific implementation of the computer readable storage medium of the present invention is substantially the same as that of the AU feature recognition method described above and is not repeated here.
It should be noted that, as used herein, the terms "comprise", "include" and any variant thereof are intended to cover a non-exclusive inclusion, so that a process, device, article or method that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, device, article or method. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, device, article or method that includes that element.
The serial numbers of the above embodiments of the present invention are for description only and do not indicate the relative merits of the embodiments. From the description of the above embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk or an optical disc) and includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device or the like) to perform the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and do not limit the scope of its patent; any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, and any direct or indirect application in other related technical fields, are likewise included within the scope of patent protection of the present invention.

Claims (20)

  1. An electronic device, comprising a memory, a processor and a camera device, wherein the memory stores an action unit (AU) feature recognition program which, when executed by the processor, implements the following steps:
    a real-time image capture step: acquiring a real-time image captured by the camera device, and extracting a real-time facial image from the real-time image with a face recognition algorithm;
    a facial feature point recognition step: inputting the real-time facial image into a pre-trained facial average model, and using the facial average model to identify t facial feature points in the real-time facial image;
    a local feature extraction step: determining, according to the positions of the t facial feature points, the feature region in the real-time facial image that matches each AU, extracting local features from the feature regions, and generating a plurality of feature vectors; and
    an AU feature prediction step: inputting the feature vectors into pre-trained AU classifiers matched to the feature regions, respectively, to obtain prediction results indicating that the corresponding AU features are recognized from the feature regions.
  2. The electronic device according to claim 1, wherein the facial average model is obtained by training a facial feature recognition model, the facial feature recognition model being the ERT algorithm, expressed by the following formula:
    $S^{(t+1)} = S^{(t)} + \tau_t(I, S^{(t)})$
    where t denotes the cascade index, $\tau_t(\cdot,\cdot)$ denotes the regressor of the current stage, and $S^{(t)}$ is the shape estimate of the current model; each regressor $\tau_t(\cdot,\cdot)$ predicts an increment $\tau_t(I, S^{(t)})$ from the input current image I and $S^{(t)}$, and this increment is added to the current shape estimate to refine the model; during model training, a subset of the feature points of all sample pictures is taken to train the first regression tree, the residual between the predictions of the first regression tree and the true values of those feature points is used to train the second tree, and so on, until the predictions of the Nth tree are close to the true values of those feature points (residual close to 0), whereby all regression trees of the ERT algorithm are obtained and the facial average model is derived from these regression trees.
  3. The electronic device according to claim 2, wherein, when the AU feature recognition program is executed by the processor, the following step is further implemented:
    a determining step: determining whether the probability of each AU feature in the prediction result is greater than a preset threshold.
  4. The electronic device according to claim 3, wherein the determining step further includes:
    a prompting step: when it is determined that the prediction result contains an AU feature with a probability greater than the preset threshold, prompting that the AU feature has been recognized from the real-time facial image.
  5. The electronic device according to claim 1, wherein the training of the predetermined AU classifiers includes:
    a sample preparation step: collecting face sample images, cropping from the face sample images the image region matched to each AU as the positive sample image of that AU, and preparing negative sample images for each AU;
    a local feature extraction step: extracting local features of the positive and negative sample images of each AU and generating corresponding feature vectors; and
    a model training step: training a support vector classifier with the local features of the positive/negative sample images of each AU to obtain the corresponding AU classifier.
  6. The electronic device according to claim 5, wherein, when the AU feature recognition program is executed by the processor, the following step is further implemented:
    a determining step: determining whether the probability of each AU feature in the prediction result is greater than a preset threshold.
  7. The electronic device according to claim 6, wherein the determining step further includes:
    a prompting step: when it is determined that the prediction result contains an AU feature with a probability greater than the preset threshold, prompting that the AU feature has been recognized from the real-time facial image.
  8. The electronic device according to claim 1, wherein the face recognition algorithm includes: geometric-feature-based methods, local feature analysis methods, eigenface methods, elastic-model-based methods and neural network methods.
  9. An AU feature recognition method, comprising:
    a real-time image capture step: acquiring a real-time image captured by a camera device, and extracting a real-time facial image from the real-time image with a face recognition algorithm;
    a facial feature point recognition step: inputting the real-time facial image into a pre-trained facial average model, and using the facial average model to identify t facial feature points in the real-time facial image;
    a local feature extraction step: determining, according to the positions of the t facial feature points, the feature region in the real-time facial image that matches each AU, extracting local features from the feature regions, and generating a plurality of feature vectors; and
    an AU feature prediction step: inputting the feature vectors into pre-trained AU classifiers matched to the feature regions, respectively, to obtain prediction results indicating that the corresponding AU features are recognized from the feature regions.
  10. The AU feature recognition method according to claim 9, wherein the facial average model is obtained by training a facial feature recognition model, the facial feature recognition model being the ERT algorithm, expressed by the following formula:
    $S^{(t+1)} = S^{(t)} + \tau_t(I, S^{(t)})$
    where t denotes the cascade index, $\tau_t(\cdot,\cdot)$ denotes the regressor of the current stage, and $S^{(t)}$ is the shape estimate of the current model; each regressor $\tau_t(\cdot,\cdot)$ predicts an increment $\tau_t(I, S^{(t)})$ from the input current image I and $S^{(t)}$, and this increment is added to the current shape estimate to refine the model; during model training, a subset of the feature points of all sample pictures is taken to train the first regression tree, the residual between the predictions of the first regression tree and the true values of those feature points is used to train the second tree, and so on, until the predictions of the Nth tree are close to the true values of those feature points (residual close to 0), whereby all regression trees of the ERT algorithm are obtained and the facial average model is derived from these regression trees.
  11. The AU feature recognition method according to claim 10, further comprising:
    a determining step: determining whether the probability of each AU feature in the prediction result is greater than a preset threshold.
  12. The AU feature recognition method according to claim 11, wherein the determining step further includes:
    a prompting step: when it is determined that the prediction result contains an AU feature with a probability greater than the preset threshold, prompting that the AU feature has been recognized from the real-time facial image.
  13. The AU feature recognition method according to claim 9, wherein the training of the AU classifiers includes:
    a sample preparation step: collecting face sample images, cropping from the face sample images the image region matched to each AU as the positive sample image of that AU, and preparing negative sample images for each AU;
    a local feature extraction step: extracting local features of the positive and negative sample images of each AU and generating corresponding feature vectors; and
    a model training step: training a support vector classifier with the local features of the positive/negative sample images of each AU to obtain the corresponding AU classifier.
  14. The AU feature recognition method according to claim 13, further comprising:
    a determining step: determining whether the probability of each AU feature in the prediction result is greater than a preset threshold.
  15. The AU feature recognition method according to claim 14, wherein the determining step further includes:
    a prompting step: when it is determined that the prediction result contains an AU feature with a probability greater than the preset threshold, prompting that the AU feature has been recognized from the real-time facial image.
  16. A computer readable storage medium storing an AU feature recognition program which, when executed by a processor, implements the following steps:
    a real-time image capture step: acquiring a real-time image captured by a camera device, and extracting a real-time facial image from the real-time image with a face recognition algorithm;
    a facial feature point recognition step: inputting the real-time facial image into a pre-trained facial average model, and using the facial average model to identify t facial feature points in the real-time facial image;
    a local feature extraction step: determining, according to the positions of the t facial feature points, the feature region in the real-time facial image that matches each AU, extracting local features from the feature regions, and generating a plurality of feature vectors; and
    an AU feature prediction step: inputting the feature vectors into pre-trained AU classifiers matched to the feature regions, respectively, to obtain prediction results indicating that the corresponding AU features are recognized from the feature regions.
  17. The computer readable storage medium according to claim 16, wherein the facial average model is obtained by training a facial feature recognition model, the facial feature recognition model being the ERT algorithm, expressed by the following formula:
    $S^{(t+1)} = S^{(t)} + \tau_t(I, S^{(t)})$
    where t denotes the cascade index, $\tau_t(\cdot,\cdot)$ denotes the regressor of the current stage, and $S^{(t)}$ is the shape estimate of the current model; each regressor $\tau_t(\cdot,\cdot)$ predicts an increment $\tau_t(I, S^{(t)})$ from the input current image I and $S^{(t)}$, and this increment is added to the current shape estimate to refine the model; during model training, a subset of the feature points of all sample pictures is taken to train the first regression tree, the residual between the predictions of the first regression tree and the true values of those feature points is used to train the second tree, and so on, until the predictions of the Nth tree are close to the true values of those feature points (residual close to 0), whereby all regression trees of the ERT algorithm are obtained and the facial average model is derived from these regression trees.
  18. The computer readable storage medium according to claim 17, wherein, when the AU feature recognition program is executed by the processor, the following steps are further implemented:
    a determining step: determining whether the probability of each AU feature in the prediction result is greater than a preset threshold; and
    a prompting step: when it is determined that the prediction result contains an AU feature with a probability greater than the preset threshold, prompting that the AU feature has been recognized from the real-time facial image.
  19. The computer readable storage medium according to claim 16, wherein the training of the AU classifiers includes:
    a sample preparation step: collecting face sample images, cropping from the face sample images the image region matched to each AU as the positive sample image of that AU, and preparing negative sample images for each AU;
    a local feature extraction step: extracting local features of the positive and negative sample images of each AU and generating corresponding feature vectors; and
    a model training step: training a support vector classifier with the local features of the positive/negative sample images of each AU to obtain the corresponding AU classifier.
  20. The computer readable storage medium according to claim 19, wherein, when the AU feature recognition program is executed by the processor, the following steps are further implemented:
    a determining step: determining whether the probability of each AU feature in the prediction result is greater than a preset threshold; and
    a prompting step: when it is determined that the prediction result contains an AU feature with a probability greater than the preset threshold, prompting that the AU feature has been recognized from the real-time facial image.
PCT/CN2017/104819 2017-08-17 2017-09-30 AU feature recognition method and device, and storage medium WO2019033525A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/337,931 US10445562B2 (en) 2017-08-17 2017-09-30 AU feature recognition method and device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710709113.2A CN107633207B (zh) 2017-08-17 2017-08-17 AU feature recognition method and device, and storage medium
CN201710709113.2 2017-08-17

Publications (1)

Publication Number Publication Date
WO2019033525A1 true WO2019033525A1 (zh) 2019-02-21

Family

ID=61099709

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/104819 WO2019033525A1 (zh) 2017-08-17 2017-09-30 AU feature recognition method and device, and storage medium

Country Status (3)

Country Link
US (1) US10445562B2 (zh)
CN (1) CN107633207B (zh)
WO (1) WO2019033525A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110135279A (zh) * 2019-04-23 2019-08-16 深圳神目信息技术有限公司 一种基于人脸识别的预警方法、装置、设备及计算机可读介质
CN110222571A (zh) * 2019-05-06 2019-09-10 平安科技(深圳)有限公司 黑眼圈智能判断方法、装置及计算机可读存储介质

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107832735A (zh) * 2017-11-24 2018-03-23 百度在线网络技术(北京)有限公司 用于识别人脸的方法和装置
CN108564016A (zh) * 2018-04-04 2018-09-21 北京红云智胜科技有限公司 一种基于计算机视觉的au分类系统及方法
CN110728168B (zh) * 2018-07-17 2022-07-22 广州虎牙信息科技有限公司 部位识别方法、装置、设备及存储介质
CN109586950B (zh) * 2018-10-18 2022-08-16 锐捷网络股份有限公司 网络场景识别方法、网络管理设备、系统及存储介质
CN109493403A (zh) * 2018-11-13 2019-03-19 北京中科嘉宁科技有限公司 一种基于运动单元表情映射实现人脸动画的方法
CN109635844B (zh) * 2018-11-14 2021-08-27 网易传媒科技(北京)有限公司 训练分类器的方法及装置和水印检测方法及装置
CN109583431A (zh) * 2019-01-02 2019-04-05 上海极链网络科技有限公司 一种人脸情绪识别模型、方法及其电子装置
EP3910507A4 (en) * 2019-03-13 2022-01-26 Huawei Technologies Co., Ltd. METHOD AND DEVICE FOR WAKE-UP A SCREEN
CN110222572B (zh) * 2019-05-06 2024-04-09 平安科技(深圳)有限公司 跟踪方法、装置、电子设备及存储介质
CN112016368A (zh) * 2019-05-31 2020-12-01 沈阳新松机器人自动化股份有限公司 一种基于面部表情编码系统的表情识别方法、系统及电子设备
CN110428399B (zh) 2019-07-05 2022-06-14 百度在线网络技术(北京)有限公司 用于检测图像的方法、装置、设备和存储介质
CN110610131B (zh) * 2019-08-06 2024-04-09 平安科技(深圳)有限公司 人脸运动单元的检测方法、装置、电子设备及存储介质
US11244206B2 (en) * 2019-09-06 2022-02-08 Fujitsu Limited Image normalization for facial analysis
CN111104846B (zh) * 2019-10-16 2022-08-30 平安科技(深圳)有限公司 数据检测方法、装置、计算机设备和存储介质
CN110717928B (zh) * 2019-10-21 2022-03-18 网易(杭州)网络有限公司 人脸运动单元AUs的参数估计方法、装置和电子设备
CN110781828A (zh) * 2019-10-28 2020-02-11 北方工业大学 一种基于微表情的疲劳状态检测方法
CN112950529A (zh) * 2019-12-09 2021-06-11 丽宝大数据股份有限公司 脸部肌肉特征点自动标记方法
CN111191532B (zh) * 2019-12-18 2023-08-25 深圳供电局有限公司 基于施工区域的人脸识别方法、装置、计算机设备
CN111062333B (zh) * 2019-12-19 2024-01-05 北京海国华创云科技有限公司 活体的面部动态识别方法、系统、存储介质
CN113298747A (zh) * 2020-02-19 2021-08-24 北京沃东天骏信息技术有限公司 图片、视频检测方法和装置
CN111340013B (zh) * 2020-05-22 2020-09-01 腾讯科技(深圳)有限公司 人脸识别方法、装置、计算机设备及存储介质
CN111832573B (zh) * 2020-06-12 2022-04-15 桂林电子科技大学 一种基于类激活映射和视觉显著性的图像情感分类方法
CN111783677B (zh) * 2020-07-03 2023-12-01 北京字节跳动网络技术有限公司 人脸识别方法、装置、服务器和计算机可读介质
CN111860454B (zh) * 2020-08-04 2024-02-09 北京深醒科技有限公司 一种基于人脸识别的模型切换算法
CN112201343B (zh) * 2020-09-29 2024-02-02 浙江大学 基于脸部微表情的认知状态识别系统及方法
CN112990077B (zh) * 2021-04-02 2021-10-01 中国矿业大学 基于联合学习与光流估计的面部动作单元识别方法及装置
CN114022921B (zh) * 2021-09-13 2024-02-20 齐鲁工业大学 一种基于特征点和局部特征的面部表情分析方法
CN114842542B (zh) * 2022-05-31 2023-06-13 中国矿业大学 基于自适应注意力与时空关联的面部动作单元识别方法及装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101976450A (zh) * 2010-11-02 2011-02-16 北京航空航天大学 一种非对称人脸表情编码方法
CN103366153A (zh) * 2012-06-06 2013-10-23 北京科技大学 一种面部语义认知特征识别方法
CN104123545A (zh) * 2014-07-24 2014-10-29 江苏大学 一种实时表情特征提取及表情识别方法
CN104680141A (zh) * 2015-02-13 2015-06-03 华中师范大学 基于运动单元分层的人脸表情识别方法及系统

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8798374B2 (en) * 2008-08-26 2014-08-05 The Regents Of The University Of California Automated facial action coding system
US8311319B2 (en) * 2010-12-06 2012-11-13 Seiko Epson Corporation L1-optimized AAM alignment
KR102094723B1 (ko) * 2012-07-17 2020-04-14 삼성전자주식회사 견고한 얼굴 표정 인식을 위한 특징 기술자
CN103065122A (zh) * 2012-12-21 2013-04-24 西北工业大学 基于面部动作单元组合特征的人脸表情识别方法
CN103479367B (zh) * 2013-09-09 2016-07-20 广东工业大学 一种基于面部运动单元识别的驾驶员疲劳检测方法
US10169644B2 (en) * 2013-10-24 2019-01-01 Blue Line Security Solutions Llc Human facial detection and recognition system
US9361510B2 (en) * 2013-12-13 2016-06-07 Intel Corporation Efficient facial landmark tracking using online shape regression method
CN104376333A (zh) * 2014-09-25 2015-02-25 电子科技大学 基于随机森林的人脸表情识别方法
CN105989331B (zh) * 2015-02-11 2019-10-08 佳能株式会社 脸部特征提取装置、脸部特征提取方法、图像处理设备和图像处理方法
US9576190B2 (en) * 2015-03-18 2017-02-21 Snap Inc. Emotion recognition in video conferencing
CN104881660B (zh) * 2015-06-17 2018-01-09 吉林纪元时空动漫游戏科技集团股份有限公司 基于gpu加速的人脸表情识别及互动方法
US10043058B2 (en) * 2016-03-09 2018-08-07 International Business Machines Corporation Face detection, representation, and recognition
CN106295566B (zh) * 2016-08-10 2019-07-09 北京小米移动软件有限公司 人脸表情识别方法及装置
CN107007257B (zh) * 2017-03-17 2018-06-01 深圳大学 面部不自然度的自动评级方法和装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101976450A (zh) * 2010-11-02 2011-02-16 北京航空航天大学 一种非对称人脸表情编码方法
CN103366153A (zh) * 2012-06-06 2013-10-23 北京科技大学 一种面部语义认知特征识别方法
CN104123545A (zh) * 2014-07-24 2014-10-29 江苏大学 一种实时表情特征提取及表情识别方法
CN104680141A (zh) * 2015-02-13 2015-06-03 华中师范大学 基于运动单元分层的人脸表情识别方法及系统

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KAZEMI, VAHID ET AL.: "One Millisecond Face Alignment with an Ensemble of Regression Trees", CVPR 2014, 27 June 2014 (2014-06-27), pages 1867 - 1874, XP032649427 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110135279A (zh) * 2019-04-23 2019-08-16 深圳神目信息技术有限公司 一种基于人脸识别的预警方法、装置、设备及计算机可读介质
CN110222571A (zh) * 2019-05-06 2019-09-10 平安科技(深圳)有限公司 黑眼圈智能判断方法、装置及计算机可读存储介质
CN110222571B (zh) * 2019-05-06 2023-04-07 平安科技(深圳)有限公司 黑眼圈智能判断方法、装置及计算机可读存储介质

Also Published As

Publication number Publication date
US20190228211A1 (en) 2019-07-25
CN107633207A (zh) 2018-01-26
US10445562B2 (en) 2019-10-15
CN107633207B (zh) 2018-10-12

Similar Documents

Publication Publication Date Title
WO2019033525A1 (zh) Au特征识别方法、装置及存储介质
WO2019095571A1 (zh) 人物情绪分析方法、装置及存储介质
WO2019033572A1 (zh) 人脸遮挡检测方法、装置及存储介质
WO2019033569A1 (zh) 眼球动作分析方法、装置及存储介质
WO2019033571A1 (zh) 面部特征点检测方法、装置及存储介质
WO2019033573A1 (zh) 面部情绪识别方法、装置及存储介质
KR102174595B1 (ko) 비제약형 매체에 있어서 얼굴을 식별하는 시스템 및 방법
WO2020098074A1 (zh) 人脸样本图片标注方法、装置、计算机设备及存储介质
US9613296B1 (en) Selecting a set of exemplar images for use in an automated image object recognition system
WO2019033570A1 (zh) 嘴唇动作分析方法、装置及存储介质
WO2019033568A1 (zh) 嘴唇动作捕捉方法、装置及存储介质
JP2011511977A (ja) デジタル画像における顔の表情の検出
CN109376604B (zh) 一种基于人体姿态的年龄识别方法和装置
CN111626371A (zh) 一种图像分类方法、装置、设备及可读存储介质
Kalas Real time face detection and tracking using OpenCV
Anand et al. An improved local binary patterns histograms techniques for face recognition for real time application
JP2010108494A (ja) 画像内の顔の特性を判断する方法及びシステム
Lahiani et al. Hand pose estimation system based on Viola-Jones algorithm for android devices
CN114902299A (zh) 图像中关联对象的检测方法、装置、设备和存储介质
Shanthi et al. Algorithms for face recognition drones
CN110610131B (zh) 人脸运动单元的检测方法、装置、电子设备及存储介质
EP3200092A1 (en) Method and terminal for implementing image sequencing
KR100847142B1 (ko) 얼굴 인식을 위한 전처리 방법, 이를 이용한 얼굴 인식방법 및 장치
WO2015102711A2 (en) A method and system of enforcing privacy policies for mobile sensory devices
CN113255557A (zh) 一种基于深度学习的视频人群情绪分析方法及系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17921570

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 23.09.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17921570

Country of ref document: EP

Kind code of ref document: A1