CN111178195A - Facial expression recognition method and device and computer readable storage medium - Google Patents

Facial expression recognition method and device and computer readable storage medium

Info

Publication number
CN111178195A
CN111178195A
Authority
CN
China
Prior art keywords
expression
face
facial expression
feature vector
key points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911315086.6A
Other languages
Chinese (zh)
Inventor
熊军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Life Insurance Company of China Ltd
Original Assignee
Ping An Life Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Life Insurance Company of China Ltd filed Critical Ping An Life Insurance Company of China Ltd
Priority to CN201911315086.6A priority Critical patent/CN111178195A/en
Publication of CN111178195A publication Critical patent/CN111178195A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation

Abstract

The invention provides a facial expression recognition method, a facial expression recognition device, and a computer readable storage medium. The method comprises: collecting facial expression images and numbering the collected images; preprocessing and normalizing the collected images and arranging them by number to obtain an image group; performing face detection and face key point extraction on the resulting image group, and constructing a feature vector set for each sample from the extracted key points; and feeding the feature vector set corresponding to the face key points into an SVM machine learning algorithm for expression classification and recognition. The method can quickly capture micro-expression changes and classify expressions without extensive manual labeling; it is simple to implement, highly accurate, and improves the reliability and efficiency of recognition.

Description

Facial expression recognition method and device and computer readable storage medium
Technical Field
The invention relates to the technical field of computer image processing, in particular to a facial expression recognition method and device and a computer readable storage medium.
Background
Facial expression recognition is an important component of computer vision research and biometric recognition. It studies how a computer can automatically and efficiently recognize the emotion conveyed by a human face from an image or video transmitted to it, and is widely applied in fields such as intelligent human-machine interaction, games and entertainment, and public safety. Facial expression recognition mainly comprises image acquisition, image preprocessing, feature extraction, and classification, which together form the recognition pipeline. Of these, feature extraction and classification are particularly critical.
Expression feature extraction acquires both the overall information and the fine local information of an expression from an image containing a facial expression, and expresses the corresponding emotional state from them. Current expression feature extraction algorithms fall into the following categories:
(1) Shape-feature-based extraction acquires emotional-state feature information by marking the geometric relations of facial expression feature points. The eyebrows, eyes, nose, and mouth carry rich expression states, and these organs deform as different expressions appear. They can be described by key points and then used for feature extraction; the features mainly comprise positions, scales, ratios between organs, and the like, and represent an expressive face as a group of vectors. Such features require little memory, but the extracted feature points must be very accurate, a large amount of manually labeled data is needed, and training and prediction are time-consuming.
(2) Texture-feature-based extraction acquires the intrinsic information containing the emotional state in an expression image, yielding expression features that describe global or local changes of the face. It is simple and fast to compute and rich in feature information, but is easily affected by factors such as illumination and noise; methods such as local binary patterns and Gabor wavelets suffer greatly reduced expression recognition accuracy under these conditions.
Based on the defects of the existing methods, it is necessary to provide a new facial expression recognition method to meet the requirements of efficient and accurate facial expression recognition.
Disclosure of Invention
The invention provides a facial expression recognition method, with the main aim of offering a simple, high-precision alternative to existing facial expression recognition methods.
In order to achieve the above object, the present invention provides a facial expression recognition method, including:
step S1: collecting facial expression images, and numbering the collected images respectively;
step S2: preprocessing and standardizing the facial expression images collected in the step S1, and arranging the facial expression images according to the numbers to obtain an image group;
step S3: carrying out face detection and face key point extraction according to the image group obtained by processing, and constructing a feature vector set of the sample according to the extracted key points;
step S4: and putting the feature vector set corresponding to the face key points into a machine learning algorithm SVM for expression classification and recognition.
Preferably, in step S3, the face key point extracting method is to respectively obtain 68 face key points of each image based on the face detection tool dlib.
Preferably, the step S3 includes:
step S31: carrying out face detection on the preprocessed image through image pyramid and sliding window detection, and positioning the position of a face;
step S32: obtaining 68 face key points through a face detection tool dlib;
step S33: and constructing a feature vector set of the sample by using 68 face key points.
Preferably, the step S33 specifically includes: computing a nominal point as the mean of the x- and y-coordinates of the 68 key points; the nominal point and the 68 key points form 68 vectors with direction and magnitude, and the 68 vectors are used as the expression feature vector set of the sample.
Preferably, the facial expression recognition method further includes the step of constructing a feature matrix A = [A1, A2, …, Ak] ∈ R^(m×n) from the expression feature vector sets of the obtained samples, wherein m is the feature dimension, n is the total number of samples, and A1, A2, …, Ak are the k expression feature vector sets of the samples.
Preferably, step S4 includes:
step S41: respectively inputting the feature vector sets of the facial expressions obtained in the step S3 into training samples, and setting parameters of SVM kernel functions after circulating for N times;
step S42: setting training sample weights, selecting different i values to calculate the training sample weights, and training an SVM weak classifier by inputting parameters of an SVM kernel function;
step S43: correcting the training result calculated by the SVM weak classifier, resetting the weight, circulating again, dynamically adjusting the penalty factor and the kernel function parameter of the support vector machine by using an improved artificial bee colony optimization algorithm, outputting an optimal parameter, and establishing an expression classification model by using the optimal parameter;
step S44: and inputting the feature vector set corresponding to the face key points into the expression classification model to realize expression recognition.
Preferably, step S42 sets the training sample weights as W_i^1 = 1/N for i = 1, …, N, initializing each sample weight to 1/N, and trains an SVM weak classifier based on the SVM kernel function parameters using these sample weights.
Preferably, the step S2 includes graying and histogram equalizing the whole facial expression image to obtain an image group.
In addition, to achieve the above object, the present invention further provides a facial expression recognition apparatus, including a memory and a processor, where the memory stores a facial recognition program operable on the processor and an SVM classifier retrievable by the processor, and the facial recognition program, when executed by the processor, implements the following steps:
step S1: collecting facial expression images, and numbering the collected images respectively;
step S2: preprocessing and standardizing the facial expression images collected in the step S1, and arranging the facial expression images according to the numbers to obtain an image group;
step S3: carrying out face detection and face key point extraction according to the image group obtained by processing, and constructing a feature vector set of the sample according to the extracted key points;
step S4: and putting the feature vector set corresponding to the face key points into a machine learning algorithm SVM for expression classification and recognition.
In addition, to achieve the above object, the present invention also provides a computer readable storage medium, on which a face recognition program is stored, the face recognition program being executable by one or more processors to implement the steps of the above facial expression recognition method.
In the facial expression recognition method, device, and computer readable storage medium provided by the invention, 68 face key points are obtained with the face detection tool dlib, and features such as the mouth opening distance, eye opening distance, eyebrow inclination angle, width between the eyebrows, and height between the eyebrows are extracted from the key point positions to form facial expression feature vector sets, which are fed into an SVM machine learning algorithm for expression classification.
The invention has the following beneficial effects: extracting face key points from the collected image group and constructing a feature vector set for each sample facilitates extraction of the fine local information of an expression, while combining the vectors integrates the overall information of the expression. These features represent an expressive face as a group of vectors; the method is simple and highly reliable.
Drawings
Fig. 1 is a schematic flow chart of a facial expression recognition method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an internal structure of a facial expression recognition apparatus according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention provides a facial expression recognition method. The method obtains 68 face key points with the face detection tool dlib and extracts features such as the mouth opening distance, eye opening distance, eyebrow inclination angle, width between the eyebrows, and height between the eyebrows from the key point positions to form a facial expression feature vector set. Feature values are refined through specific training samples and the set training sample weights, and an SVM is used as a weak classifier to improve detection accuracy. By selecting appropriate SVM kernel function parameters, the trained SVM weak classifier can quickly capture micro-expression changes and classify expressions during detection. Referring to fig. 1, the facial expression recognition method of the present invention includes:
step S1: collecting facial expression images, acquiring the continuously output facial expression images within a preset interval time, and numbering and storing the collected pictures respectively, wherein the numbers of the pictures are 1-n respectively;
step S2: preprocessing and standardizing the collected facial expression images, and arranging the facial expression images according to the serial numbers to obtain image groups N1-Nn;
step S3: carrying out face detection and face key point extraction according to the image obtained by preprocessing, and constructing a feature vector set A1-An of the sample according to the extracted key points;
further, step S3 includes the following steps:
step S31: carrying out face detection on the preprocessed image through image pyramid and sliding window detection, and positioning the position of a face;
step S32: obtaining 68 face key points through the face detection tool dlib, and computing a nominal point as the mean of the x- and y-coordinates of the 68 key points; the nominal point and the 68 key points form 68 vectors with direction and magnitude.
Step S33: taking the 68 vectors as the expression feature vector set of the sample, and constructing a feature matrix A = [A1, A2, …, Ak] ∈ R^(m×n) from the expression feature vector sets of the obtained samples, wherein m is the feature dimension, n is the total number of samples, and A1, A2, …, Ak are the k expression feature vector sets of the samples.
Step S4: and putting the feature vector set corresponding to the face key points into a machine learning algorithm SVM for expression classification and recognition. The method specifically comprises the following steps:
step S41: respectively inputting the feature vector sets of the facial expressions obtained in the step S3 into training samples, and setting parameters of SVM kernel functions after circulating for N times;
step S42: setting the training sample weights, selecting different values of i to calculate them, and training an SVM weak classifier with the input SVM kernel function parameters. In this step, the training sample weights are set as W_i^1 = 1/N for i = 1, …, N, initializing each sample weight to 1/N, and an SVM weak classifier is trained based on the SVM kernel function parameters using these sample weights.
Step S43: correcting the training result calculated by the SVM weak classifier, resetting the weight, circulating again, dynamically adjusting the penalty factor and the kernel function parameter of the support vector machine by using an improved artificial bee colony optimization algorithm, outputting an optimal parameter, and establishing an expression classification model by using the optimal parameter;
step S44: inputting the face key point feature vector sets corresponding to the continuous facial images processed in step S2 into the trained classifier, i.e. the expression classification model, to realize expression recognition.
The following description is given with reference to specific examples:
the facial expression recognition method of the embodiment comprises the following steps:
1.1 Image acquisition and image normalization for preprocessing
Collecting facial expression images, acquiring the continuously output facial expression images within a preset interval time, and numbering and storing the collected images respectively; specifically, by using a portrait capturing technology, a designated portrait is automatically tracked when moving within the shooting range of a camera, and is shot and stored for further processing.
The preprocessing step normalizes the images: all faces containing significant facial features are adjusted to a uniform size, reference points are selected in the images of each research object, the reference points at the same position of the same research object are matched across consecutive intervals, and the images are geometrically normalized.
Further, still include: graying and histogram equalization are carried out on the whole human face expression image so as to eliminate the influence of illumination noise factors on human face detection and human face key point detection and improve the image quality.
1.2 Attribute extraction and face representation
68 face key points are obtained based on the face detection tool dlib. The face special triangle is an important experimental research attribute: important facial reference points are selected, and adjacent reference points are connected to form triangles containing the important facial feature parts; such a triangle is called a face special triangle.
Specifically, the face detection tool dlib locates the face position in the face special triangle manner and extracts the relevant key points, obtaining 68 face key points. For facial images continuously output over time, features are extracted in output order, establishing the association between the images.
1.2.1 geometric Properties
Normalized coordinates (x, y) of the 68 extracted points are defined. A nominal point is computed as the mean of the x- and y-coordinates of the 68 points; the nominal point and the 68 key points form 68 vectors with direction and magnitude, and the 68 vectors are used as the expression feature vector set of the sample. A feature matrix A = [A1, A2, …, Ak] ∈ R^(m×n) is constructed from the feature vector sets of the obtained samples, wherein m is the feature dimension and n is the total number of samples. The two reference coordinates used for normalization between different images within the image group have the same position for all images of the same type.
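The nominal-point construction above can be sketched in a few lines of plain Python (a minimal sketch of the described scheme, not the patent's own code):

```python
def expression_vectors(points):
    """Build the expression feature vector set from landmark points.

    The nominal point is the mean of all x- and all y-coordinates; each
    feature vector runs from the nominal point to one landmark, so 68
    landmarks yield 68 vectors with direction and magnitude.
    """
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    nominal = (cx, cy)
    vectors = [(x - cx, y - cy) for x, y in points]
    return nominal, vectors
```

For a square of four landmarks, for example, the nominal point is the centre of the square and the four vectors point outward to the corners.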
1.2.2 obtaining Overall Properties for 68 keypoints
Overall attribute extraction is performed on the 68 key points by Principal Component Analysis (PCA). Each individual image is cropped to a standard area. After discarding as much interference information as possible, such as background pixels, the PCA transform is applied and the feature dimensions that represent the most image information are selected; they should account for more than 95% of the total information. The PCA matrix is then inverse-transformed to obtain the dimension-reduced features.
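A PCA reduction keeping at least 95% of the variance, as described above, can be sketched with NumPy. This is one common way to realize the described step (the SVD route and the projection onto the leading components are assumptions for illustration, since the patent gives no implementation):

```python
import numpy as np

def pca_reduce(X, variance_kept=0.95):
    """Project the rows of X onto the fewest principal components whose
    cumulative explained variance reaches `variance_kept`.

    Returns the reduced data and the number of components kept.
    """
    Xc = X - X.mean(axis=0)                        # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = S ** 2                             # variance per component
    ratio = np.cumsum(explained) / explained.sum()
    k = int(np.searchsorted(ratio, variance_kept)) + 1
    return Xc @ Vt[:k].T, k
```

For data whose variation lies along a single direction, a single component is kept and each sample reduces to one coordinate.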
The image descriptor used for local expression classification over the 68 key points is assisted by the Scale-Invariant Feature Transform (SIFT). SIFT is invariant to uniform scaling and rotation and partially invariant to affine distortion and illumination changes, and is used to represent the region near each key point's coordinates.
Extracting characteristics such as mouth opening distance, eye opening distance, eyebrow inclination angle, width between two eyebrows, height between two eyebrows and the like at key point positions to form a facial expression characteristic vector set, and putting the facial expression vector set into a machine learning algorithm SVM for expression classification.
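The named measurements can be sketched directly from the landmark coordinates. The point indices below assume the common dlib 68-point layout (mouth at 48-67 with inner lips at 62/66, eyes at 36-47, eyebrows at 17-26); these indices are an illustrative assumption, since the patent does not fix an index convention:

```python
import math

def dist(p, q):
    """Euclidean distance between two (x, y) landmarks."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def geometric_features(pts):
    """Sketch of the geometric measurements named in the text.

    `pts` is a sequence of 68 (x, y) landmark coordinates; the indices
    assume the usual dlib ordering and are illustrative only.
    """
    return {
        "mouth_opening": dist(pts[62], pts[66]),       # inner-lip gap
        "left_eye_opening": dist(pts[38], pts[40]),    # upper vs lower lid
        "right_eye_opening": dist(pts[43], pts[47]),
        "brow_gap_width": dist(pts[21], pts[22]),      # between the eyebrows
        "left_brow_slope": math.atan2(pts[21][1] - pts[17][1],
                                      pts[21][0] - pts[17][0]),
    }
```

Concatenating these scalars with the 68 nominal-point vectors gives one feature vector per image, ready for the SVM stage.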
The method specifically comprises the following steps:
A. Extract features such as the mouth opening distance, eye opening distance, eyebrow inclination angle, width between the eyebrows, and height between the eyebrows from the key point positions to form facial expression feature vector sets, input them into the training samples, and set the SVM kernel function parameters after looping N times.
Given a training sample S = {(x1, y1), …, (xn, yn)}, loop t times and set the SVM kernel function parameters, with n variables: C = {C1, C2, …}; wherein (x1, y1), …, (xn, yn) are the 68 face key points obtained by dlib, each Ci is an SVM kernel function parameter value (a one-dimensional array), and C is a multi-dimensional array;
B. Set the training sample weights, selecting different values of i to calculate them, and train the SVM weak classifier with the input SVM kernel function parameters.
Set the training sample weights as W_i^1 = 1/N, i = 1, …, N, initializing each sample weight to 1/N, and train an SVM weak classifier based on the SVM kernel function parameters using these sample weights;
C. Correct the training result calculated by the SVM weak classifier, reset the weights, and loop again; dynamically adjust the penalty factor and kernel function parameter of the support vector machine with an improved artificial bee colony optimization algorithm, output the optimal parameters, and establish the expression classification model with them.
D. Input the continuously output facial images into the trained expression classification model to realize expression recognition.
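Steps A-D can be sketched with scikit-learn's weighted SVM fit. This is a simplified illustration on synthetic data: the uniform initial weights W_i^1 = 1/N and one misclassification-driven re-weighting round follow the text, while the artificial-bee-colony parameter search is omitted (a fixed C and gamma stand in for the optimized values):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in for two expression classes' feature vector sets
X = np.vstack([rng.normal(0.0, 1.0, (40, 4)),
               rng.normal(4.0, 1.0, (40, 4))])
y = np.array([0] * 40 + [1] * 40)

N = len(y)
weights = np.full(N, 1.0 / N)        # W_i^1 = 1/N: uniform initial weights
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X, y, sample_weight=weights)

# One correction round: boost misclassified samples, renormalize, retrain
miss = clf.predict(X) != y
weights[miss] *= 2.0
weights /= weights.sum()
clf.fit(X, y, sample_weight=weights)
train_acc = clf.score(X, y)
```

In the described method this re-weighting loop repeats, with the penalty factor and kernel parameter tuned by the bee-colony search, before the final model is used for recognition.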
The invention also provides a facial expression recognition device. Fig. 2 is a schematic diagram of an internal structure of a facial expression recognition apparatus according to an embodiment of the present invention.
In the present embodiment, the facial expression recognition apparatus 1 may be a PC (Personal Computer), or may be a terminal device such as a smartphone, a tablet Computer, or a mobile Computer. The facial expression recognition apparatus 1 at least includes a memory 11, a processor 12, a network interface 13, and a communication bus 14.
The memory 11 includes at least one type of readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a magnetic memory, a magnetic disk, or an optical disk. It stores a face recognition program operable on the processor and an SVM classifier retrievable by the processor. The memory 11 may in some embodiments be an internal storage unit of the facial expression recognition apparatus 1, such as a hard disk of the facial expression recognition apparatus 1. The memory 11 may also be an external storage device of the facial expression recognition apparatus 1 in other embodiments, such as a plug-in hard disk provided on the facial expression recognition apparatus 1, a Smart Media Card (SMC), a Secure Digital (SD) Card, or a Flash memory Card (Flash Card). Further, the memory 11 may include both an internal storage unit and an external storage device of the facial expression recognition apparatus 1. The SVM classifier may be stored in an internal memory unit or an external memory unit, preferably an external memory unit. The memory 11 may be used not only to store application software installed in the facial expression recognition apparatus 1 and various types of data, such as the code of the face recognition program 01, but also to temporarily store data that has been output or is to be output.
The processor 12 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor or other data Processing chip in some embodiments, and is used for executing program codes stored in the memory 11 or Processing data, such as executing the face recognition program 01.
The communication bus 14 is used to enable connection communication between these components.
The network interface 13 may optionally comprise a standard wired interface, a wireless interface (e.g. WI-FI interface), and is typically used to establish a communication link between the facial expression recognition apparatus 1 and other electronic devices.
Fig. 2 shows only the facial expression recognition apparatus 1 having the components 11 to 14 and the face recognition program 01, and it will be understood by those skilled in the art that the structure shown in fig. 2 does not constitute a limitation of the facial expression recognition apparatus 1, which may include fewer or more components than those shown, combine some components, or arrange components differently.
In the embodiment of the facial expression recognition apparatus 1 shown in fig. 2, a face recognition program 01 is stored in the memory 11; the processor 12, when executing the face recognition program 01 stored in the memory 11, implements the above method steps of the facial expression recognition method of the present invention, and specifically includes:
step S1: collecting facial expression images, acquiring the continuously output facial expression images within a preset interval time, and numbering and storing the collected images respectively;
step S2: preprocessing and standardizing the collected facial expression images to obtain an image group;
step S3: carrying out face detection and face key point extraction according to the image group obtained by processing, and constructing a feature vector set of the sample according to the extracted key points;
step S4: and putting the feature vector set corresponding to the face key points into a machine learning algorithm SVM for expression classification and recognition.
Further, in another embodiment of the apparatus of the present invention, the face recognition program 01 may also be called by the processor to implement the above method steps of the facial expression recognition method provided by the present invention.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, where a face recognition program is stored on the computer-readable storage medium, and the face recognition program can be executed by one or more processors to implement the above method steps of the facial expression recognition method provided by the present invention.
The embodiment of the computer-readable storage medium of the present invention is substantially the same as the embodiments of the facial expression recognition apparatus and method, and will not be described in detail herein.
In the facial expression recognition method, device, and computer readable storage medium provided by the invention, 68 face key points are obtained with the face detection tool dlib, and features such as the mouth opening distance, eye opening distance, eyebrow inclination angle, width between the eyebrows, and height between the eyebrows are extracted from the key point positions to form facial expression feature vector sets, which are fed into an SVM machine learning algorithm for expression classification.
The invention has the beneficial effects that:
firstly, the eyebrows, eyes, nose, and mouth in a face carry rich expression states, and these organs deform as different expressions appear. They can be described by key points and then used for feature extraction; the features mainly comprise the organs' positions and scales, ratios between organs, and the like, and represent an expressive face as a group of vectors. The method is simple and highly reliable.
Through positioning and collecting 68 key points of the face, the complex change condition of the whole face can be represented by a small number of key points, complex and precise marking, training and prediction of various data are not needed, the labor cost is reduced, and the time consumption is reduced;
after expression extraction, the features of an unknown expression are assigned to the corresponding known expression category by the SVM algorithm, which can greatly improve recognition precision; the model classifies and characterizes complex expressions with high accuracy.
It should be noted that the above-mentioned numbers of the embodiments of the present invention are merely for description, and do not represent the merits of the embodiments. And the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, apparatus, article, or method that includes the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A facial expression recognition method, the method comprising:
step S1: collecting facial expression images and numbering the collected images;
step S2: preprocessing and standardizing the facial expression images collected in step S1, and arranging them by number to obtain an image group;
step S3: performing face detection and facial key point extraction on the resulting image group, and constructing a feature vector set of the sample from the extracted key points;
step S4: inputting the feature vector set corresponding to the facial key points into a machine learning SVM algorithm for expression classification and recognition.
2. The facial expression recognition method according to claim 1, wherein in step S3, the facial key points are extracted by deriving 68 facial key points for each image based on the face detection tool dlib.
3. The facial expression recognition method according to claim 2, wherein step S3 comprises:
step S31: performing face detection on the preprocessed images via an image pyramid and sliding-window detection, and locating the position of the face;
step S32: obtaining 68 facial key points through the face detection tool dlib;
step S33: constructing a feature vector set of the sample from the 68 facial key points.
4. The facial expression recognition method according to claim 3, wherein step S33 specifically comprises: constructing a nominal point from the mean of the x-axis and y-axis values of the 68 points; the nominal point and the 68 key points form 68 vectors with direction and magnitude, and these 68 vectors serve as the expression feature vector set of the sample.
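The nominal-point construction in claim 4 can be sketched as follows; this is an illustrative numpy sketch with synthetic landmark coordinates rather than real dlib output, and reading the "nominal point" as the centroid of all 68 points is taken directly from the claim:

```python
import numpy as np

def keypoints_to_feature_vectors(points):
    """Build 68 directed vectors from the nominal point to each key point.

    `points` is a (68, 2) array of (x, y) landmark coordinates; the nominal
    point is the mean of the x-axis and y-axis values, as claim 4 states.
    """
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)   # nominal point: mean of x and y values
    vectors = points - centroid      # 68 vectors with direction and magnitude
    return vectors

# Example with synthetic landmarks (not real dlib output):
rng = np.random.default_rng(0)
landmarks = rng.uniform(0, 255, size=(68, 2))
vectors = keypoints_to_feature_vectors(landmarks)
feature = vectors.ravel()            # one 136-dim expression feature vector
```

Flattening the (68, 2) vector array gives one feature vector per image; stacking such vectors column-wise yields the feature matrix A of claim 5.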
5. The facial expression recognition method according to claim 3, further comprising: constructing a feature matrix A = [A1, A2, ..., Ak] ∈ R^(m×n) from the expression feature vector sets of the obtained samples, where m is the feature dimension, n is the total number of samples, and A1, A2, ..., Ak are the k expression feature vector sets of the samples.
6. The facial expression recognition method according to claim 1, wherein step S4 comprises:
step S41: inputting the facial expression feature vector sets obtained in step S3 as training samples and, after N iterations, setting the parameters of the SVM kernel function;
step S42: setting training sample weights, selecting different values of i to compute the training sample weights, and training an SVM weak classifier with the input SVM kernel-function parameters;
step S43: correcting the training result computed by the SVM weak classifier, resetting the weights, and iterating again; dynamically adjusting the penalty factor and kernel-function parameters of the support vector machine with an improved artificial bee colony optimization algorithm, outputting the optimal parameters, and building an expression classification model with the optimal parameters;
step S44: inputting the feature vector set corresponding to the facial key points into the expression classification model to perform expression recognition.
7. The facial expression recognition method according to claim 6, wherein step S42 sets the training sample weights as

w_i = 1/N, i = 1, 2, ..., N,

i.e., each sample weight is initialized to 1/N, and the SVM weak classifier is trained from the SVM kernel-function parameters using these training sample weights.
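The weight initialization of claim 7, followed by one boosting-style reweighting round of the kind claims 6 and 7 describe, can be sketched as follows. The 1/N initialization is taken from the claim; the exact update rule is not given in the patent, so the AdaBoost-style update below is an assumption for illustration:

```python
import numpy as np

N = 8                            # number of training samples (illustrative)
weights = np.full(N, 1.0 / N)    # claim 7: initialize each sample weight to 1/N

# After a weak SVM classifier is trained, the weights of misclassified samples
# are increased before the next round (AdaBoost-style update; an assumption,
# since the patent does not spell out the rule):
errors = np.array([0, 1, 0, 0, 1, 0, 0, 0], dtype=bool)  # hypothetical mistakes
eps = weights[errors].sum()                      # weighted error rate
alpha = 0.5 * np.log((1 - eps) / eps)            # weak-classifier weight
weights *= np.exp(alpha * np.where(errors, 1.0, -1.0))
weights /= weights.sum()                         # renormalize to sum to 1
```

After the update, misclassified samples carry more weight, so the next weak classifier in step S43 focuses on the samples the previous one got wrong.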
8. The facial expression recognition method according to claim 1, wherein step S2 comprises: performing graying and histogram equalization on the entire collected facial expression images to obtain the image group.
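The preprocessing of claim 8 — graying followed by histogram equalization — can be sketched as follows. This is a pure-numpy stand-in; in practice library routines such as OpenCV's `cvtColor` and `equalizeHist` would likely be used, and the BT.601 luma coefficients are an assumption:

```python
import numpy as np

def preprocess(rgb):
    """Gray and histogram-equalize one face image, as step S2 describes.

    `rgb` is an (H, W, 3) uint8 array. Pure-numpy stand-in for OpenCV calls.
    """
    # Grayscale via BT.601 luma weights (an assumed convention).
    gray = (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
            + 0.114 * rgb[..., 2]).astype(np.uint8)
    # Histogram equalization: map the CDF of gray levels onto [0, 255].
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[gray]

# Example on a synthetic "face" image:
rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
equalized = preprocess(face)
```

Equalization stretches the gray-level distribution across the full range, which reduces the influence of uneven lighting before landmark extraction.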
9. A facial expression recognition apparatus, comprising a memory and a processor, wherein the memory stores a face recognition program operable on the processor, and the face recognition program, when executed by the processor, implements the following steps:
step S1: collecting facial expression images and numbering the collected images;
step S2: preprocessing and standardizing the facial expression images collected in step S1, and arranging them by number to obtain an image group;
step S3: performing face detection and facial key point extraction on the resulting image group, and constructing a feature vector set of the sample from the extracted key points;
step S4: inputting the feature vector set corresponding to the facial key points into a machine learning SVM algorithm for expression classification and recognition.
10. A computer-readable storage medium having stored thereon a face recognition program executable by one or more processors to implement the steps of the facial expression recognition method according to any one of claims 1 to 8.
CN201911315086.6A 2019-12-18 2019-12-18 Facial expression recognition method and device and computer readable storage medium Pending CN111178195A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911315086.6A CN111178195A (en) 2019-12-18 2019-12-18 Facial expression recognition method and device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911315086.6A CN111178195A (en) 2019-12-18 2019-12-18 Facial expression recognition method and device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN111178195A true CN111178195A (en) 2020-05-19

Family

ID=70651986

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911315086.6A Pending CN111178195A (en) 2019-12-18 2019-12-18 Facial expression recognition method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111178195A (en)


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112307942A (en) * 2020-10-29 2021-02-02 广东富利盛仿生机器人股份有限公司 Facial expression quantitative representation method, system and medium
CN112528777A (en) * 2020-11-27 2021-03-19 富盛科技股份有限公司 Student facial expression recognition method and system used in classroom environment
CN112766538A (en) * 2020-12-26 2021-05-07 浙江天行健智能科技有限公司 Decision modeling method for pedestrian crossing road based on SVM algorithm
CN112883867A (en) * 2021-02-09 2021-06-01 广州汇才创智科技有限公司 Student online learning evaluation method and system based on image emotion analysis
CN113111789A (en) * 2021-04-15 2021-07-13 山东大学 Facial expression recognition method and system based on video stream
CN113111789B (en) * 2021-04-15 2022-12-20 山东大学 Facial expression recognition method and system based on video stream

Similar Documents

Publication Publication Date Title
Makhmudkhujaev et al. Facial expression recognition with local prominent directional pattern
US10445562B2 (en) AU feature recognition method and device, and storage medium
CN111178195A (en) Facial expression recognition method and device and computer readable storage medium
JP4161659B2 (en) Image recognition system, recognition method thereof, and program
Guo et al. Dynamic facial expression recognition with atlas construction and sparse representation
CN107463865B (en) Face detection model training method, face detection method and device
CN106897675A (en) The human face in-vivo detection method that binocular vision depth characteristic is combined with appearance features
Monzo et al. Precise eye localization using HOG descriptors
JP5574033B2 (en) Image recognition system, recognition method thereof, and program
CN109241890B (en) Face image correction method, apparatus and storage medium
JP5777380B2 (en) Image recognition apparatus, image recognition method, and program
CN113449704B (en) Face recognition model training method and device, electronic equipment and storage medium
CN111666976A (en) Feature fusion method and device based on attribute information and storage medium
CN110826534A (en) Face key point detection method and system based on local principal component analysis
US8861803B2 (en) Image recognition apparatus, image recognition method, and program
CN112101293A (en) Facial expression recognition method, device, equipment and storage medium
Curran et al. The use of neural networks in real-time face detection
Shukla et al. Deep Learning Model to Identify Hide Images using CNN Algorithm
CN210442821U (en) Face recognition device
Geetha et al. 3D face recognition using Hadoop
Reddy et al. Comparison of HOG and fisherfaces based face recognition system using MATLAB
CN110751126A (en) Analysis method for judging character characters based on face features
Mall et al. A neural network based face detection approach
Smiatacz Face recognition: shape versus texture
CN116758589B (en) Cattle face recognition method for processing gesture and visual angle correction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination