CN109871751A - Service attitude assessment method and device based on facial expression recognition, and storage medium - Google Patents

Service attitude assessment method and device based on facial expression recognition, and storage medium

Info

Publication number
CN109871751A
CN109871751A (application CN201910007335.9A)
Authority
CN
China
Prior art keywords
expression
attendant
face
attitude
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910007335.9A
Other languages
Chinese (zh)
Inventor
苏玉峰 (Su Yufeng)
王晶晶 (Wang Jingjing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910007335.9A priority Critical patent/CN109871751A/en
Publication of CN109871751A publication Critical patent/CN109871751A/en
Pending legal-status Critical Current


Abstract

The present invention relates to the field of artificial intelligence and discloses a service attitude assessment method for service personnel based on facial expression recognition, comprising: extracting face data for several expressions from a preset expression database to build a comparison expression library for service personnel; capturing facial expression images of several expressions of service personnel with a camera device, applying digital image processing to the captured images, and building a service-personnel facial expression library; training a deep neural network model with the service-personnel facial expression library and the comparison expression library to build an expression recognition model; and using the expression recognition model to recognize the expressions of service personnel, so as to judge their service attitude. The present invention also proposes a service attitude assessment device and a computer-readable storage medium based on facial expression recognition. By recognizing expressions, the present invention enables assessment of the service attitude of service personnel.

Description

Service attitude assessment method and device based on facial expression recognition, and storage medium
Technical field
The present invention relates to the field of artificial intelligence, and more particularly to a method, device, and computer-readable storage medium for assessing the service attitude of service personnel based on facial expression recognition.
Background technique
Expression recognition technology is widely applied; its main application fields currently include human-computer interaction, security, robot manufacturing, medical treatment, communication, the automotive field, and the service industry.
For example, in the service industry, recognizing the expressions of service personnel can be used to judge their service attitude, and thus serve as an appraisal of their work performance.
Current expression recognition technology is broadly divided into three steps: image acquisition, in which still images or dynamic image sequences are obtained through capture tools such as cameras; image preprocessing, covering normalization of image size and gray scale, head pose correction, image segmentation, and so on; and feature extraction, which converts a dot matrix into a higher-level image representation (such as shape, motion, color, texture, or spatial structure) and applies dimensionality reduction to the huge volume of image data while preserving stability and discriminability as far as possible.
The main feature extraction methods at present include geometric features, statistical features, frequency-domain features, and motion features. Extraction based on geometric features locates and measures salient features of the facial expression, such as changes in the positions of the eyes, eyebrows, and mouth, and determines their size, distance, shape, and mutual ratios for expression recognition; however, geometric feature methods lose some important identification and classification information, so their accuracy is not high. Methods based on holistic statistical features emphasize retaining as much information as possible from the original facial expression image and let the classifier discover the relevant features in the image; features are obtained by transforming the whole facial expression image. Although their precision is higher, their separability is poor. Extraction methods based on motion features mainly extract the motion features of dynamic image sequences, but they are computationally intensive.
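The geometric-feature approach described above can be sketched as follows. The landmark names, coordinates, and the particular distances and ratios computed are illustrative assumptions, not taken from the patent:

```python
import math

# Hypothetical landmark coordinates (x, y) for one face; the point names
# and values are illustrative, not from the patent.
landmarks = {
    "left_eye":  (12.0, 15.0),
    "right_eye": (27.0, 15.0),
    "nose":      (19.5, 22.0),
    "mouth_l":   (14.0, 29.0),
    "mouth_r":   (25.0, 29.0),
}

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def geometric_features(pts):
    """Distances and mutual ratios between salient points, in the spirit of
    geometry-based expression recognition."""
    eye_dist = dist(pts["left_eye"], pts["right_eye"])
    mouth_width = dist(pts["mouth_l"], pts["mouth_r"])
    mouth_center = ((pts["mouth_l"][0] + pts["mouth_r"][0]) / 2,
                    (pts["mouth_l"][1] + pts["mouth_r"][1]) / 2)
    nose_to_mouth = dist(pts["nose"], mouth_center)
    return {
        "eye_dist": eye_dist,
        "mouth_width": mouth_width,
        "mouth_to_eye_ratio": mouth_width / eye_dist,
        "nose_to_mouth": nose_to_mouth,
    }

feats = geometric_features(landmarks)
```

Such a feature vector could then be fed to a classifier; as the text notes, this representation discards much of the image content, which limits its accuracy.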
Summary of the invention
The present invention provides a service attitude assessment method, device, and computer-readable storage medium based on facial expression recognition, whose main purpose is to accurately judge the service attitude of service personnel while they provide services at their work posts.
To achieve the above object, the service attitude assessment method based on facial expression recognition provided by the present invention includes:
extracting face data for several expressions from a preset expression database, and building a comparison expression library for service personnel;
capturing facial expression images of several expressions of service personnel with a camera device, applying digital image processing to the captured images, and building a service-personnel facial expression library;
training a deep neural network model with the service-personnel facial expression library and the comparison expression library, and building an expression recognition model;
using the expression recognition model to recognize the expressions of service personnel, so as to judge their service attitude.
Optionally, building the comparison expression library and the service-personnel facial expression library includes:
labeling the pictures in the service-personnel facial expression library and the comparison expression library with three emotion labels: happy, normal, and angry;
capturing facial expression images of several expressions of service personnel with a camera device, applying digital image processing to the captured images, and building the service-personnel facial expression library;
labeling the pictures in the service-personnel facial expression library and the comparison expression library with the three emotion labels happy, normal, and angry.
Optionally, applying digital image processing to the captured facial expression images of service personnel includes:
normalizing the face brightness in the facial expression images captured by the camera device using the histogram equalization method;
applying data enhancement to the facial expression images using noise addition, random perturbation, and transform methods, so as to increase the number of facial expression images and build the service-personnel facial expression library.
Optionally, training a deep neural network model with the service-personnel facial expression library and the comparison expression library, and building an expression recognition model, includes:
inputting the face pictures in the service-personnel facial expression library and in the comparison expression library into a deep neural network model; after the deep neural network model extracts the facial feature points in the pictures of the service-personnel facial expression library, locating the face in each facial expression image and cropping the face picture;
based on the cropped face pictures, having the deep neural network model extract facial feature points again according to the face pictures in the comparison expression library;
based on the extracted feature points, recropping the face images for training, and building the expression recognition model.
Optionally, using the expression recognition model to recognize the expressions of service personnel so as to judge their service attitude includes:
setting, according to the working hours of the service personnel, the time period during which the camera device captures their facial expression images;
sending the facial expression images captured within the time period to the expression recognition model in sequence, and judging the service attitude of the service personnel from all the recognition results within the time period.
In addition, to achieve the above object, the present invention also provides a service attitude assessment device based on facial expression recognition. The device includes a memory and a processor; a service attitude assessment program runnable on the processor is stored in the memory, and when executed by the processor the program realizes the following steps:
extracting face data for several expressions from a preset expression database, and building a comparison expression library for service personnel;
capturing facial expression images of several expressions of service personnel with a camera device, applying digital image processing to the captured images, and building a service-personnel facial expression library;
training a deep neural network model with the service-personnel facial expression library and the comparison expression library, and building an expression recognition model;
using the expression recognition model to recognize the expressions of service personnel, so as to judge their service attitude.
Optionally, building the comparison expression library and the service-personnel facial expression library includes:
labeling the pictures in the service-personnel facial expression library and the comparison expression library with three emotion labels: happy, normal, and angry;
capturing facial expression images of several expressions of service personnel with a camera device, applying digital image processing to the captured images, and building the service-personnel facial expression library;
labeling the pictures in the service-personnel facial expression library and the comparison expression library with the three emotion labels happy, normal, and angry.
Optionally, applying digital image processing to the captured facial expression images of service personnel includes:
normalizing the face brightness in the facial expression images captured by the camera device using the histogram equalization method;
applying data enhancement to the facial expression images using noise addition, random perturbation, and transform methods, so as to increase the number of facial expression images and build the service-personnel facial expression library.
Optionally, training a deep neural network model with the service-personnel facial expression library and the comparison expression library, and building an expression recognition model, includes:
inputting the face pictures in the service-personnel facial expression library and in the comparison expression library into a deep neural network model; after the deep neural network model extracts the facial feature points in the pictures of the service-personnel facial expression library, locating the face in each facial expression image and cropping the face picture;
based on the cropped face pictures, having the deep neural network model extract facial feature points again according to the face pictures in the comparison expression library;
based on the extracted feature points, recropping the face images for training, and building the expression recognition model.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium on which a service attitude assessment program is stored; the program can be executed by one or more processors to realize the steps of the service attitude assessment method based on facial expression recognition described above.
With the service attitude assessment method, device, and computer-readable storage medium based on facial expression recognition proposed by the present invention, a service-personnel facial expression library with expression labels and a comparison expression library are built; a deep neural network model is trained with the service-personnel facial expression library and the comparison expression library to build an expression recognition model; and the expression recognition model is used to recognize the expressions of service personnel, so as to judge their service attitude.
Detailed description of the invention
Fig. 1 is a flow diagram of the service attitude assessment method based on facial expression recognition provided by one embodiment of the present invention;
Fig. 2 is a schematic diagram of the internal structure of the service attitude assessment device based on facial expression recognition provided by one embodiment of the present invention;
Fig. 3 is a module diagram of the service attitude assessment program in the service attitude assessment device based on facial expression recognition provided by one embodiment of the present invention.
The realization of the objectives, functional characteristics, and advantages of the present invention will be further described with reference to the accompanying drawings in combination with the embodiments.
Specific embodiment
It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
The present invention provides a service attitude assessment method based on facial expression recognition. Referring to Fig. 1, a flow diagram of the method provided by one embodiment of the present invention is shown. The method may be executed by a device, which may be realized by software and/or hardware.
In the present embodiment, the service attitude assessment method based on facial expression recognition includes:
S1: building the service-personnel facial expression library with expression labels, and the comparison expression library for service personnel.
The preferred embodiment of the present invention builds the comparison expression library for service personnel by extracting data from the JAFFE expression database established by ATR in Japan. The JAFFE database (The Japanese Female Facial Expression Database) of ATR (Advanced Telecommunication Research Institute International) is a database dedicated to expression recognition research. It contains 213 images (each 256 pixels × 256 pixels) of the faces of Japanese women, and every image is annotated with its original expression definition. The library covers 10 subjects, each with 7 expressions: normal (also called a neutral face), happy, sad, surprised, angry, disgusted, and fearful.
The preferred embodiment obtains the face data for three expressions, angry, happy, and normal, from the JAFFE expression database.
Because using six emotions as judgment labels puts too much pressure on model training and model prediction, and since the service attitude of service personnel mainly exhibits the happy, normal, and angry states, this scheme divides the emotion labels into three kinds: happy (mainly a smiling face, i.e., upturned mouth corners and eyes smaller than in the normal state, because the pupils contract when a person is happy), normal (mainly mouth and eyes at their normal size), and angry (mainly characterized by enlarged pupils and eyes larger than in the normal state, because the pupils dilate when a person is angry).
For the angry, happy, and normal expressions of service personnel, this scheme filters out the face data belonging to these three expressions from the JAFFE expression database. Since the JAFFE database already classifies its face pictures, each face picture corresponds to an expression label. To guarantee the purity of the comparison expression library data set, the filtered face pictures and their angry, happy, or normal expression labels can additionally be checked manually once more. The comparison expression library has a very important effect on model training and is the main basis for judgment in the later expression recognition.
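The filtering step can be sketched as follows, assuming the labeled dataset is available as (filename, label) records; the sample records below are hypothetical stand-ins, not actual JAFFE contents:

```python
# A minimal sketch of building the comparison expression library by keeping
# only the three labels used in this scheme. The records are hypothetical;
# JAFFE itself stores 213 labeled 256x256 images of 10 subjects.
KEPT_LABELS = {"happy", "neutral", "angry"}

dataset = [
    ("KA.HA1.29.tiff", "happy"),
    ("KA.NE1.26.tiff", "neutral"),
    ("KA.AN1.39.tiff", "angry"),
    ("KA.SA1.33.tiff", "sad"),
    ("KA.SU1.36.tiff", "surprised"),
]

# Keep only pictures whose existing label is one of the three kept labels;
# the manual re-check described in the text would happen after this step.
comparison_library = [(f, lab) for f, lab in dataset if lab in KEPT_LABELS]
```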
Facial expression images of service personnel are captured with a camera; the face brightness of the captured images is normalized, data enhancement is applied, and the service-personnel facial expression library is built.
In the comparison expression library built from the JAFFE database, the brightness of the facial expression pictures is all kept within one interval. Therefore, when the facial expressions of service personnel are obtained through capture tools such as cameras to build the service-personnel facial expression library, brightness normalization is also needed, so that the captured facial expression pictures have brightness similar to the pictures in the comparison expression library. Meanwhile, because the volume of face images obtained by capture tools such as cameras is insufficient, which would affect the model's judgment of facial expressions, data enhancement also needs to be applied to the captured facial expression images, that is, to the service-personnel facial expression library.
Further, in the preferred embodiment, the histogram equalization method (Histogram Equalization) is used to normalize the face brightness. Histogram equalization first applies a nonlinear stretch to the captured facial expression image. Let the variable r represent the pixel gray level in the image; after the gray levels are normalized, r lies in the interval [0, 1], where r = 0 represents black and r = 1 represents white. For a captured facial expression image, the gray level of each pixel is a random value in [0, 1]. The distribution of gray levels is represented by the probability density function P_r(r_k), where r_k denotes the k-th gray level in the discrete case and n_k denotes the number of pixels with gray level r_k. The histogram equalization transform is then applied to the facial expression image:

s_k = T(r_k) = Σ_{j=0}^{k} P_r(r_j) = Σ_{j=0}^{k} n_j / n,

where n is the total number of pixels of the facial expression image. The pixel values of the expression image are then redistributed so that the numbers of pixels within given gray ranges are roughly equal. In this way, the contrast of originally darker regions of the facial expression image is enhanced, while the contrast of strong regions is reduced. Displayed as a histogram, the output facial expression image yields a flatter, segmented histogram, achieving the purpose of face brightness normalization.
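A minimal NumPy sketch of this equalization, mapping each gray level through the scaled cumulative distribution; the 39 × 39 synthetic image is an illustrative stand-in for a captured face:

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization of an 8-bit grayscale image: map each gray
    level r_k through the cumulative distribution s_k = sum_{j<=k} n_j / n."""
    hist = np.bincount(img.ravel(), minlength=256)  # n_k per gray level
    cdf = np.cumsum(hist) / img.size                # s_k in [0, 1]
    return (cdf[img] * 255).astype(np.uint8)

# A dark synthetic "face" image: values concentrated in [0, 64), i.e. low
# contrast, which equalization should stretch across the full range.
rng = np.random.default_rng(0)
dark = rng.integers(0, 64, size=(39, 39), dtype=np.uint8)
flat = equalize_histogram(dark)
```

After equalization the brightest pixel maps to 255, so images captured under different lighting end up on a comparable brightness scale, which is the stated goal of this step.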
Further, the preferred embodiment applies data enhancement using noise addition, random perturbation, and transform methods, so as to increase the number of facial expression images.
Starting from the facial expression images processed by histogram equalization, this scheme first applies image transform processing; the transforms are of four kinds (rotation, flipping, scaling, and zooming), thereby yielding four times as many facial expression images. Then, noise is randomly added to the transformed facial expression images, including salt-and-pepper noise, speckle noise, and two-dimensional Gaussian-distributed noise between the eyes of the face. On this basis, an 81-fold data enhancement effect is reached, and each facial expression image has a size of 39 × 39.
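The transform-plus-noise idea can be sketched with NumPy as below. This is a simplified illustration, not the exact 81-fold pipeline: the transform set, noise parameters, and crop size are assumptions (the crop happens to produce the 39 × 39 size mentioned above):

```python
import numpy as np

rng = np.random.default_rng(1)

def augment(img):
    """Produce transformed copies (rotation, flips, and a crop as a
    stand-in for scaling/zooming) plus noisy variants of each, in the
    spirit of the augmentation described in the text."""
    variants = [
        np.rot90(img),            # rotation
        np.fliplr(img),           # horizontal flip
        img[2:-2, 2:-2],          # central crop (stand-in for scale/zoom)
        np.flipud(img),           # vertical flip
    ]
    noisy = []
    for v in variants:
        # Gaussian noise variant
        g = v.astype(np.float64) + rng.normal(0.0, 5.0, v.shape)
        noisy.append(np.clip(g, 0, 255).astype(np.uint8))
        # salt-and-pepper variant: ~1% of pixels forced to 0 or 255
        sp = v.copy()
        mask = rng.random(v.shape) < 0.01
        sp[mask] = rng.choice([0, 255], size=int(mask.sum()))
        noisy.append(sp)
    return variants + noisy

face = rng.integers(0, 256, size=(43, 43), dtype=np.uint8)
augmented = augment(face)   # 4 transforms + 8 noisy variants = 12 images
```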
S2: capturing facial expression images of several expressions of service personnel with a camera device, applying digital image processing to the captured images, and building the service-personnel facial expression library.
In the preferred embodiment, the digital image processing includes:
normalizing the face brightness in the facial expression images captured by the camera device using the histogram equalization method;
applying data enhancement to the facial expression images using noise addition, random perturbation, and transform methods, so as to increase the number of facial expression images and build the service-personnel facial expression library.
S3: training a deep neural network model with the service-personnel facial expression library and the comparison expression library, and building the expression recognition model.
The preferred embodiment labels the pictures in the comparison expression library from step S1 and in the captured, processed service-personnel facial expression library from step S2 with the three emotion labels happy, normal, and angry, and feeds the pictures into a DCNN (Deep Convolutional Network Cascade for Facial Point Detection) model for localization and cropping; the output result is obtained after feature extraction and comparison.
Further, after locating the face in a facial expression image, the preferred embodiment crops the face picture. The DCNN first performs face localization on the images of the input service-personnel facial expression library, based on the input comparison expression library, and then crops out the located face. If a whole picture captured by the camera were put into the DCNN directly, the range covered by the facial expression image would be too large and would affect the judgment of expression recognition. Therefore, the first-part convolutional network model of the DCNN locates the face and crops it out by finding 5 feature points of the face (left and right eyes, nose, left and right mouth corners). Specifically, the first part of the DCNN consists of three convolutional neural networks, named F1 (whose input is the whole face picture), EN1 (whose input picture contains the eyes and nose), and NM1 (whose input contains the nose and mouth region). For an input 39 × 39 facial expression image, F1 outputs a 10-dimensional feature vector (5 feature points). From this 10-dimensional vector, EN1 locates the three feature points of the left eye, right eye, and nose; meanwhile NM1 locates the three feature points of the left mouth corner, right mouth corner, and nose. Combined with the nose feature point located by EN1, the face region picture containing the eyes, nose, and mouth is cropped out.
Further, on the basis of the feature points predicted above, the 5 facial feature points are predicted again. The above step can roughly locate the position of each feature point; this step takes the five predicted feature points as centers and continues feature localization with the second-part convolutional neural network model of the DCNN, finally comparing the 5 feature points predicted in the two stages. The second-part model consists of 10 CNNs, which are used to predict the 5 feature points: each feature point uses two CNNs, and the two predictions are averaged.
After the two rounds of feature point prediction, the face image is recropped. The third-part neural network model of the DCNN re-crops on the basis of the positions from the two previous rounds of feature point prediction. The third-part model has the same structure as the second part and likewise consists of 10 CNNs. Through these three cropping stages, the cropped region of the service-personnel facial expression library images becomes smaller.
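The coarse-to-fine structure of the cascade can be sketched as below. The CNNs are replaced by stub predictors, so only the data flow (stage-1 coarse estimate, then two refinement stages that each average a pair of per-point predictions) is illustrated; the stub outputs are assumptions, not a trained model:

```python
import numpy as np

def stage1_f1(img):
    """Stub for F1: a coarse 10-dim landmark vector (5 points x 2 coords)
    for the whole face, here simply proportional positions in the image."""
    h, w = img.shape
    return np.array([[0.3 * w, 0.3 * h],   # left eye
                     [0.7 * w, 0.3 * h],   # right eye
                     [0.5 * w, 0.5 * h],   # nose
                     [0.35 * w, 0.75 * h], # left mouth corner
                     [0.65 * w, 0.75 * h]])# right mouth corner

def refine(points, disagreement):
    """Stub for a refinement stage: each landmark is predicted by two CNNs
    and the pair of predictions is averaged, as in DCNN stages two and three."""
    pred_a = points + disagreement   # first CNN of each pair
    pred_b = points                  # second CNN of each pair
    return (pred_a + pred_b) / 2.0

img = np.zeros((39, 39))
p1 = stage1_f1(img)        # stage 1: coarse localization; crop would follow
p2 = refine(p1, 2.0)       # stage 2: refinement around the stage-1 points
p3 = refine(p2, 1.0)       # stage 3: finer refinement on smaller crops
```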
Further, the expression category of the service-personnel facial expression library is judged based on a test error evaluation criterion. After the feature point detection of the preceding three steps, the feature points of the service-personnel facial expression library and their feature point regions have all been extracted. This step compares the service-personnel facial expression library with the comparison expression library from step S1, and judges the expressions in the facial expression library through a final deep learning model judgment module. The judgment standard of the model judgment module is the test error evaluation criterion formula:

err = ||x − x′|| / l,

where l is the picture width of the facial expression image, a fixed value; x is the vector of the 5 feature points of a service-personnel facial expression library picture; x′ is the feature vector of the comparison expression library data; and y′ is the expression label (angry, happy, or normal) of the corresponding comparison library entry. The threshold is set to err = 0.1. When the error between the feature vectors x and x′ is less than the threshold 0.1, the final result y can be obtained by inverting the test error evaluation criterion formula. If x is the 5-feature-point vector of picture A in the service-personnel facial expression library, the test error evaluation is performed against x′ for each of the three labels in turn (the happy, angry, and normal feature vectors of the comparison library). If the test error evaluation value against the happy label is less than the threshold 0.1, picture A is judged to carry the happy expression label. If the test error evaluation values for both happy and normal are less than 0.1, the expression label corresponding to the minimum evaluation value is chosen. If all test error evaluation values are greater than 0.1, picture A is classed as an unrecognizable expression picture and kept for the model training of step S3. On this basis, the image expression recognition of the service-personnel facial expression library is completed. The threshold of the model-training loss function is set to 0.05; once this threshold is reached, the model training process is complete.
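The label-assignment logic can be sketched as follows. The width-normalized distance used for err is an assumption (the text states only l, x, x′, and the 0.1 threshold, not the exact formula), and the reference vectors are hypothetical:

```python
import numpy as np

def assign_label(x, references, l=39.0, err_threshold=0.1):
    """Compare the 10-dim landmark vector x of a picture with the reference
    vector x' of each label; among labels whose error ||x - x'|| / l falls
    below the threshold, return the closest one, or None if every error
    exceeds the threshold (the picture is then kept for retraining)."""
    best_label, best_err = None, err_threshold
    for label, x_ref in references.items():
        err = np.linalg.norm(x - x_ref) / l
        if err < best_err:
            best_label, best_err = label, err
    return best_label

# Hypothetical per-label reference vectors (5 points x 2 coords, flattened).
refs = {
    "happy":  np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]),
    "normal": np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0]),
    "angry":  np.full(10, 9.0),
}
x = refs["happy"] + 0.2      # below threshold for both happy and normal,
label = assign_label(x, refs)  # so the minimum-error label wins: "happy"
```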
S4: using the expression recognition model to recognize the expressions of service personnel, so as to judge their service attitude.
If a video file were fed directly into the deep learning network of step S4, the pressure on the whole network would be too great. The camera device is therefore set to transmit one facial expression picture of the service personnel per preset time interval, for example every 5 seconds; combined with the trained deep learning model, recognition of the service-attitude expressions of the service personnel is completed.
In one embodiment of the present invention, within a preset time period, such as the working hours from 8 a.m. to 6 p.m. in one day, the probability of each expression label over all facial expression pictures of a given service person is computed, and the expression label with the highest probability value is taken as the service attitude of that person within the preset time period.
For example, suppose 7200 facial expression pictures are obtained in the preset time period, of which the deep learning model judges 3600 to be happy, 1800 to be normal, and 1800 to be angry. The probability of the happy expression label is then the highest, at 50%, and the service of that person can be judged to be good. If the normal label has the highest probability, the service can be judged to be average; if the angry label has the highest probability, the service can be judged to be poor.
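This majority-label judgment can be sketched directly; the mapping from the winning label to good/average/poor follows the example above:

```python
from collections import Counter

def judge_attitude(predicted_labels):
    """Pick the expression label with the highest share over the working
    period and map it to a service-attitude verdict."""
    counts = Counter(predicted_labels)
    top_label, top_count = counts.most_common(1)[0]
    share = top_count / len(predicted_labels)
    verdict = {"happy": "good", "normal": "average", "angry": "poor"}[top_label]
    return top_label, share, verdict

# The worked example: 7200 pictures, 3600 happy / 1800 normal / 1800 angry.
labels = ["happy"] * 3600 + ["normal"] * 1800 + ["angry"] * 1800
top, share, verdict = judge_attitude(labels)   # "happy", 0.5, "good"
```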
In another embodiment of the present invention, within a preset time period, such as the working hours from 8 a.m. to 6 p.m. in one day, the expression label of every facial expression picture of a given service person is computed, and the service attitude of that person within the preset time period is analyzed according to a predetermined analysis algorithm.
For example, the predetermined analysis algorithm is:
if, within the preset time period, the first ratio, i.e. the number of pictures with the first expression label (for example, the "happy" label) over all pictures, is greater than or equal to a first threshold (for example, 80%), and the third ratio, i.e. the number of pictures with the third expression label (for example, the "angry" label) over all pictures, is less than or equal to a second threshold (for example, 0.5%), the service attitude within the preset time period is determined to be the first expression label;
if the first ratio is less than the first threshold but greater than a third threshold (for example, 50%), and the third ratio is less than or equal to the second threshold, the service attitude within the preset time period is likewise determined to be the first expression label;
if the first ratio is less than or equal to the third threshold, and the third ratio is less than or equal to the second threshold, the service attitude within the preset time period is determined to be the second expression label (for example, the "normal" label);
if the third ratio is greater than the second threshold, the service attitude within the preset time period is determined to be the third expression label.
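The four cases above can be sketched as a small rule function; the threshold defaults are the example values given in the text (80%, 0.5%, 50%):

```python
def judge_attitude_rules(n_happy, n_normal, n_angry,
                         first_threshold=0.80,    # e.g. 80%
                         second_threshold=0.005,  # e.g. 0.5%
                         third_threshold=0.50):   # e.g. 50%
    """The predetermined analysis algorithm, following the four cases in
    the text: the anger share is checked against the second threshold, and
    the happy share against the first/third thresholds."""
    total = n_happy + n_normal + n_angry
    first_ratio = n_happy / total   # share of first-label ("happy") pictures
    third_ratio = n_angry / total   # share of third-label ("angry") pictures
    if third_ratio > second_threshold:
        return "angry"              # case 4: anger too frequent
    if first_ratio >= first_threshold:
        return "happy"              # case 1
    if first_ratio > third_threshold:
        return "happy"              # case 2
    return "normal"                 # case 3
```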
The present invention also provides a service attitude assessment device based on facial expression recognition. Referring to Fig. 2, a schematic diagram of the internal structure of the device provided by one embodiment of the present invention is shown.
In the present embodiment, the service attitude assessment device 1 based on facial expression recognition may be a terminal device such as a PC (Personal Computer), a smartphone, a tablet computer, or a portable computer, or it may be a server. The service attitude assessment device 1 based on facial expression recognition includes at least a memory 11, a processor 12, a communication bus 13, and a network interface 14.
The memory 11 includes at least one type of readable storage medium, including flash memory, hard disks, multimedia cards, card-type memories (for example, SD or DX memories), magnetic memories, magnetic disks, optical disks, and so on. In some embodiments, the memory 11 may be an internal storage unit of the service attitude assessment device 1 based on facial expression recognition, such as its hard disk. In other embodiments, the memory 11 may also be an external storage device of the device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the device 1. Further, the memory 11 may include both the internal storage unit and an external storage device of the device 1. The memory 11 can be used not only to store the application software installed on the device 1 and various kinds of data, such as the code of the service attitude assessment program 01, but also to temporarily store data that has been output or is to be output.
In some embodiments, the processor 12 may be a central processing unit (CPU), controller, microcontroller, microprocessor or other data processing chip, used to run the program code or process the data stored in the memory 11, for example to execute the attitude assessment program 01.
The communication bus 13 is used to realize the connection and communication between these components.
The network interface 14 may optionally comprise a standard wired interface and a wireless interface (such as a WI-FI interface), and is generally used to establish a communication connection between the device 1 and other electronic equipment.
Optionally, the device 1 may further comprise a user interface. The user interface may comprise a display and an input unit such as a keyboard, and optionally may also comprise a standard wired interface and a wireless interface. Optionally, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-control liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, etc. The display may also be appropriately called a display screen or display unit, and is used to display the information processed in the attitude assessment device 1 based on facial expression recognition and to present a visual user interface.
Fig. 2 only shows the attitude assessment device 1 based on facial expression recognition with the components 11-14 and the attitude assessment program 01. Those skilled in the art will understand that the structure shown in Fig. 2 does not constitute a limitation of the attitude assessment device 1 based on facial expression recognition, which may comprise fewer or more components than illustrated, or combine certain components, or adopt a different component arrangement.
In the embodiment of the device 1 shown in Fig. 2, the attitude assessment program 01 is stored in the memory 11; when the processor 12 executes the attitude assessment program 01 stored in the memory 11, the following steps are realized:
Step 1: establishing the attendant face expression library with expression labels, and the attendant comparison expression library.
The preferred embodiment of the present invention establishes the comparison expression library of the attendant by extracting data from the JAFFE expression database established by ATR in Japan. The JAFFE database (The Japanese Female Facial Expression Database, established by the Advanced Telecommunication Research Institute International, ATR) is a database dedicated to expression recognition research. It contains 213 face images of Japanese women (resolution of each image: 256 pixels × 256 pixels), and each image is marked with its original expression definition. The library covers 10 persons, each with 7 kinds of expressions: normal (also called neutral face), happy, sad, surprised, angry, disgusted and fearful.
The preferred embodiment of the present invention obtains the face data of three kinds of expressions, namely angry, happy and normal, from the JAFFE expression database.
Because using all the emotion categories as judgment labels puts too much pressure on model training and model prediction, and the attitude of an attendant mainly presents the happy, normal and angry states, this scheme divides the affective labels into three kinds: happy (mainly the attendant smiling, i.e. the corners of the mouth turn up and the eyes are smaller than in the normal state, because the pupils contract when a person is happy), normal (the mouth and eyes of the attendant present their normal size), and angry (mainly characterized by pupil enlargement, the eyes being larger than in the normal state, because the pupils dilate when a person is angry).
For the angry, happy and normal expressions of the attendant, this scheme screens out the face data belonging to these three expressions from the JAFFE expression database. Because the Japanese female expression database established by ATR has already classified the face pictures, each face picture corresponds to its expression label. To ensure the purity of the data set of the attendant comparison expression library, the screened face pictures and their marked angry, happy or normal expression labels may additionally be checked once manually. The comparison attendant expression library is the main judgment basis for model training and for the later expression recognition, so its quality has a very important effect.
The face expression images of the attendant are shot with a camera; normalization of face brightness and data enhancement processing are applied to the shot images, and the attendant face expression library is established.
In the comparison attendant expression library constructed from the ATR Japanese female expression database, the brightness of the face expression pictures is kept within one interval. Therefore, when the face expressions of the attendant are obtained by an image shooting tool such as a camera to construct the attendant face expression library, the brightness also needs to be normalized, so that the obtained face expression pictures have brightness similar to that of the pictures in the comparison attendant expression library. Meanwhile, since the amount of face data obtained by the camera is insufficient, which would affect the judgment of the model on face expressions, data enhancement processing also needs to be carried out on the shot face expression images, i.e. the attendant face expression library.
Further, in the preferred embodiment of the present invention, the histogram equalization method is used to normalize the face brightness. Histogram equalization first performs a nonlinear stretch on the shot face expression image. Let the variable r represent the pixel gray level in the image. After the gray levels are normalized, r lies in the interval [0, 1], where r = 0 represents black and r = 1 represents white. For a shot face expression image, the gray level of each pixel is a random value in [0, 1]. The distribution of the gray levels of the face expression image is represented by the probability density function P_r(r_k), where r_k denotes the k-th gray level in the discrete case and n_k denotes the number of pixels of the face expression image with gray level r_k. The histogram equalization transformation applied to the face expression image is:

s_k = T(r_k) = sum_{j=0..k} P_r(r_j) = sum_{j=0..k} n_j / n
Here n is the total number of pixels of the face expression image. The pixel values of the face expression image are then redistributed so that the numbers of pixels within the various gray ranges become roughly equal. In this way the contrast of the originally darker regions of the face expression image is enhanced, while the contrast of the stronger regions of the image is reduced. The histogram of the output face expression image is a flatter, piecewise histogram, which achieves the purpose of normalizing the face brightness.
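The brightness normalization described above can be sketched in Python as follows. This is an illustrative sketch only, not part of the claimed invention; the function name and the use of numpy are assumptions.

```python
import numpy as np

def equalize_histogram(img, levels=256):
    """Histogram-equalize a grayscale image with values in [0, 1].

    Each gray level r_k is mapped to s_k = sum_{j<=k} n_j / n, the
    cumulative distribution of pixel intensities, which flattens the
    histogram and normalizes the overall brightness."""
    # Quantize the [0, 1] image to discrete gray levels 0 .. levels-1
    quantized = np.rint(img * (levels - 1)).astype(int)
    # Histogram n_j of each level, then its cumulative distribution
    hist = np.bincount(quantized.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / quantized.size
    # Map every pixel through the CDF; the output is again in [0, 1]
    return cdf[quantized]
```

A darker-than-average image passed through this mapping has its low gray levels spread out over a wider range, which is the contrast enhancement the text describes.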
Further, the preferred embodiment of the present invention carries out data enhancement processing using noise addition, random perturbation and transformation methods, so as to increase the number of face expression images.
Based on the face expression images processed by the histogram equalization method, this scheme first performs image transformation. Four transformation modes are used, namely rotation, flipping, scaling down and magnification, thereby obtaining four times as many face expression images. Then, noise is randomly added to the transformed face expression images, including salt-and-pepper noise, speckle noise and two-dimensional Gaussian-distributed noise between the eyes of the face. On this basis, a data enhancement effect of 81 times is reached, and each face expression image has a size of 39 × 39.
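The combination of the four geometric transforms with random noise addition can be sketched as follows. This is illustrative only: the transform parameters, noise levels and function names are assumptions, and the claimed scheme composes such operations to reach its 81-fold enhancement.

```python
import numpy as np

def augment(img, rng=None):
    """Return four augmented copies of a grayscale face image
    (values in [0, 1]): rotated, flipped, scaled-down and magnified,
    each with salt-and-pepper or Gaussian noise added at random."""
    if rng is None:
        rng = np.random.default_rng(0)
    variants = [
        np.rot90(img),                    # rotation
        np.fliplr(img),                   # horizontal flip
        img[::2, ::2],                    # scale down
        np.kron(img, np.ones((2, 2))),    # magnify (2x up-scale)
    ]
    out = []
    for v in variants:
        noisy = v.copy()
        if rng.random() < 0.5:
            # salt-and-pepper: flip ~2% of pixels to pure black or white
            mask = rng.random(noisy.shape) < 0.02
            noisy[mask] = rng.integers(0, 2, mask.sum())
        else:
            # additive Gaussian noise, clipped back into [0, 1]
            noisy = np.clip(noisy + rng.normal(0, 0.05, noisy.shape), 0, 1)
        out.append(noisy)
    return out
```

Applying several such passes with different parameters multiplies the data set size, as described above.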
Step 2: shooting face expression images of multiple expressions of the attendant with the camera equipment, performing digital image processing on the shot face expression images of the attendant, and establishing the attendant face expression library.
In the preferred embodiment of the present invention, the digital image processing comprises:
normalizing the face brightness in the face expression images of the attendant shot by the camera equipment using the histogram equalization method;
carrying out data enhancement processing on the face expression images using noise addition, random perturbation and transformation methods, so as to increase the number of face expression images and establish the attendant face expression library.
Step 3: training a deep neural network model using the attendant face expression library and the attendant comparison expression library, and establishing the expression recognition model.
The preferred embodiment of the present invention marks the pictures in the comparison attendant expression library of Step 1 and in the processed attendant face expression library shot in Step 2 with the three affective labels happy, normal and angry, and feeds the pictures into the DCNN (Deep Convolutional Network Cascade for Facial Point Detection) model for location cutting; after feature extraction and comparison, the output result is obtained.
Further, after locating the face of the face expression image, the preferred embodiment of the present invention cuts the face picture. Based on the input comparison attendant expression library, the DCNN first performs face location on the images of the input attendant face expression library, and then cuts out the located faces. If a whole picture captured by the camera were put into the DCNN directly, the range covered by the face expression image would be too large and would affect the judgment of the facial expression recognition. Therefore, the convolutional network model of the first part of the DCNN locates the face and cuts it out by finding 5 feature points of the face (left and right eyes, nose, left and right mouth corners). Specifically, the first part of the DCNN consists of three convolutional neural networks, named respectively: F1 (whose input is the whole face picture), EN1 (whose input picture contains the eyes and nose) and NM1 (whose input contains the nose and mouth region). For an input face expression image of 39 × 39, F1 outputs a 10-dimensional feature vector (the 5 feature points); from this 10-dimensional feature vector, EN1 locates the three feature points of the left eye, right eye and nose, while NM1 locates the three feature points of the left mouth corner, right mouth corner and nose. Combined with the nose feature point located by EN1, the face region picture containing the eyes, nose and mouth is cut out.
Further, on the basis of the feature points predicted above, the 5 face feature points are predicted again. The above step can roughly locate the position of each feature point; this step takes the five predicted feature points as centers and continues feature location with the convolutional network model of the second part of the DCNN, finally comparing the 5 feature points predicted in the two stages. The convolutional network model of the second part consists of 10 CNNs, which are used to predict the 5 feature points respectively: each feature point uses two CNNs, and the prediction results of the two CNNs are averaged.
After the two rounds of feature point prediction, the face image is cropped again. The neural network model of the third part of the DCNN re-performs cutting on the basis of the positions from the previous two rounds of feature point prediction. The third part has the same structure as the second part and also consists of 10 CNNs. Through the three cutting stages, the cut region of the attendant face expression library images becomes progressively smaller.
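The three-stage coarse-to-fine structure described above can be sketched as follows. This is a structural sketch only: the actual predictors (F1/EN1/NM1 in the first part, two averaged CNNs per feature point in the second and third parts) are trained convolutional networks, stubbed out here as plain callables, and the names, patch size and signatures are assumptions.

```python
import numpy as np

# The five face feature points located by the cascade.
POINTS = ["left_eye", "right_eye", "nose", "left_mouth", "right_mouth"]

def cascade(face_img, stage1, stage2, stage3, patch=8):
    """Run a coarse-to-fine landmark cascade: stage 1 predicts all five
    points from the whole face image; stages 2 and 3 each refine one
    point at a time from a small patch centred on the previous estimate,
    so the cut region becomes progressively smaller."""
    pts = stage1(face_img)                       # dict: name -> (row, col)
    for refine in (stage2, stage3):
        new_pts = {}
        for name, (r, c) in pts.items():
            # Crop a small region around the current estimate
            r0, c0 = max(r - patch, 0), max(c - patch, 0)
            crop = face_img[r0:r0 + 2 * patch, c0:c0 + 2 * patch]
            dr, dc = refine(crop, name)          # per-point offset
            new_pts[name] = (r + dr, c + dc)
        pts = new_pts
    return pts
```

In the real model each stage's callable would be a trained CNN; here the structure alone shows how the three parts of the DCNN chain together.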
Further, the expression category of the attendant face expression library is judged based on a test error evaluation criterion. After the feature point detection of the first three steps, the feature points and the feature point regions of the attendant face expression library have all been extracted. This step compares the attendant face expression library with the comparison attendant expression library of Step 1, and judges the attendant expression in the face expression library based on the judgment module of the final deep learning model. The judgment criterion of the model judgment module is the test error evaluation formula:

err(x, x') = ||x - x'|| / l
Here l is the picture width of the face expression image, a fixed value; x is the vector representation of the 5 feature points of an attendant face expression library picture; x' is the feature vector of the comparison attendant expression library data; and y' is the corresponding expression label (angry, happy, normal) of the comparison attendant expression library. The threshold is set to err = 0.1. When the test error between the feature vectors x and x' is less than the threshold err = 0.1, the result of the final y can be obtained by inverting the test error evaluation formula. If x is the feature vector of the 5 feature points of an attendant face expression library picture A, the test error evaluation is done against x' for each of the three labelled sets of the comparison attendant expression library (happy, angry and normal). If the test error evaluation value against the happy set is less than the threshold err = 0.1, picture A is judged to have the happy expression label. If both the happy and the normal test error evaluation values are less than the threshold err = 0.1, the expression label corresponding to the minimum test error evaluation value is chosen. If all test error evaluation values are greater than the threshold err = 0.1, picture A is classified as an unrecognizable expression picture and is reserved for the model training of Step 3. On this basis, the image expression recognition of the attendant face expression library is completed. The loss-function threshold of the model training is set to 0.05; once this threshold is reached, the model training process is completed.
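Under the assumption that the test error is the Euclidean distance between the two feature vectors normalized by the fixed picture width l (the text does not spell the formula out in full), the judgment of the model judgment module can be sketched as:

```python
import numpy as np

ERR_THRESHOLD = 0.1

def classify_expression(x, reference_sets, l):
    """Compare a 10-dim landmark vector x (5 feature points) against
    the labelled reference vectors and return the best-matching
    expression label, or None when every error exceeds the threshold
    (the picture is then set aside for further training).

    Assumes err(x, x') = ||x - x'|| / l with l the fixed image width."""
    best_label, best_err = None, ERR_THRESHOLD
    for label, refs in reference_sets.items():   # "happy", "angry", "normal"
        for x_ref in refs:
            err = np.linalg.norm(np.asarray(x) - np.asarray(x_ref)) / l
            if err < best_err:                   # keep the minimum-error label
                best_label, best_err = label, err
    return best_label
```

When two labels both fall under the threshold, the minimum-error label wins, matching the rule described above.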
Step 4: recognizing the expression of the attendant using the expression recognition model, so as to judge the attitude of the attendant.
If what is transmitted into the deep learning network of Step 4 is a video file, the pressure on the whole network is too great. Therefore, the camera equipment is set to transmit one face expression picture of the attendant every preset time interval, for example every 5 seconds; combined with the trained deep learning model, the recognition of the expression reflecting the attitude of the attendant is completed.
One embodiment of the present invention counts, within a preset time period, for example the working time from 8 a.m. to 6 p.m. in one day, the probability of each expression label among all face expression pictures of a certain attendant, and takes the expression label with the highest probability as the attitude of the attendant in the preset time period.
For example, 7200 face expression pictures in total are obtained in the preset time period, of which 3600 are judged by the deep learning model to be happy face expression pictures, 1800 to be normal face expression pictures, and 1800 to be angry face expression pictures. The probability of the happy expression label is then the highest, namely 50%, and the service of the attendant can be judged to be good. If the probability of the normal expression label were the highest, the service of the attendant would be judged to be ordinary; if the probability of the angry expression label were the highest, the service of the attendant would be judged to be poor.
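The counting in this example can be sketched as follows (illustrative only; the function name is an assumption):

```python
from collections import Counter

def dominant_attitude(labels):
    """Given the per-frame expression labels predicted over a preset
    time period, return (label, probability) for the most frequent
    label, i.e. the attitude of the attendant in that period."""
    counts = Counter(labels)
    label, n = counts.most_common(1)[0]
    return label, n / len(labels)
```

With the 7200-picture example above (3600 happy, 1800 normal, 1800 angry), the dominant label is happy with probability 0.5.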
Another embodiment of the present invention counts, within a preset time period, for example the working time from 8 a.m. to 6 p.m. in one day, the expression labels corresponding to all face expression pictures of a certain attendant, and analyzes the attitude of the attendant in the preset time period according to a predetermined analysis algorithm.
For example, the predetermined analysis algorithm is:
if, in the preset time period, the first ratio of the number of pictures representing the first expression label (for example, the "happy" label) to the total number of pictures is greater than or equal to a first threshold (for example, 80%), and the third ratio of the number of pictures representing the third expression label (for example, the "angry" label) to the total number of pictures is less than or equal to a second threshold (for example, 0.5%), determining that the attitude of the attendant in the preset time period is the first expression label;
if the first ratio is less than the first threshold but greater than a third threshold (for example, 50%), and the third ratio is less than or equal to the second threshold, determining that the attitude of the attendant in the preset time period is the first expression label;
if the first ratio is less than or equal to the third threshold, and the third ratio is less than or equal to the second threshold, determining that the attitude of the attendant in the preset time period is the second expression label (for example, the "normal" label);
if the third ratio is greater than the second threshold, determining that the attitude of the attendant in the preset time period is the third expression label.
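The threshold rules above reduce to a short decision function (an illustrative sketch; the function name is an assumption and the default thresholds mirror the example values 80%, 0.5% and 50%):

```python
def evaluate_attitude(n_happy, n_normal, n_angry,
                      first=0.80, second=0.005, third=0.50):
    """Apply the threshold rules to the label counts of one preset
    time period; returns 'happy', 'normal' or 'angry'."""
    total = n_happy + n_normal + n_angry
    first_ratio = n_happy / total       # share of "happy" pictures
    third_ratio = n_angry / total       # share of "angry" pictures
    if third_ratio > second:            # anger above tolerance dominates
        return "angry"
    if first_ratio >= first:            # clearly happy
        return "happy"
    if first_ratio > third:             # still mostly happy
        return "happy"
    return "normal"
```

Note that under these rules even a small share of angry pictures (above 0.5%) outweighs a happy majority, unlike the highest-probability embodiment described earlier.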
Optionally, in other embodiments, the attitude assessment program may also be divided into one or more modules, stored in the memory 11 and executed by one or more processors (the processor 12 in the present embodiment) to complete the present invention. The module referred to in the present invention is a series of computer program instruction segments capable of completing a specific function, used to describe the execution process of the attitude assessment program in the attitude assessment device based on facial expression recognition.
For example, referring to Fig. 3, a schematic diagram of the program modules of the attitude assessment program in one embodiment of the attitude assessment device based on facial expression recognition of the present invention, the attitude assessment program may be divided into a face expression library establishing module 10, a deep learning model training module 20 and an expression recognition module 30. Exemplarily:
The face expression library establishing module 10 is used for: establishing the attendant face expression library with expression labels, and the attendant comparison expression library.
The deep learning model training module 20 is used for: shooting face expression images of multiple expressions of the attendant with the camera equipment, performing digital image processing on the shot face expression images of the attendant, and establishing the attendant face expression library.
The expression recognition module 30 is used for: recognizing the expression of the attendant using the expression recognition model, so as to judge the attitude of the attendant.
The functions or operation steps realized when the program modules such as the face expression library establishing module 10, the deep learning model training module 20 and the expression recognition module 30 are executed are substantially the same as in the above embodiments, and are not repeated here.
In addition, an embodiment of the present invention also proposes a computer readable storage medium on which an attitude assessment program is stored; the attitude assessment program can be executed by one or more processors to realize the following operations:
establishing the attendant face expression library with expression labels, and the attendant comparison expression library;
shooting face expression images of multiple expressions of the attendant with the camera equipment, performing digital image processing on the shot face expression images of the attendant, and establishing the attendant face expression library;
recognizing the expression of the attendant using the expression recognition model, so as to judge the attitude of the attendant.
The specific embodiments of the computer readable storage medium of the present invention are substantially the same as the above embodiments of the attitude assessment device and method based on facial expression recognition, and are not repeated here.
It should be noted that the serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments. The terms "comprise", "include" or any other variant thereof herein are intended to cover non-exclusive inclusion, so that a process, device, article or method comprising a series of elements comprises not only those elements but also other elements not explicitly listed, or further comprises elements intrinsic to such a process, device, article or method. Without further limitation, an element defined by the sentence "comprising a ..." does not exclude the existence of other identical elements in the process, device, article or method comprising that element.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be realized by means of software plus a necessary general hardware platform, and of course also by hardware, but in many cases the former is the better embodiment. Based on this understanding, the part of the technical solution of the present invention that in essence contributes to the prior art can be embodied in the form of a software product, which is stored in a storage medium as described above (such as ROM/RAM, magnetic disk, optical disk) and comprises instructions for causing a terminal device (which may be a mobile phone, computer, server or network equipment, etc.) to execute the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the invention. All equivalent structure or equivalent flow transformations made by using the contents of the specification and the accompanying drawings of the present invention, applied directly or indirectly in other related technical fields, are likewise included within the protection scope of the present invention.

Claims (10)

1. An attitude assessment method based on facial expression recognition, characterized in that the method comprises:
extracting the face data of multiple expressions in a preset expression database, and establishing a comparison expression library of the attendant;
shooting face expression images of multiple expressions of the attendant with camera equipment, performing digital image processing on the shot face expression images of the attendant, and establishing an attendant face expression library;
training a deep neural network model using the attendant face expression library and the attendant comparison expression library, and establishing an expression recognition model;
recognizing the expression of the attendant using the expression recognition model, so as to judge the attitude of the attendant.
2. The attitude assessment method based on facial expression recognition according to claim 1, characterized in that the method further comprises:
marking the pictures in the attendant face expression library and the attendant comparison expression library with the three affective labels happy, normal and angry.
3. The attitude assessment method based on facial expression recognition according to claim 1, characterized in that performing digital image processing on the shot face expression images of the attendant comprises:
normalizing the face brightness in the face expression images of the attendant shot by the camera equipment using the histogram equalization method;
carrying out data enhancement processing on the face expression images using noise addition, random perturbation and transformation methods, so as to increase the number of face expression images and establish the attendant face expression library.
4. The attitude assessment method based on facial expression recognition according to claim 1, characterized in that establishing the expression recognition model comprises:
inputting the face pictures in the attendant face expression library and in the comparison expression library of the attendant into the deep neural network model; after the deep neural network model extracts the face feature points in the face pictures of the attendant face expression library, locating the faces of the face expression images and cutting the face pictures;
based on the cut face pictures, the deep neural network model extracting the face feature points again according to the face pictures in the comparison expression library of the attendant;
based on the extracted feature points, re-cropping the face images for training, and establishing the expression recognition model.
5. The attitude assessment method based on facial expression recognition according to any one of claims 1 to 4, characterized in that recognizing the expression of the attendant using the expression recognition model so as to judge the attitude of the attendant comprises:
setting, according to the working hours of the attendant, the time period during which the camera equipment shoots the face expression images of the attendant;
sequentially sending the attendant face expression images shot during the time period to the expression recognition model, and analyzing the attitude of the attendant in the preset time period according to a predetermined analysis algorithm, wherein the predetermined analysis algorithm is:
if, in the time period, the first ratio of the number of pictures representing the first expression label to the total number of pictures is greater than or equal to a first threshold, and the third ratio of the number of pictures representing the third expression label to the total number of pictures is less than or equal to a second threshold, determining that the attitude of the attendant in the preset time period is the first expression label;
if the first ratio is less than the first threshold but greater than a third threshold, and the third ratio is less than or equal to the second threshold, determining that the attitude of the attendant in the preset time period is the first expression label;
if the first ratio is less than or equal to the third threshold, and the third ratio is less than or equal to the second threshold, determining that the attitude of the attendant in the preset time period is the second expression label;
if the third ratio is greater than the second threshold, determining that the attitude of the attendant in the preset time period is the third expression label.
6. An attitude assessment device based on facial expression recognition, characterized in that the device comprises a memory and a processor, the memory storing an attitude assessment program runnable on the processor, and the attitude assessment program realizing the following steps when executed by the processor:
extracting the face data of multiple expressions in a preset expression database, and establishing a comparison expression library of the attendant;
shooting face expression images of multiple expressions of the attendant with camera equipment, performing digital image processing on the shot face expression images of the attendant, and establishing an attendant face expression library;
training a deep neural network model using the attendant face expression library and the attendant comparison expression library, and establishing an expression recognition model;
recognizing the expression of the attendant using the expression recognition model, so as to judge the attitude of the attendant.
7. The attitude assessment device based on facial expression recognition according to claim 6, characterized in that the attitude assessment program also realizes the following step when executed by the processor:
marking the pictures in the attendant face expression library and the attendant comparison expression library with the three affective labels happy, normal and angry.
8. The attitude assessment device based on facial expression recognition according to claim 6, characterized in that performing digital image processing on the shot face expression images of the attendant comprises:
normalizing the face brightness in the face expression images of the attendant shot by the camera equipment using the histogram equalization method;
carrying out data enhancement processing on the face expression images using noise addition, random perturbation and transformation methods, so as to increase the number of face expression images and establish the attendant face expression library.
9. The attitude assessment device based on facial expression recognition according to claim 6, characterized in that establishing the expression recognition model comprises:
inputting the face pictures in the attendant face expression library and in the comparison expression library of the attendant into the deep neural network model; after the deep neural network model extracts the face feature points in the face pictures of the attendant face expression library, locating the faces of the face expression images and cutting the face pictures;
based on the cut face pictures, the deep neural network model extracting the face feature points again according to the face pictures in the comparison expression library of the attendant;
based on the extracted feature points, re-cropping the face images for training, and establishing the expression recognition model.
10. A computer readable storage medium on which an attitude assessment program is stored, the attitude assessment program being executable by one or more processors to realize the steps of the attitude assessment method based on facial expression recognition according to any one of claims 1 to 5.
CN201910007335.9A 2019-01-04 2019-01-04 Attitude appraisal procedure, device and storage medium based on facial expression recognition Pending CN109871751A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910007335.9A CN109871751A (en) 2019-01-04 2019-01-04 Attitude appraisal procedure, device and storage medium based on facial expression recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910007335.9A CN109871751A (en) 2019-01-04 2019-01-04 Attitude appraisal procedure, device and storage medium based on facial expression recognition

Publications (1)

Publication Number Publication Date
CN109871751A 2019-06-11

Family

ID=66917509

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910007335.9A Pending CN109871751A (en) 2019-01-04 2019-01-04 Attitude appraisal procedure, device and storage medium based on facial expression recognition

Country Status (1)

Country Link
CN (1) CN109871751A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100105247A (en) * 2009-03-20 2010-09-29 한국전자통신연구원 System and method for face recognition performance measuring of intelligent robot
CN105354527A (en) * 2014-08-20 2016-02-24 南京普爱射线影像设备有限公司 Negative expression recognizing and encouraging system
CN105608447A * 2016-02-17 2016-05-25 陕西师范大学 Deep convolutional neural network method for detecting human face smile expressions
CN106096831A * 2016-06-08 2016-11-09 山西万立科技有限公司 Expressway civilized service overall evaluation system
CN108427916A * 2018-02-11 2018-08-21 上海复旦通讯股份有限公司 Monitoring system and monitoring method for customer service agent mood
CN108694372A * 2018-03-23 2018-10-23 广东亿迅科技有限公司 Live video streaming customer service attitude evaluation method and device


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020248782A1 (en) * 2019-06-14 2020-12-17 南京云创大数据科技股份有限公司 Intelligent establishment method for asian face database
CN110458008A (en) * 2019-07-04 2019-11-15 深圳壹账通智能科技有限公司 Method for processing video frequency, device, computer equipment and storage medium
CN110895685A (en) * 2019-11-25 2020-03-20 创新奇智(上海)科技有限公司 Smile service quality evaluation system and evaluation method based on deep learning
CN111885343A (en) * 2020-07-31 2020-11-03 中国工商银行股份有限公司 Feature processing method and device, electronic equipment and readable storage medium
CN111885343B (en) * 2020-07-31 2022-06-14 中国工商银行股份有限公司 Feature processing method and device, electronic equipment and readable storage medium
CN114173061A (en) * 2021-12-13 2022-03-11 深圳万兴软件有限公司 Multi-mode camera shooting control method and device, computer equipment and storage medium
CN114173061B (en) * 2021-12-13 2023-09-29 深圳万兴软件有限公司 Multi-mode camera shooting control method and device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN107633204B (en) Face occlusion detection method, apparatus and storage medium
CN109871751A (en) Attitude appraisal procedure, device and storage medium based on facial expression recognition
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
Makhmudkhujaev et al. Facial expression recognition with local prominent directional pattern
CN107679448B (en) Eyeball action-analysing method, device and storage medium
CN110728209B (en) Gesture recognition method and device, electronic equipment and storage medium
CN108829900B (en) Face image retrieval method and device based on deep learning and terminal
Baskan et al. Projection based method for segmentation of human face and its evaluation
WO2021051611A1 (en) Face visibility-based face recognition method, system, device, and storage medium
CN110516544B (en) Face recognition method and device based on deep learning and computer readable storage medium
CN104732200B Skin type and skin problem recognition method
Gosavi et al. Facial expression recognition using principal component analysis
CN112800903A (en) Dynamic expression recognition method and system based on space-time diagram convolutional neural network
CN109190535A Facial complexion analysis method and system based on deep learning
CN113449704B (en) Face recognition model training method and device, electronic equipment and storage medium
CN111178195A (en) Facial expression recognition method and device and computer readable storage medium
CN110309709A (en) Face identification method, device and computer readable storage medium
Paul et al. Extraction of facial feature points using cumulative histogram
CN115035581A (en) Facial expression recognition method, terminal device and storage medium
CN115205933A (en) Facial expression recognition method, device, equipment and readable storage medium
CN110363747A (en) Intelligent abnormal cell judgment method, device and computer readable storage medium
CN113468925B (en) Occlusion face recognition method, intelligent terminal and storage medium
Abdallah et al. A new color image database for benchmarking of automatic face detection and human skin segmentation techniques
CN110222571B (en) Intelligent judgment method and device for black eye and computer readable storage medium
CN115273150A (en) Novel identification method and system for wearing safety helmet based on human body posture estimation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination