CN104881660A - Facial expression recognition and interaction method based on GPU acceleration - Google Patents

Facial expression recognition and interaction method based on GPU acceleration

Info

Publication number
CN104881660A
CN104881660A (application CN201510335907.8A, granted as CN104881660B)
Authority
CN
China
Prior art keywords
expression
face
micro
recognition
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510335907.8A
Other languages
Chinese (zh)
Other versions
CN104881660B (en)
Inventor
潘志庚 (Pan Zhigeng)
严政 (Yan Zheng)
张明敏 (Zhang Mingmin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin Jidong Culture and Art Group Co., Ltd.
Original Assignee
Jilin Jiyuan Space-Time Animation Game Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin Jiyuan Space-Time Animation Game Technology Co Ltd filed Critical Jilin Jiyuan Space-Time Animation Game Technology Co Ltd
Priority to CN201510335907.8A priority Critical patent/CN104881660B/en
Publication of CN104881660A publication Critical patent/CN104881660A/en
Application granted granted Critical
Publication of CN104881660B publication Critical patent/CN104881660B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G06V40/176 Dynamic expression
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a facial expression recognition and interaction method based on GPU acceleration, belonging to the field of pattern recognition. The method establishes a framework for rapidly recognizing micro-expressions with GPU acceleration. First, user facial images are acquired through an ordinary camera or a video. Face detection and facial feature point recognition are performed with a recognition method based on Haar features and AdaBoost, locating the position coordinates of the user's pupils, nose tip, etc. The face is divided into several critical regions, and GPU-accelerated Gabor filtering with five scales and eight orientations is applied to the whole face; features are then extracted from the critical regions near the facial feature points. Features extracted from the CK, CK+ and MMI databases are labeled according to whether a micro-expression occurs, and feeding the labeled features to an SVM yields a recognition model. At recognition time, the extracted features are passed through the recognition model to obtain the specific micro-expression information.

Description

Facial expression recognition and interaction method based on GPU acceleration
Technical field
The present invention relates to the fields of computer digital multimedia, augmented reality and motion-sensing interaction, in particular to a facial expression detection and interaction method based on an ordinary camera, and especially to a facial expression recognition and interaction method based on GPU acceleration.
Background art
The face is one of the most expressive channels in nonverbal communication, and facial expressions provide clues to mood, intention and personality. The ability to recognize facial expressions accurately and efficiently can support many applications, and the potential is enormous.
Over the past few decades, many computer systems have been built to understand human expressions or to interact with them. Most expression recognition systems target a few prototype expressions (happy, surprised, angry, sad, fearful, disgusted). In daily life these prototype expressions rarely occur; much communication happens through one or two small changes in facial features, such as pressing the lips together to express anger, or turning the lip corners down to express sadness. Changes in individual features, particularly in the eyebrow and eyelid regions, also act as a kind of paralanguage, such as raising the eyebrows in greeting. To capture such subtle emotions and paralanguage, automatic detection of slight changes in facial expression is essential.
The Facial Action Coding System (FACS) is the most popular anatomically based coding system: by observing momentary subtle changes in facial appearance, it encodes them as movements of different muscles. Using FACS, researchers can decode any anatomically feasible facial expression into Action Units (AUs). The six basic expressions (happiness, sadness, surprise, fear, anger and disgust) can then be represented as combinations of AUs; for example, happiness is the combination of AU6 and AU12.
Facial expression recognition divides mainly into recognition of prototype expressions and recognition of facial Action Units. Shan used LBP to classify prototype expressions. Zhao proposed a dynamic texture recognition algorithm based on LBP and used it to classify prototype expressions. Cohn used Gabor filtering to recognize smiles. Sebe used a PBVD model initialized by the user to obtain facial feature points, derived expression features from the model deformation, and classified the features with an SVM to obtain prototype expressions.
There are also many algorithms for recognizing facial Action Units. Lucey used the AAM (Active Appearance Model) to match 63 facial feature points and classified AUs with SVM, nearest-neighbour (NN) and LDA classifiers. The LPQ-TOP method proposed by Jiang improves on LBP-TOP and LPQ and achieves classification of 9 AUs with an accuracy of 84.6%. Bartlett and Littlewort used Gabor filtering to recognize prototype expressions and AUs. Littlewort also used Gabor filtering for feature extraction from facial images, with SVMs for AU classification.
In facial expression recognition research, many researchers focus on recognition accuracy and pay little attention to efficiency. It is therefore very necessary to develop a system that can perform expression interaction in real time: on the one hand it lets users evaluate the interactive system better, and on the other hand richer interaction effects can be built on top of it.
A real-time expression interaction system is a software tool for recognizing facial expression AUs in real time. The system can process real-time video from a camera as well as video files or single pictures. By recognizing 16 facial AUs, the system can respond to a single AU or, through combinations, recognize the six prototype expressions (happy, sad, surprised, angry, disgusted and fearful) or trigger customized interactive actions. All system output is shown in a graphical interface or written directly to a file. Through socket-based inter-process communication, the expression interaction system can provide AU recognition to other applications in real time, which facilitates AU-based application development.
Summary of the invention
The object of the present invention is to provide a facial expression recognition and interaction method based on GPU acceleration, solving problems of the prior art such as the low efficiency of face recognition. Recognizing facial expressions with GPU-accelerated Gabor filtering, the method lets a user interact with a computer through changes of expression in front of an ordinary camera, with broad application prospects in the digital home, games and medicine.
Above-mentioned purpose of the present invention is achieved through the following technical solutions:
The facial expression recognition and interaction method based on GPU acceleration comprises the following steps:
Step (1): obtain dynamic facial expressions through an ordinary camera or a video:
An ordinary camera is connected to the computer and placed directly in front of the user's face, 50-60 cm away, and images containing the frontal face are captured by the camera;
Step (2): detect faces with a recognition method based on Haar features and an AdaBoost cascade classifier, and extract the face image closest to the camera, i.e. the one occupying the largest part of the picture;
Step (3): from the extracted face image, use the Haar-feature and AdaBoost based recognition method to locate the position coordinates of the pupils, nose tip, etc.
Step (4): divide the face into several critical regions:
Based on analysis of facial micro-expressions, the face is divided into several regions. Centered on the pupil position, the region extending 15 pixels to the left, 15 pixels to the right, 35 pixels up and 15 pixels down is the eyebrow expression region, used to detect slight changes of the eyebrow; further regions cover the eye, cheek and lip micro-expressions respectively;
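The eyebrow-region arithmetic above can be sketched as a small helper; the function name, the clamping to the image bounds and the 150*150 default are illustrative assumptions, not part of the patent text:

```python
def eyebrow_roi(pupil_x, pupil_y, img_w=150, img_h=150):
    """Eyebrow expression region around a pupil: 15 px left/right,
    35 px up, 15 px down; clamping to the image is an assumption."""
    x0 = max(pupil_x - 15, 0)
    x1 = min(pupil_x + 15, img_w - 1)
    y0 = max(pupil_y - 35, 0)
    y1 = min(pupil_y + 15, img_h - 1)
    return x0, y0, x1, y1

# Example: a pupil near (50, 60) in the 150*150 normalized face image
print(eyebrow_roi(50, 60))  # (35, 25, 65, 75)
```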
Step (5): apply GPU-accelerated Gabor filtering to the whole face:
Gabor filtering requires a large number of convolutions of the image with the real and imaginary parts of the Gabor kernels, and the larger the kernel and the image, the longer this takes; we choose Gabor kernels of 21*21 pixels and a convolved image of 150*150 pixels. Because direct convolution is time-consuming, an FFT-based method converts the spatial-domain convolution into a frequency-domain multiplication: each convolution needs only one FFT, one element-wise multiplication and one inverse FFT, giving a time complexity of O(n log n) and a considerable speed-up. GPU parallel processing further accelerates the FFT, and the FFT of each Gabor kernel is kept in video memory to save recomputation. After Gabor filtering with five scales and eight orientations, each pixel has 40 magnitudes as features.
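As a rough illustration of the FFT convolution trick described above (spatial convolution replaced by one FFT, one multiply and one inverse FFT), here is a CPU NumPy sketch; the patent runs the FFTs on the GPU and caches the kernel FFTs in video memory, and the Gabor parameters (sigma, lambda, gamma) below are assumed values:

```python
import numpy as np

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lam=8.0, gamma=0.5):
    """Complex Gabor kernel (real + imaginary parts in one array);
    sigma/lam/gamma are illustrative, not from the patent."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) \
         * np.exp(1j * 2 * np.pi * xr / lam)

def fft_convolve(img, kern):
    """Circular convolution via FFT: one FFT of the image, one multiply
    with the (cacheable) kernel FFT, one inverse FFT."""
    kern_fft = np.fft.fft2(kern, s=img.shape)  # would be cached in video memory
    return np.fft.ifft2(np.fft.fft2(img) * kern_fft)

img = np.random.rand(150, 150)
resp = fft_convolve(img, gabor_kernel())
mag = np.abs(resp)   # one of the 40 magnitude features per pixel
print(mag.shape)     # (150, 150)
```

With 5 scales and 8 orientations this loop runs 40 times per frame, which is why caching the kernel FFTs pays off.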
Step (6): extract features from the critical regions near the facial feature points:
For the eyebrow expression region ROI and the other ROIs obtained in step (4), arrange the pixels of each ROI from left to right and top to bottom, then substitute in the 40 magnitudes of each pixel to obtain the feature vector of that ROI;
Step (7): if in training mode, label the extracted features according to whether a micro-expression occurs, and generate recognition models by incremental SVM training: the CK, CK+ and MMI facial expression databases contain expression pictures together with manually annotated micro-expression information. Passing the corresponding images through steps (1) to (6) yields the corresponding features; according to the manual annotations, the micro-expression region features are collected into classification sets per concrete micro-expression and trained with a penalized (C-)SVM, the penalty parameter being 10, to obtain a micro-expression recognition model. In total 16 expressions are recognized, so 16 micro-expression recognition models are generated;
Step (8): if in recognition mode, feed the extracted features into the corresponding recognition model to obtain the specific micro-expression information: step (7) generates 16 micro-expression recognition models; putting the features generated in step (6) into the corresponding C-SVM accurately yields whether each micro-expression occurs.
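A minimal sketch of the training/recognition split in steps (7)-(8), using a linear hinge-loss SVM trained by sub-gradient descent as a stand-in for the patent's penalized C-SVM (in practice a library implementation such as LIBSVM or scikit-learn's SVC with C=10 would be used); the toy data, learning rate and epoch count are illustrative:

```python
import numpy as np

def train_linear_svm(X, y, C=10.0, epochs=200, lr=0.01):
    """Linear C-SVM via sub-gradient descent on the hinge loss;
    C=10 matches the patent's penalty parameter. y must be in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                      # inside margin or misclassified
        grad_w = w - C * (y[mask, None] * X[mask]).sum(axis=0)
        grad_b = -C * y[mask].sum()
        w -= lr * grad_w / n
        b -= lr * grad_b / n
    return w, b

def predict(w, b, X):
    return np.sign(X @ w + b)

# Toy "micro-expression occurs / does not occur" data; one such binary
# model would be trained per micro-expression (16 in total).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2, 1, (50, 40)), rng.normal(-2, 1, (50, 40))])
y = np.array([1] * 50 + [-1] * 50)
w, b = train_linear_svm(X, y)
print((predict(w, b, X) == y).mean())  # training accuracy on separable toy data
```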
In step (2), faces are detected with the Haar-feature and AdaBoost cascade classifier based recognition method, and the face image closest to the camera, i.e. the one occupying the largest part of the picture, is extracted as follows:
(2.1) On the image obtained in step (1), perform face detection with OpenCV's built-in Haar-feature and AdaBoost cascade classifier, with the scale parameter set to 1.1 and the minNeighbors parameter set to 3;
(2.2) Sort all detected faces by size from large to small, compute the median face size, delete faces more than 30% larger or more than 30% smaller than the median, then select the largest remaining face and record its coordinates;
(2.3) Image preprocessing: in expression recognition, source pictures differ widely in size, illumination and position, while the ideal input is a pure expression region, so the following preprocessing is needed:
(2.3.1) Illumination normalization, i.e. histogram equalization;
(2.3.2) Geometric normalization, i.e. conversion to 150*150 resolution;
(2.4) Cover the image with an elliptical mask centered on the face, the major-axis length being 47% of the picture height and the minor-axis length 41.6% of the picture width, to mark the pure expression region and effectively exclude noise outside the face.
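The preprocessing in (2.3)-(2.4) can be sketched with NumPy alone (in practice OpenCV's equalizeHist, resize and ellipse drawing would be used); the nearest-neighbour resize and the reading of the stated percentages as full axis lengths are assumptions:

```python
import numpy as np

def equalize_hist(img):
    """Histogram equalization of an 8-bit grayscale image
    (NumPy stand-in for cv2.equalizeHist)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    lut = np.round(255 * (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1))
    return lut.astype(np.uint8)[img]

def resize_nn(img, size=150):
    """Nearest-neighbour resize to size*size (cv2.resize would normally
    be used; nearest-neighbour keeps the sketch dependency-free)."""
    h, w = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows[:, None], cols]

def elliptical_mask(size=150, axis_h=0.47, axis_w=0.416):
    """Elliptical face mask: major axis 47% of the image height, minor
    axis 41.6% of the image width, read here as full axis lengths."""
    c = (size - 1) / 2.0
    y, x = np.mgrid[0:size, 0:size]
    a, b = axis_h * size / 2, axis_w * size / 2   # semi-axes
    return ((y - c) / a) ** 2 + ((x - c) / b) ** 2 <= 1.0

face = np.random.randint(0, 256, (120, 100), dtype=np.uint8)
norm = resize_nn(equalize_hist(face))
masked = np.where(elliptical_mask(), norm, 0)  # zero out non-face noise
print(masked.shape)  # (150, 150)
```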
The beneficial effects of the present invention are: the invention solves the problem of low face recognition efficiency; a socket connection lets the method interact with other programs; the user needs no interaction device such as a mouse or keyboard and can interact with the computer using facial expressions alone. The method has broad prospects in the digital home, games and medical fields, with high usability and practical significance.
Brief description of the drawings
The accompanying drawings described herein are used to provide a further understanding of the present invention and form part of this application; the illustrative embodiments of the present invention and their descriptions explain the present invention and do not unduly limit it.
Fig. 1 is the flow chart of the present invention;
Fig. 2 is the software interface diagram of the present invention.
Embodiment
The details and embodiments of the present invention are further described below with reference to the accompanying drawings.
As shown in Fig. 1 and Fig. 2, the facial expression recognition and interaction method based on GPU acceleration of the present invention comprises the following steps:
Step 1. Obtain dynamic facial expressions through an ordinary camera or a video: an ordinary camera is connected to the computer and placed directly in front of the user's face, 50-60 cm away, and images containing the frontal face are captured by the camera.
Step 2. Detect faces with a recognition method based on Haar features and an AdaBoost cascade classifier, and extract the face image closest to the camera (occupying the largest part of the picture):
2.1 On the image obtained in step 1, perform face detection with OpenCV's built-in Haar-feature and AdaBoost cascade classifier, with the scale parameter set to 1.1 and the minNeighbors parameter set to 3;
2.2 Sort all detected faces by size from large to small, compute the median face size, delete faces that are too large (more than 30% above the median) or too small (more than 30% below the median), then select the largest remaining face and record its coordinates;
2.3 Image preprocessing: in expression recognition, source pictures differ widely in size, illumination and position, while the ideal input is a pure expression region, so the following preprocessing is needed:
2.3.1 Illumination normalization (histogram equalization);
2.3.2 Geometric normalization (conversion to 150*150 resolution);
2.4 Cover the image with an elliptical mask centered on the face (major-axis length 47% of the picture height, minor-axis length 41.6% of the picture width) to mark the pure expression region and effectively exclude noise outside the face.
Step 3. From the extracted face image, use the Haar-feature and AdaBoost based recognition method to locate the position coordinates of the pupils, nose tip, etc.
Step 4. Divide the face into several critical regions:
Based on analysis of facial micro-expressions, the face is divided into several regions. Centered on the pupil position, the region extending 15 pixels to the left, 15 pixels to the right, 35 pixels up and 15 pixels down is used to detect slight changes of the eyebrow; further regions cover the eye, cheek and lip micro-expressions respectively.
Step 5. Apply GPU-accelerated Gabor filtering to the whole face:
Gabor filtering requires a large number of convolutions of the image with the real and imaginary parts of the Gabor kernels, and the larger the kernel and the image, the longer this takes; we choose Gabor kernels of 21*21 pixels and a convolved image of 150*150 pixels. Because direct convolution is time-consuming, an FFT-based method converts the spatial-domain convolution into a frequency-domain multiplication: each convolution needs only one FFT, one element-wise multiplication and one inverse FFT, giving a time complexity of O(n log n) and a considerable speed-up. GPU parallel processing further accelerates the FFT, and the FFT of each Gabor kernel is kept in video memory to save recomputation. After Gabor filtering, each pixel has 40 magnitudes as features.
Step 6. Extract features from the critical regions near the facial feature points:
For the eyebrow expression region ROI and the other ROIs obtained in step 4, arrange the pixels of each ROI from left to right and top to bottom, then substitute in the 40 magnitudes of each pixel to obtain the feature vector of that ROI.
Step 7. If in training mode, label the extracted features according to whether a micro-expression occurs, and generate recognition models by incremental SVM training: the CK, CK+ and MMI facial expression databases contain expression pictures together with manually annotated micro-expression information. Passing the corresponding images through steps 1 to 6 yields the corresponding features; according to the manual annotations, the micro-expression region features are collected into classification sets per concrete micro-expression and trained with a penalized (C-)SVM, the penalty parameter being 10, to obtain a micro-expression recognition model. In total 16 expressions are recognized, so 16 micro-expression recognition models are generated.
Step 8. If in recognition mode, feed the extracted features into the corresponding recognition model to obtain the specific micro-expression information: step 7 generates 16 micro-expression recognition models; putting the features generated in step 6 into the corresponding C-SVM accurately yields whether each micro-expression occurs.
The foregoing are only preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement or improvement made to the present invention shall be included within its protection scope.

Claims (2)

1. A facial expression recognition and interaction method based on GPU acceleration, characterized by comprising the following steps:
Step (1): obtain dynamic facial expressions through an ordinary camera or a video:
An ordinary camera is connected to the computer and placed directly in front of the user's face, 50-60 cm away, and images containing the frontal face are captured by the camera;
Step (2): detect faces with a recognition method based on Haar features and an AdaBoost cascade classifier, and extract the face image closest to the camera, i.e. the one occupying the largest part of the picture;
Step (3): from the extracted face image, use the Haar-feature and AdaBoost based recognition method to locate the position coordinates of the pupils and nose tip;
Step (4): divide the face into several critical regions:
Based on analysis of facial micro-expressions, the face is divided into several regions. Centered on the pupil position, the region extending 15 pixels to the left, 15 pixels to the right, 35 pixels up and 15 pixels down is the eyebrow expression region, used to detect slight changes of the eyebrow; further regions cover the eye, cheek and lip micro-expressions respectively;
Step (5): apply GPU-accelerated Gabor filtering to the whole face:
Gabor filtering requires a large number of convolutions of the image with the real and imaginary parts of the Gabor kernels, and the larger the kernel and the image, the longer this takes; we choose Gabor kernels of 21*21 pixels and a convolved image of 150*150 pixels; because direct convolution is time-consuming, an FFT-based method converts the spatial-domain convolution into a frequency-domain multiplication: each convolution needs only one FFT, one element-wise multiplication and one inverse FFT, giving a time complexity of O(n log n) and a considerable speed-up; GPU parallel processing further accelerates the FFT, and the FFT of each Gabor kernel is kept in video memory to save recomputation; after Gabor filtering, each pixel has 40 magnitudes as features;
Step (6): extract features from the critical regions near the facial feature points:
For the eyebrow expression region ROI and the other ROIs obtained in step (4), arrange the pixels of each ROI from left to right and top to bottom, then substitute in the 40 magnitudes of each pixel to obtain the feature vector of that ROI;
Step (7): if in training mode, label the extracted features according to whether a micro-expression occurs, and generate recognition models by incremental SVM training: the CK, CK+ and MMI facial expression databases contain expression pictures together with manually annotated micro-expression information; passing the corresponding images through steps (1) to (6) yields the corresponding features; according to the manual annotations, the micro-expression region features are collected into classification sets per concrete micro-expression and trained with a penalized (C-)SVM, the penalty parameter being 10, to obtain a micro-expression recognition model; in total 16 expressions are recognized, so 16 micro-expression recognition models are generated;
Step (8): if in recognition mode, feed the extracted features into the corresponding recognition model to obtain the specific micro-expression information: step (7) generates 16 micro-expression recognition models; putting the features generated in step (6) into the corresponding C-SVM accurately yields whether each micro-expression occurs.
2. The facial expression recognition and interaction method based on GPU acceleration according to claim 1, characterized in that in step (2) faces are detected with the Haar-feature and AdaBoost cascade classifier based recognition method, and the face image closest to the camera, i.e. the one occupying the largest part of the picture, is extracted as follows:
(2.1) On the image obtained in step (1), perform face detection with OpenCV's built-in Haar-feature and AdaBoost cascade classifier, with the scale parameter set to 1.1 and the minNeighbors parameter set to 3;
(2.2) Sort all detected faces by size from large to small, compute the median face size, delete faces more than 30% larger or more than 30% smaller than the median, then select the largest remaining face and record its coordinates;
(2.3) Image preprocessing: in expression recognition, source pictures differ widely in size, illumination and position, while the ideal input is a pure expression region, so the following preprocessing is needed:
(2.3.1) Illumination normalization, i.e. histogram equalization;
(2.3.2) Geometric normalization, i.e. conversion to 150*150 resolution;
(2.4) Cover the image with an elliptical mask centered on the face, the major-axis length being 47% of the picture height and the minor-axis length 41.6% of the picture width, to mark the pure expression region and effectively exclude noise outside the face.
CN201510335907.8A 2015-06-17 2015-06-17 Facial expression recognition and interaction method based on GPU acceleration Active CN104881660B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510335907.8A CN104881660B (en) 2015-06-17 2015-06-17 Facial expression recognition and interaction method based on GPU acceleration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510335907.8A CN104881660B (en) 2015-06-17 2015-06-17 Facial expression recognition and interaction method based on GPU acceleration

Publications (2)

Publication Number Publication Date
CN104881660A true CN104881660A (en) 2015-09-02
CN104881660B CN104881660B (en) 2018-01-09

Family

ID=53949147

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510335907.8A Active CN104881660B (en) 2015-06-17 2015-06-17 Facial expression recognition and interaction method based on GPU acceleration

Country Status (1)

Country Link
CN (1) CN104881660B (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105827976A (en) * 2016-04-26 2016-08-03 北京博瑞空间科技发展有限公司 GPU (graphics processing unit)-based video acquisition and processing device and system
CN107273876A (en) * 2017-07-18 2017-10-20 山东大学 Deep-learning-based automatic micro-expression recognition method using a "macro-to-micro" transformation model
CN107341435A (en) * 2016-08-19 2017-11-10 北京市商汤科技开发有限公司 Processing method, device and the terminal device of video image
CN107480622A (en) * 2017-08-07 2017-12-15 深圳市科迈爱康科技有限公司 Micro- expression recognition method, device and storage medium
CN107633207A (en) * 2017-08-17 2018-01-26 平安科技(深圳)有限公司 AU characteristic recognition methods, device and storage medium
CN107832691A (en) * 2017-10-30 2018-03-23 北京小米移动软件有限公司 Micro- expression recognition method and device
CN108288034A (en) * 2018-01-11 2018-07-17 中国地质大学(武汉) A kind of method for evaluating quality and system of game design
CN108646920A (en) * 2018-05-16 2018-10-12 Oppo广东移动通信有限公司 Identify exchange method, device, storage medium and terminal device
CN108985873A (en) * 2017-05-30 2018-12-11 株式会社途伟尼 Cosmetics recommended method, the recording medium for being stored with program, the computer program to realize it and cosmetics recommender system
CN109344760A (en) * 2018-09-26 2019-02-15 江西师范大学 A kind of construction method of natural scene human face expression data collection
CN109389074A (en) * 2018-09-29 2019-02-26 东北大学 A kind of expression recognition method extracted based on human face characteristic point
CN109472198A (en) * 2018-09-28 2019-03-15 武汉工程大学 A kind of video smiling face's recognition methods of attitude robust
CN109753848A (en) * 2017-11-03 2019-05-14 杭州海康威视数字技术股份有限公司 Execute the methods, devices and systems of face identifying processing
CN110046559A (en) * 2019-03-28 2019-07-23 广东工业大学 A kind of face identification method
CN110110671A (en) * 2019-05-09 2019-08-09 谷泽丰 A kind of character analysis method, apparatus and electronic equipment
CN110621228A (en) * 2017-05-01 2019-12-27 三星电子株式会社 Determining emotions using camera-based sensing
CN110852220A (en) * 2019-10-30 2020-02-28 深圳智慧林网络科技有限公司 Intelligent recognition method of facial expression, terminal and computer readable storage medium
CN111353354A (en) * 2018-12-24 2020-06-30 杭州海康威视数字技术股份有限公司 Human body stress information identification method and device and electronic equipment
CN115620117A (en) * 2022-12-20 2023-01-17 吉林省信息技术研究所 Face information encryption method and system for network access authority authentication

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030133599A1 (en) * 2002-01-17 2003-07-17 International Business Machines Corporation System method for automatically detecting neutral expressionless faces in digital images
CN101739712A (en) * 2010-01-25 2010-06-16 四川大学 Video-based 3D human face expression cartoon driving method
CN103714574A (en) * 2013-12-19 2014-04-09 浙江大学 GPU acceleration-based sea scene modeling and real-time interactive rendering method


Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105827976A (en) * 2016-04-26 2016-08-03 北京博瑞空间科技发展有限公司 GPU (graphics processing unit)-based video acquisition and processing device and system
CN107341435A (en) * 2016-08-19 2017-11-10 北京市商汤科技开发有限公司 Video image processing method, device and terminal device
CN110621228A (en) * 2017-05-01 2019-12-27 三星电子株式会社 Determining emotions using camera-based sensing
CN110621228B (en) * 2017-05-01 2022-07-29 三星电子株式会社 Determining emotions using camera-based sensing
CN108985873A (en) * 2017-05-30 2018-12-11 株式会社途伟尼 Cosmetics recommendation method, recording medium storing a program, computer program for realizing it, and cosmetics recommendation system
CN107273876A (en) * 2017-07-18 2017-10-20 山东大学 Automatic micro-expression recognition method based on a deep-learning "macro-to-micro" transformation model
CN107273876B (en) * 2017-07-18 2019-09-10 山东大学 Automatic micro-expression recognition method based on a deep-learning "macro-to-micro" transformation model
WO2019029261A1 (en) * 2017-08-07 2019-02-14 深圳市科迈爱康科技有限公司 Micro-expression recognition method, device and storage medium
CN107480622A (en) * 2017-08-07 2017-12-15 深圳市科迈爱康科技有限公司 Micro-expression recognition method, device and storage medium
CN107633207A (en) * 2017-08-17 2018-01-26 平安科技(深圳)有限公司 AU feature recognition method, device and storage medium
CN107832691A (en) * 2017-10-30 2018-03-23 北京小米移动软件有限公司 Micro-expression recognition method and device
CN107832691B (en) * 2017-10-30 2021-10-26 北京小米移动软件有限公司 Micro-expression identification method and device
CN109753848B (en) * 2017-11-03 2021-01-26 杭州海康威视数字技术股份有限公司 Method, device and system for executing face recognition processing
CN109753848A (en) * 2017-11-03 2019-05-14 杭州海康威视数字技术股份有限公司 Method, device and system for executing face recognition processing
CN108288034A (en) * 2018-01-11 2018-07-17 中国地质大学(武汉) Quality evaluation method and system for game design
CN108646920A (en) * 2018-05-16 2018-10-12 Oppo广东移动通信有限公司 Recognition and interaction method, device, storage medium and terminal device
CN109344760A (en) * 2018-09-26 2019-02-15 江西师范大学 Method for constructing a facial expression dataset in natural scenes
CN109472198B (en) * 2018-09-28 2022-03-15 武汉工程大学 Gesture robust video smiling face recognition method
CN109472198A (en) * 2018-09-28 2019-03-15 武汉工程大学 Pose-robust video smiling face recognition method
CN109389074A (en) * 2018-09-29 2019-02-26 东北大学 Facial expression recognition method based on facial feature point extraction
CN109389074B (en) * 2018-09-29 2022-07-01 东北大学 Facial feature point extraction-based expression recognition method
CN111353354A (en) * 2018-12-24 2020-06-30 杭州海康威视数字技术股份有限公司 Human body stress information identification method and device and electronic equipment
CN111353354B (en) * 2018-12-24 2024-01-23 杭州海康威视数字技术股份有限公司 Human body stress information identification method and device and electronic equipment
CN110046559A (en) * 2019-03-28 2019-07-23 广东工业大学 Face recognition method
CN110110671A (en) * 2019-05-09 2019-08-09 谷泽丰 Character analysis method and apparatus, and electronic device
CN110852220A (en) * 2019-10-30 2020-02-28 深圳智慧林网络科技有限公司 Intelligent facial expression recognition method, terminal and computer-readable storage medium
CN110852220B (en) * 2019-10-30 2023-08-18 深圳智慧林网络科技有限公司 Intelligent facial expression recognition method, terminal and computer readable storage medium
CN115620117A (en) * 2022-12-20 2023-01-17 吉林省信息技术研究所 Face information encryption method and system for network access authority authentication
CN115620117B (en) * 2022-12-20 2023-03-14 吉林省信息技术研究所 Face information encryption method and system for network access authority authentication

Also Published As

Publication number Publication date
CN104881660B (en) 2018-01-09

Similar Documents

Publication Publication Date Title
CN104881660B (en) Facial expression recognition and interaction method based on GPU acceleration
Fan et al. A dynamic framework based on local Zernike moment and motion history image for facial expression recognition
Burkert et al. Dexpression: Deep convolutional neural network for expression recognition
CN111563417B (en) Pyramid structure convolutional neural network-based facial expression recognition method
Zheng et al. Recent advances of deep learning for sign language recognition
CN103824052A (en) Multilevel semantic feature-based face feature extraction method and recognition method
CN110909680A (en) Facial expression recognition method and device, electronic equipment and storage medium
Mohseni et al. Facial expression recognition using anatomy based facial graph
Kulkarni et al. Analysis on techniques used to recognize and identifying the Human emotions
Liliana et al. Geometric facial components feature extraction for facial expression recognition
Ullah et al. Emotion recognition from occluded facial images using deep ensemble model
Sun et al. General-to-specific learning for facial attribute classification in the wild
Sun et al. Deep Facial Attribute Detection in the Wild: From General to Specific.
Zhang et al. Biometric recognition
Azam et al. Feature extraction trends for intelligent facial expression recognition: A survey
Wu et al. Spontaneous versus posed smile recognition via region-specific texture descriptor and geometric facial dynamics
Moreira et al. Eyes and eyebrows detection for performance driven animation
Singh et al. Continuous multimodal emotion recognition approach for AVEC 2017
Wang et al. Video-based emotion recognition using face frontalization and deep spatiotemporal feature
Rani et al. Implementation of emotion detection system using facial expressions
Wei et al. 3D facial expression recognition based on Kinect
Verma et al. Facial expression recognition: A review
Hu et al. Natural scene facial expression recognition with dimension reduction network
Mahajan et al. FCA: A Proposed Method for an Automatic Facial Expression Recognition System using ANN
Rivera et al. Development of an automatic expression recognition system based on facial action coding system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 130012 No. 168 Boxue Road, Changchun High-tech Industrial Development Zone, Jilin Province

Applicant after: JILIN JIYUAN SPACE-TIME CARTOON GAME SCIENCE AND TECHNOLOGY GROUP CO., LTD.

Address before: No. 2888 Silicon Valley Avenue, Changchun High-tech Zone, Jilin Province

Applicant before: JILIN JIYUAN SPACE-TIME ANIMATION GAME TECHNOLOGY CO., LTD.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 130012 No. 168 Boxue Road, Changchun High-tech Industrial Development Zone, Jilin Province

Patentee after: Jilin Jidong Culture and Art Group Co., Ltd.

Address before: 130012 No. 168 Boxue Road, Changchun High-tech Industrial Development Zone, Jilin Province

Patentee before: JILIN JIYUAN SPACE-TIME CARTOON GAME SCIENCE AND TECHNOLOGY GROUP CO., LTD.

CP01 Change in the name or title of a patent holder