CN111414884A - Facial expression recognition method based on edge calculation - Google Patents

Facial expression recognition method based on edge calculation

Info

Publication number
CN111414884A
CN111414884A (application CN202010230483.XA)
Authority
CN
China
Prior art keywords
facial
facial image
image
expression
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010230483.XA
Other languages
Chinese (zh)
Inventor
钱甜甜
张帆
杨健楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Moshen Information Technology Co ltd
Nanjing Tech University
Original Assignee
Nanjing Moshen Information Technology Co ltd
Nanjing Tech University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Moshen Information Technology Co ltd, Nanjing Tech University filed Critical Nanjing Moshen Information Technology Co ltd
Priority to CN202010230483.XA priority Critical patent/CN111414884A/en
Publication of CN111414884A publication Critical patent/CN111414884A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/94Hardware or software architectures specially adapted for image or video understanding
    • G06V10/95Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Abstract

A facial expression recognition method based on edge computing is executed on an edge device. Performing the recognition at the edge reduces request response time and network bandwidth usage, improves battery life, and helps ensure security and privacy.

Description

Facial expression recognition method based on edge calculation
Technical Field
The invention relates to the field of image processing, in particular to facial expression recognition based on image processing, and specifically provides a facial expression recognition method based on edge computing.
Background
Facial expressions are an important way for humans to express emotion, and they can be used to identify and assess a person's emotional state. Analysis of facial behavior has been applied in many fields to facilitate human-computer interaction, for example in the security, police, military, and medical industries. In particular, facial action unit (AU) detection can identify facial expressions by analyzing cues from muscle movements in local facial regions, from which the corresponding micro-expressions are predicted.
At present, tasks such as simple image classification, gesture recognition, sound detection, and motion analysis can be completed on edge devices. Because only the final result is transmitted, latency is minimized, privacy is improved, and bandwidth in the Internet of Things system is saved.
In the era of the Internet of Things, a huge number of electronic devices flood the Internet and generate large amounts of data that traditional cloud computing cannot process in a timely and effective manner. In particular, when such a system is used in an actual production process, transmitting a large number of frames from a camera to a back-end server consumes substantial network bandwidth, and in emergency situations the round-trip latency of real-time data becomes critical.
Disclosure of Invention
The invention aims to solve the above problems by providing a facial expression recognition method based on edge computing. The invention can reduce request response time and network bandwidth during expression recognition, improve battery life, and ensure security and privacy.
The technical scheme of the invention is as follows:
the invention provides a facial expression recognition method based on edge computing, which is executed on an edge device and comprises the following steps:
S1, facial image sample collection: for a plurality of individuals, acquire eight basic facial expressions, including the neutral expression, as facial image training samples;
S2, facial image sample training: extract the action unit (AU) features of each facial image sample, train SVM (support vector machine) classifiers corresponding to the basic expressions, and deploy the SVM classifiers on the edge device;
S3, facial image expression recognition: obtain a facial image to be recognized and send it to the edge device; on the edge device, process the facial image, compute its AU features, and input the AU features into the SVM classifier to obtain the recognized facial expression.
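The S1-S3 pipeline can be sketched as follows. This is an illustrative sketch, not code from the patent: the AU feature extraction is stubbed out with clustered random vectors, the eight expression names are an assumption (the patent only says "eight basic facial expressions including neutral"), and scikit-learn's `SVC` stands in for the SVM classifier deployed on the edge device.

```python
import numpy as np
from sklearn.svm import SVC

# Assumed label set; the patent specifies only "eight basic expressions
# including neutral", so the other seven names are illustrative.
EXPRESSIONS = ["neutral", "happy", "sad", "surprise",
               "fear", "disgust", "anger", "contempt"]

rng = np.random.default_rng(0)

# S1: collect AU feature vectors per expression (stubbed: one noisy cluster
# per expression, 20 samples each, 17-dimensional AU vectors).
X = np.vstack([rng.normal(loc=i, scale=0.3, size=(20, 17)) for i in range(8)])
y = np.repeat(np.arange(8), 20)

# S2: train the SVM classifier that would be installed on the edge device.
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

# S3: on the edge device, compute AU features for a new image and classify.
def recognize(au_features: np.ndarray) -> str:
    return EXPRESSIONS[int(clf.predict(au_features.reshape(1, -1))[0])]
```

In a real deployment, `X` would come from the HOG and landmark-displacement features described below, and only the predicted label would leave the device.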
Further, the AU features include appearance features and geometric features.
Further, the appearance feature extraction includes: for a facial image, the face region is separated and resized to a fixed image size, and the histogram of oriented gradients (HOG) of the image is extracted as the appearance feature.
Further, the geometric feature extraction comprises: selecting a facial image with a neutral expression and marking 68 feature points on it using a CE-CLM model; selecting the other basic facial expressions of the same person and marking 68 feature points on each of those facial images with the CE-CLM model; then computing, for each expression, the comparison between the facial landmarks of the basic-expression image and those of the neutral-expression image, which yields a feature set for each expressive facial image; this feature set is taken as the geometric feature.
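A sketch of this geometric-feature step. The CE-CLM landmark detector itself is not reimplemented; the 68 (x, y) landmark arrays are assumed inputs. The feature set is the per-landmark displacement of an expressive face relative to the same person's neutral face; the translation/scale normalization is an assumed preprocessing step, not specified in the patent.

```python
import numpy as np

def geometric_features(neutral: np.ndarray, expressive: np.ndarray) -> np.ndarray:
    """Both inputs: (68, 2) landmark arrays from a CE-CLM-style detector."""
    assert neutral.shape == expressive.shape == (68, 2)

    # Remove translation and scale so only expression-driven deformation
    # remains (assumed normalization, common in landmark-based pipelines).
    def norm(p: np.ndarray) -> np.ndarray:
        c = p - p.mean(axis=0)
        return c / np.linalg.norm(c)

    return (norm(expressive) - norm(neutral)).ravel()  # 136-D feature vector

rng = np.random.default_rng(2)
neutral = rng.random((68, 2))
smile = neutral + rng.normal(scale=0.01, size=(68, 2))  # simulated deformation
geo = geometric_features(neutral, smile)
```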
Further, in the facial image expression recognition step: the AU feature values are preprocessed by normalizing them and squaring the result, and the result is input into the SVM classifier.
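This preprocessing (normalize, then square) can be sketched as below. The patent does not state which normalization is used; min-max scaling to [0, 1] is an assumption for illustration.

```python
import numpy as np

def preprocess_au(au: np.ndarray) -> np.ndarray:
    # Min-max scale to [0, 1] (assumed normalization; the patent does not
    # specify the formula), then square each value.
    lo, hi = au.min(), au.max()
    if hi > lo:
        scaled = (au - lo) / (hi - lo)
    else:
        scaled = np.zeros_like(au, dtype=float)
    return scaled ** 2  # squaring emphasizes strongly activated AUs

au_raw = np.array([0.0, 1.0, 2.0, 4.0])
au_in = preprocess_au(au_raw)  # -> [0.0, 0.0625, 0.25, 1.0]
```

The resulting vector `au_in` is what would be fed to the SVM classifier in step S3.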
Further, the edge device employs a Jetson TX2.
The invention has the beneficial effects that:
the method of the present invention has been experimented with JAFFE and CK + data sets, and according to our observations of image recognition and search services, pure image data transmission requires hundreds of milliseconds in addition to the time required to establish a connection. In the process of edge computing, too much data exchange is not carried out between the cloud server and the facial expression recognition based on the edge computing, so that too much network bandwidth is not required to be occupied.
The data processing model of edge computing ensures shorter response times and higher reliability, and greatly reduces transmission bandwidth and power consumption on the device side.
The invention focuses on analyzing data at or near the location where it is generated. Data analysis completed at the network edge can gather more client information, shorten response times, save network bandwidth, and reduce the peak workload of the cloud; it does not degrade the quality of the transmitted images and can quickly and correctly predict facial expressions.
Additional features and advantages of the invention will be set forth in the detailed description which follows.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent by describing in more detail exemplary embodiments thereof with reference to the attached drawings, in which like reference numerals generally represent like parts throughout.
FIG. 1 is a flow chart of the present invention for extracting facial action units.
Detailed Description
Preferred embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein.
The invention provides a facial expression recognition method based on edge computing, which is executed on an edge device and comprises the following steps:
S1, facial image sample collection: for a plurality of individuals, acquire eight basic facial expressions, including the neutral expression, as facial image training samples;
S2, facial image sample training: extract the action unit (AU) features of each facial image sample, train SVM (support vector machine) classifiers corresponding to the basic expressions, and deploy the SVM classifiers on the edge device;
S3, facial image expression recognition: obtain a facial image to be recognized and send it to the edge device; on the edge device, process the facial image, compute its AU features, and input the AU features into the SVM classifier to obtain the recognized facial expression.
The method maps facial expressions using multiple classifiers and then classifies expressions according to the AU values. For facial geometric features, a local detector models the appearance of each facial part separately, and a shape model performs constrained optimization; CE-CLM is an instance of the CLM family. We mark 68 landmarks on the face using the CE-CLM model, which clearly reflect expression changes in the various parts of the face (eyes, eyebrows, mouth, nose, etc.). The geometric features of a facial image can then be computed by comparing its landmarks with the facial landmarks of the neutral expression.
Secondly, the data processing model of edge computing ensures shorter response times and higher reliability: if most data can be processed on the edge device without being uploaded to a cloud computing center, transmission bandwidth and device-side power consumption are greatly reduced. Our invention provides edge analytics, focusing on data analysis at or near the location where the data is generated; analysis completed at the network edge can gather more client information, shorten response times, save network bandwidth, and reduce the peak workload of the cloud.
In the specific implementation:
as shown in fig. 1, which details the procedure for extracting facial action units in the present invention, AUs are first mapped to 8 facial expressions using a facial action unit detection method. For appearance features, we first align the face image to separate out the face region, and then describe appearance changes with the histogram of oriented gradients (HOG) method. After the geometric and appearance features are obtained, support vector regression is used for feature fusion, yielding the AU features.
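The fusion step described above can be sketched as follows: geometric and appearance features are concatenated and a support vector regressor predicts each AU's intensity, producing the AU feature vector. This uses toy data throughout; the concatenation fusion and the count of 17 AUs are assumptions for illustration (the patent does not state either), and real regressors would be trained on annotated AU intensities.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
geo = rng.random((50, 136))    # geometric features per training image
app = rng.random((50, 576))    # appearance (HOG) features per image
fused = np.hstack([geo, app])  # simple concatenation fusion (assumption)

n_aus = 17                          # assumed AU count, for illustration
targets = rng.random((50, n_aus))   # ground-truth AU intensities (toy values)

# One support vector regressor per action unit.
regressors = [SVR().fit(fused, targets[:, k]) for k in range(n_aus)]

def au_vector(sample: np.ndarray) -> np.ndarray:
    """One fused feature row -> predicted intensities for all AUs."""
    return np.array([r.predict(sample.reshape(1, -1))[0] for r in regressors])
```

The output of `au_vector` is the AU feature vector that the preprocessing and SVM classification steps then consume.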
Together with the SVM classifier, a convolutional network model is used: based on the estimated landmark positions, regions of interest are extracted for the AU features; small regions pass through a contrast-normalization convolutional layer, with normalization performed before the correlation operation; the response map is then fed into a convolutional layer with ReLU units; the input values for the various AUs pass through a hidden layer and a fully connected layer; finally, softmax produces the output, and one expression label is selected as the final result.
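The final step above, softmax over per-expression responses followed by selecting one label, can be illustrated in isolation. The scores and the label set are made-up values for the sketch, not outputs of the patented system.

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

# Assumed label set (the patent names only "eight basic expressions").
labels = ["neutral", "happy", "sad", "surprise",
          "fear", "disgust", "anger", "contempt"]

scores = np.array([0.1, 2.3, 0.4, 1.1, 0.2, 0.0, 0.3, 0.5])  # illustrative
probs = softmax(scores)
final = labels[int(np.argmax(probs))]  # -> "happy"
```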
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments.

Claims (6)

1. A facial expression recognition method based on edge computing, the method being performed on an edge device and comprising the following steps:
S1, facial image sample collection: for a plurality of individuals, acquiring eight basic facial expressions, including the neutral expression, as facial image training samples;
S2, facial image sample training: extracting the action unit (AU) features of each facial image sample, training SVM (support vector machine) classifiers corresponding to the basic expressions, and deploying the SVM classifiers on the edge device;
S3, facial image expression recognition: obtaining a facial image to be recognized and sending it to the edge device; on the edge device, processing the facial image, computing its AU features, and inputting the AU features into the SVM classifier to obtain the recognized facial expression.
2. The method of claim 1, wherein the AU features include appearance features and geometric features.
3. The facial expression recognition method based on edge computing as claimed in claim 2, wherein the appearance feature extraction includes: for a facial image, separating the face region, resizing it to a fixed image size, and extracting the histogram of oriented gradients (HOG) of the image as the appearance feature.
4. The method of claim 2, wherein the geometric feature extraction comprises: selecting a facial image with a neutral expression and labeling 68 feature points on it using a CE-CLM model; selecting the other basic facial expressions of the individual and labeling 68 feature points on each basic facial image using the CE-CLM model; then computing, by comparing the facial landmarks of each basic-expression image with those of the neutral-expression image, a feature set for each expressive facial image; and using the feature set of each expressive facial image as the geometric feature.
5. The facial expression recognition method based on edge computing as claimed in claim 1, wherein the facial image expression recognition step comprises: preprocessing the AU feature values by normalizing them and squaring the result, and inputting the result into the SVM classifier.
6. The method of claim 1, wherein the edge device is a Jetson TX2.
CN202010230483.XA 2020-03-27 2020-03-27 Facial expression recognition method based on edge calculation Pending CN111414884A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010230483.XA CN111414884A (en) 2020-03-27 2020-03-27 Facial expression recognition method based on edge calculation


Publications (1)

Publication Number Publication Date
CN111414884A true CN111414884A (en) 2020-07-14

Family

ID=71493361

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010230483.XA Pending CN111414884A (en) 2020-03-27 2020-03-27 Facial expression recognition method based on edge calculation

Country Status (1)

Country Link
CN (1) CN111414884A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107909057A (en) * 2017-11-30 2018-04-13 广东欧珀移动通信有限公司 Image processing method, device, electronic equipment and computer-readable recording medium
CN108830262A (en) * 2018-07-25 2018-11-16 上海电力学院 Multi-angle human face expression recognition method under natural conditions
CN109976726A (en) * 2019-03-20 2019-07-05 深圳市赛梅斯凯科技有限公司 Vehicle-mounted Edge intelligence computing architecture, method, system and storage medium
CN110472512A (en) * 2019-07-19 2019-11-19 河海大学 A kind of face state identification method and its device based on deep learning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TIANTIAN QIAN et al.: "Facial Expression Recognition Based on Edge Computing" *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200714