CN110008819A - A facial expression recognition method based on graph convolutional neural networks - Google Patents


Info

Publication number: CN110008819A
Authority: CN (China)
Prior art keywords: human face, directed graph, image, face expression, expression
Legal status: Granted, currently Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN201910091261.1A
Other languages: Chinese (zh)
Other versions: CN110008819B (en)
Inventors: 柴利, 吴晨晖, 杨君, 盛玉霞
Current and original assignee: Wuhan University of Science and Engineering (WUSE) (the listed assignees may be inaccurate)
Application filed by Wuhan University of Science and Engineering (WUSE); priority to CN201910091261.1A
Publication of CN110008819A; application granted; publication of CN110008819B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174: Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a facial expression recognition method based on graph convolutional neural networks. The technical solution is as follows: first, a facial expression image set Image and a facial expression label set Label are obtained; then, the facial expression undirected graph set Graph is derived from the facial expression image set Image; next, a facial expression recognition classifier is established from the undirected graph set Graph and the label set Label; finally, a facial expression image to be recognized is input, converted to grayscale, the face region is extracted, the face region is normalized to m × m, the normalized face image is converted into an undirected graph, and the undirected graph is input to the facial expression recognition classifier to obtain the recognition result. The present invention can extract a relatively broad range of image features and achieves a high facial expression recognition rate.

Description

A facial expression recognition method based on graph convolutional neural networks
Technical field
The invention belongs to the technical field of facial expression recognition, and more particularly relates to a facial expression recognition method based on graph convolutional neural networks.
Background technique
Facial expressions are not only a physiological manifestation of human psychological activity but also play an indispensable role in interpersonal communication. Facial expression recognition methods fall into two classes: recognition methods based on machine learning and recognition methods based on deep learning.
Machine-learning-based recognition methods are vulnerable to the quality of the extracted features. Deep-learning-based recognition methods combine feature extraction and expression classification, overcoming this shortcoming of the machine-learning-based methods.
Among the deep learning methods, convolutional neural networks are widely used to build expression classification models. Matsugu et al. (Matsugu M, Mori K, Mitari Y, et al. Subject independent facial expression recognition with robust face detection using a convolutional neural network [J]. Neural Networks, 2003, 16(5-6): 555-559) used a convolutional neural network to build a facial expression classifier. Although this method achieves a relatively high recognition rate, a classifier built with a convolutional neural network can only extract information from the 8-neighborhood of each pixel in the image, and its ability to extract correlated information between pixels far apart is poor, which degrades the classifier's facial expression recognition accuracy.
Summary of the invention
The present invention aims to overcome the above defect of the prior art, and its object is to provide a facial expression recognition method based on graph convolutional neural networks; this method can extract correlated information between relatively distant pixels and achieves a high facial expression recognition rate.
To achieve the above object, the technical solution adopted by the present invention comprises the following concrete steps:
Step 1: obtain a set of m × m grayscale facial expression images to form the facial expression image set Image, and collect the facial expression label of each grayscale image in Image into the facial expression label set Label. The facial expression label of each grayscale image is one of happiness, surprise, sadness, anger, disgust and fear.
Step 2: convert the grayscale image of each facial expression in Image into an undirected graph, forming the facial expression undirected graph set Graph.
The concrete steps for converting a grayscale facial expression image into an undirected graph are:
Step 2.1: take each pixel of the grayscale image as one vertex of the undirected graph.
Step 2.2: set the distance between two adjacent pixels of the grayscale image to 1.
Step 2.3: take each pixel of the grayscale image in turn as the central pixel, and perform the following operations:
First, the pixels whose Euclidean distance from the central pixel is less than or equal to 2 form the fixed sampling set Fixed.
Then, the pixels whose Euclidean distance from the central pixel is greater than 2 and less than 4 form the random sampling set Random.
Next, connect the vertex corresponding to the central pixel with the vertices corresponding to all pixels in the fixed sampling set Fixed, and connect the vertex corresponding to the central pixel with the vertices corresponding to p pixels randomly selected from the random sampling set Random.
Step 2.4: the entry W_ij in row i, column j of the adjacency matrix W of the resulting undirected graph is given by formula (1):
In formula (1): s_ij denotes the Euclidean distance between the i-th pixel v_i and the j-th pixel v_j of the grayscale facial expression image.
The adjacency matrix of the undirected graph satisfies W ∈ R^{n×n}.
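The graph construction of steps 2.1 to 2.4 can be sketched in numpy as follows. The 1/s_ij edge weight used here is an illustrative assumption, since the explicit form of formula (1), a function of the pixel distance s_ij, is not reproduced in this text; the function name is also illustrative.

```python
import numpy as np

def image_to_graph(img, p=2, rng=None):
    """Convert an m x m grayscale image into an undirected graph:
    each pixel is a vertex; each pixel is connected to all 'fixed'
    neighbors (Euclidean distance <= 2) and to p randomly chosen
    'random' neighbors (distance in (2, 4)), per steps 2.1-2.3.
    Edge weights use 1/s_ij as an assumed instance of formula (1)."""
    rng = np.random.default_rng() if rng is None else rng
    m = img.shape[0]
    W = np.zeros((m * m, m * m))
    for i in range(m * m):
        r, c = divmod(i, m)
        fixed, random_set = [], []
        for dr in range(-3, 4):
            for dc in range(-3, 4):
                rr, cc = r + dr, c + dc
                if (dr, dc) == (0, 0) or not (0 <= rr < m and 0 <= cc < m):
                    continue
                d = np.hypot(dr, dc)  # distance between adjacent pixels is 1
                if d <= 2:
                    fixed.append((rr * m + cc, d))
                elif d < 4:
                    random_set.append((rr * m + cc, d))
        if random_set and p > 0:
            idx = rng.choice(len(random_set), size=min(p, len(random_set)),
                             replace=False)
            picked = [random_set[k] for k in idx]
        else:
            picked = []
        for j, d in fixed + picked:
            W[i, j] = W[j, i] = 1.0 / d  # assumed distance-based weight
    return W
```

The matrix is symmetric by construction, matching an undirected graph with W ∈ R^{n×n}, n = m².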
Step 3: establish the facial expression recognition classifier from the facial expression undirected graph set Graph and the facial expression label set Label:
Step 3.1: the graph convolutional neural network contains 6 graph convolutional layers, one fully connected layer and one softmax layer; each graph convolutional layer contains one graph signal filtering layer and one graph coarsening layer. Determine the number of input undirected graphs, the number of output undirected graphs and the filter size of each graph convolutional layer, and determine the number of nodes Mf of the fully connected layer. The softmax layer has 6 outputs, corresponding to the 6 basic expressions.
Step 3.2: the trainable parameters θ1 of the 1st graph convolutional layer, θ2 of the 2nd, θ3 of the 3rd, θ4 of the 4th, θ5 of the 5th and θ6 of the 6th, together with the trainable parameters θf of the fully connected layer and θsoft of the softmax layer, form the trainable parameter set θ of the graph convolutional neural network.
Step 3.3: initialize the trainable parameter set θ of the graph convolutional neural network.
Step 3.4: feed the trainable parameter set θ and the facial expression undirected graph set Graph into the graph convolutional neural network and perform forward propagation to obtain the facial expression label prediction set Predict; Predict consists of the predicted facial expression label of each undirected graph in the facial expression undirected graph set Graph.
Step 3.5: compute the error between the facial expression label set Label and the facial expression label prediction set Predict; update the trainable parameter set θ by minimizing a mean squared error cost function, then return to step 3.4, until the error between Label and Predict is less than 0.1.
Step 3.6: the trained parameter set θ together with the graph convolutional neural network constitutes the facial expression recognition classifier.
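The stopping rule of step 3.5, iterating until the error between Label and Predict drops below 0.1, can be sketched with a mean squared error over one-hot label vectors. The one-hot encoding and the threshold interface are illustrative assumptions; the patent specifies only a mean squared error cost and the 0.1 threshold.

```python
import numpy as np

def mse(labels_onehot, predictions):
    """Mean squared error between one-hot expression labels (Label)
    and network outputs (Predict), as used in step 3.5."""
    return float(np.mean((labels_onehot - predictions) ** 2))

def should_stop(labels_onehot, predictions, tol=0.1):
    """True once the step 3.5 error falls below the 0.1 threshold."""
    return mse(labels_onehot, predictions) < tol
```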
Step 4: input a facial expression image to be recognized, convert it to grayscale and extract the face region; normalize the face region to m × m to obtain the normalized face image; then convert the normalized face image into an undirected graph, and finally input the undirected graph into the facial expression recognition classifier to obtain the recognition result of the facial expression.
The concrete steps of forward propagation through the graph convolutional neural network are:
Step 1: input the facial expression undirected graph set Graph into the graph convolutional neural network. The input-output relation of the s-th graph convolutional layer is given by formulas (2) and (3), a polynomial graph filter followed by graph coarsening:
x̃_s,j = Σ_{i=1}^{M_in} Σ_{k=0}^{K-1} θ_s,i,j^(k) L^k x_s,i  (2)
y_s,j = F(x̃_s,j)  (3)
In formula (2) and formula (3):
x_s,i denotes the i-th undirected graph input to the s-th graph convolutional layer;
y_s,j denotes the j-th undirected graph output by the s-th graph convolutional layer;
x̃_s,j denotes the filtered signal before coarsening;
M_in denotes the number of undirected graphs input to the s-th graph convolutional layer;
K denotes the filter size of the s-th graph convolutional layer;
L denotes the Laplacian matrix of the undirected graphs input to the s-th graph convolutional layer;
θ_s,i,j denotes the trainable parameter vector used when converting the i-th input undirected graph into the j-th output undirected graph in the s-th graph convolutional layer;
θ_s,i,j^(k) denotes the k-th entry of the trainable parameter vector θ_s,i,j;
F(·) denotes the graph coarsening operation applied to the graph in parentheses, using a greedy-algorithm-based graph coarsening method.
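The filtering inside each graph convolutional layer applies a polynomial in the graph Laplacian L, with trainable coefficients θ, to each input signal. A minimal numpy sketch of one layer's filtering, under the assumption of a degree-(K-1) polynomial filter consistent with the variable definitions above; the coarsening F(·) is omitted, and plain matrix powers are used instead of an optimized Chebyshev recursion:

```python
import numpy as np

def graph_conv_layer(X, L, theta):
    """Polynomial graph filtering for one layer:
    output j = sum over inputs i of sum_{k=0}^{K-1} theta[i,j,k] * L^k @ x_i.
    X:     (M_in, n)        input graph signals
    L:     (n, n)           graph Laplacian
    theta: (M_in, M_out, K) trainable filter coefficients
    Returns (M_out, n) filtered signals (coarsening step omitted)."""
    M_in, M_out, K = theta.shape
    n = X.shape[1]
    powers = np.empty((M_in, K, n))
    for i in range(M_in):
        v = X[i].astype(float)
        for k in range(K):
            powers[i, k] = v   # L^k applied to input signal i
            v = L @ v
    return np.einsum('ijk,ikn->jn', theta, powers)
```

With theta selecting only the k = 0 term the layer is the identity; selecting only k = 1 applies L once, which is how "filter size" K controls how far information propagates over the graph.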
Step 2: arrange the vertex values of the undirected graphs obtained after the 6 graph convolutional layers into a column vector x_fin, and input x_fin into the fully connected layer. The input and output of the fully connected layer are related by:
x_fout = f_ReLU(θ_f x_fin)  (4)
In formula (4):
x_fout denotes the output vector of the fully connected layer;
θ_f denotes the trainable parameters of the fully connected layer;
f_ReLU(·) denotes the ReLU activation function.
Step 3: input the output x_fout of the fully connected layer into the softmax layer. The intermediate vector x_softmid of the softmax layer and the i-th entry x_softout,i of its output vector x_softout are:
x_softmid = f_ReLU(θ_soft x_fout)  (5)
x_softout,i = e^(x_softmid,i) / Σ_{j=1}^{6} e^(x_softmid,j)  (6)
In formulas (5)-(6):
θ_soft denotes the trainable parameters of the softmax layer;
x_softmid denotes the intermediate vector of the softmax layer;
x_softmid,i denotes the i-th entry of the intermediate vector x_softmid;
x_softout,i denotes the i-th entry of the output vector x_softout.
Step 4: the entries of the softmax layer's output vector x_softout represent the likelihoods of the 6 basic expressions; the basic expression corresponding to the largest entry of x_softout is taken as the facial expression label of the undirected graph.
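Steps 3 and 4 of the forward pass, a ReLU-activated linear map into a softmax over the 6 basic expressions followed by an arg-max, can be sketched as follows; the function names and the expression ordering in the list are illustrative.

```python
import numpy as np

EXPRESSIONS = ["happiness", "surprise", "sadness", "anger", "disgust", "fear"]

def softmax_layer(x_fout, theta_soft):
    """Formula (5): ReLU-activated linear map; then a softmax
    over its entries, normalizing to 6 expression likelihoods."""
    x_mid = np.maximum(theta_soft @ x_fout, 0.0)
    e = np.exp(x_mid - x_mid.max())  # subtract max for numerical stability
    return e / e.sum()

def predict_expression(x_fout, theta_soft):
    """Step 4: the basic expression with the largest softmax output."""
    return EXPRESSIONS[int(np.argmax(softmax_layer(x_fout, theta_soft)))]
```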
Owing to the above technical scheme, the present invention has the following beneficial effects compared with the prior art:
During the conversion of a face image into an undirected graph, the present invention combines fixed sampling with random sampling. Fixed sampling establishes connections between nearby pixels, i.e. the pixels of the 8-neighborhood, while random sampling establishes connections between more distant pixels. Combining the two gives every pixel in the image connections to both nearby and distant pixels, so the resulting graph convolutional network can extract correlated information between distant pixels as well as between nearby pixels.
By using a graph convolutional neural network as the facial expression classifier, the present invention better extracts correlated information between distant pixels and provides more effective facial expression features to the classifier, thereby achieving a higher facial expression recognition rate.
Therefore, the present invention can extract correlated information between relatively distant pixels and achieves a high facial expression recognition rate.
Brief description of the drawings
Fig. 1 is a facial expression image to be recognized;
Fig. 2 is the facial expression image of Fig. 1 after the normalization of the present invention.
Detailed description of the embodiments
The present invention is further described below with reference to the accompanying drawings and specific embodiments, which do not limit its scope of protection.
Embodiment 1
A facial expression recognition method based on graph convolutional neural networks. The concrete steps of this method are:
Step 1: obtain a set of 64 × 64 grayscale facial expression images to form the facial expression image set Image, and collect the facial expression label of each grayscale image in Image into the facial expression label set Label. The facial expression label of each grayscale image is one of happiness, surprise, sadness, anger, disgust and fear.
Step 2: convert the grayscale image of each facial expression in Image into an undirected graph, forming the facial expression undirected graph set Graph.
The concrete steps for converting a grayscale facial expression image into an undirected graph are:
Step 2.1: take each pixel of the grayscale image as one vertex of the undirected graph.
Step 2.2: set the distance between two adjacent pixels of the grayscale image to 1.
Step 2.3: take each pixel of the grayscale image in turn as the central pixel, and perform the following operations:
First, the pixels whose Euclidean distance from the central pixel is less than or equal to 2 form the fixed sampling set Fixed.
Then, the pixels whose Euclidean distance from the central pixel is greater than 2 and less than 4 form the random sampling set Random.
Next, connect the vertex corresponding to the central pixel with the vertices corresponding to all pixels in the fixed sampling set Fixed, and connect the vertex corresponding to the central pixel with the vertices corresponding to p pixels randomly selected from the random sampling set Random.
Step 2.4: the entry W_ij in row i, column j of the adjacency matrix W of the resulting undirected graph is given by formula (1):
In formula (1): s_ij denotes the Euclidean distance between the i-th pixel v_i and the j-th pixel v_j of the grayscale facial expression image.
The adjacency matrix of the undirected graph satisfies W ∈ R^{n×n}.
Step 3: establish the facial expression recognition classifier from the facial expression undirected graph set Graph and the facial expression label set Label:
Step 3.1: the graph convolutional neural network contains 6 graph convolutional layers, one fully connected layer and one softmax layer. Each graph convolutional layer contains one graph signal filtering layer and one graph coarsening layer. The first graph convolutional layer has 1 input undirected graph, 32 output undirected graphs and filter size 9; the second has 32 inputs, 32 outputs and filter size 9; the third has 32 inputs, 64 outputs and filter size 6; the fourth has 64 inputs, 64 outputs and filter size 6; the fifth has 64 inputs, 128 outputs and filter size 4; the sixth has 128 inputs, 128 outputs and filter size 4. The fully connected layer has 512 nodes; the softmax layer has 6 outputs, corresponding to the 6 basic expressions.
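The layer configuration of Embodiment 1 can be transcribed as data for quick reference; the numbers are taken from the text above, while the variable and key names are illustrative.

```python
# Graph convolutional layer configuration of Embodiment 1:
# number of input graphs, number of output graphs, filter size K.
GCONV_LAYERS = [
    {"in": 1,   "out": 32,  "K": 9},
    {"in": 32,  "out": 32,  "K": 9},
    {"in": 32,  "out": 64,  "K": 6},
    {"in": 64,  "out": 64,  "K": 6},
    {"in": 64,  "out": 128, "K": 4},
    {"in": 128, "out": 128, "K": 4},
]
FC_NODES = 512        # nodes in the fully connected layer
SOFTMAX_OUTPUTS = 6   # one output per basic expression
```

Note that each layer's input count equals the previous layer's output count, so the six layers chain without reshaping.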
Step 3.2: the trainable parameters θ1 of the 1st graph convolutional layer, θ2 of the 2nd, θ3 of the 3rd, θ4 of the 4th, θ5 of the 5th and θ6 of the 6th, together with the trainable parameters θf of the fully connected layer and θsoft of the softmax layer, form the trainable parameter set θ of the graph convolutional neural network.
Step 3.3: initialize the trainable parameter set θ of the graph convolutional neural network.
Step 3.4: feed the trainable parameter set θ and the facial expression undirected graph set Graph into the graph convolutional neural network and perform forward propagation to obtain the facial expression label prediction set Predict; Predict consists of the predicted facial expression label of each undirected graph in the facial expression undirected graph set Graph.
Step 3.5: compute the error between the facial expression label set Label and the facial expression label prediction set Predict; update the trainable parameter set θ by minimizing a mean squared error cost function, then return to step 3.4, until the error between Label and Predict is less than 0.1.
Step 3.6: the trained parameter set θ together with the graph convolutional neural network constitutes the facial expression recognition classifier.
Step 4: input the facial expression image to be recognized shown in Fig. 1, convert it to grayscale and extract the face region; normalize the face region to 64 × 64 to obtain the normalized face image shown in Fig. 2; then convert the normalized face image into an undirected graph and input the undirected graph into the facial expression recognition classifier. The recognition result obtained is surprise.
The concrete steps of forward propagation through the graph convolutional neural network are:
Step 1: input the facial expression undirected graph set Graph into the graph convolutional neural network. The input-output relation of the s-th graph convolutional layer is given by formulas (2) and (3), a polynomial graph filter followed by graph coarsening:
x̃_s,j = Σ_{i=1}^{M_in} Σ_{k=0}^{K-1} θ_s,i,j^(k) L^k x_s,i  (2)
y_s,j = F(x̃_s,j)  (3)
In formula (2) and formula (3):
x_s,i denotes the i-th undirected graph input to the s-th graph convolutional layer;
y_s,j denotes the j-th undirected graph output by the s-th graph convolutional layer;
x̃_s,j denotes the filtered signal before coarsening;
M_in denotes the number of undirected graphs input to the s-th graph convolutional layer;
K denotes the filter size of the s-th graph convolutional layer;
L denotes the Laplacian matrix of the undirected graphs input to the s-th graph convolutional layer;
θ_s,i,j denotes the trainable parameter vector used when converting the i-th input undirected graph into the j-th output undirected graph in the s-th graph convolutional layer;
θ_s,i,j^(k) denotes the k-th entry of the trainable parameter vector θ_s,i,j;
F(·) denotes the graph coarsening operation applied to the graph in parentheses, using a greedy-algorithm-based graph coarsening method.
Step 2: arrange the vertex values of the undirected graphs obtained after the 6 graph convolutional layers into a column vector x_fin, and input x_fin into the fully connected layer. The input and output of the fully connected layer are related by:
x_fout = f_ReLU(θ_f x_fin)  (4)
In formula (4):
x_fout denotes the output vector of the fully connected layer;
θ_f denotes the trainable parameters of the fully connected layer;
f_ReLU(·) denotes the ReLU activation function.
Step 3: input the output x_fout of the fully connected layer into the softmax layer. The intermediate vector x_softmid of the softmax layer and the i-th entry x_softout,i of its output vector x_softout are:
x_softmid = f_ReLU(θ_soft x_fout)  (5)
x_softout,i = e^(x_softmid,i) / Σ_{j=1}^{6} e^(x_softmid,j)  (6)
In formulas (5)-(6):
θ_soft denotes the trainable parameters of the softmax layer;
x_softmid denotes the intermediate vector of the softmax layer;
x_softmid,i denotes the i-th entry of the intermediate vector x_softmid;
x_softout,i denotes the i-th entry of the output vector x_softout.
Step 4: the entries of the softmax layer's output vector x_softout represent the likelihoods of the 6 basic expressions; the basic expression corresponding to the largest entry of x_softout is taken as the facial expression label of the undirected graph.

Claims (2)

1. A facial expression recognition method based on graph convolutional neural networks, characterized in that the concrete steps of the facial expression recognition method are:
Step 1: obtain a set of m × m grayscale facial expression images to form the facial expression image set Image, and collect the facial expression label of each grayscale image in Image into the facial expression label set Label; the facial expression label of each grayscale image is one of happiness, surprise, sadness, anger, disgust and fear;
Step 2: convert the grayscale image of each facial expression in Image into an undirected graph, forming the facial expression undirected graph set Graph;
The concrete steps for converting a grayscale facial expression image into an undirected graph are:
Step 2.1: take each pixel of the grayscale image as one vertex of the undirected graph;
Step 2.2: set the distance between two adjacent pixels of the grayscale image to 1;
Step 2.3: take each pixel of the grayscale image in turn as the central pixel, and perform the following operations:
First, the pixels whose Euclidean distance from the central pixel is less than or equal to 2 form the fixed sampling set Fixed;
Then, the pixels whose Euclidean distance from the central pixel is greater than 2 and less than 4 form the random sampling set Random;
Next, connect the vertex corresponding to the central pixel with the vertices corresponding to all pixels in the fixed sampling set Fixed, and connect the vertex corresponding to the central pixel with the vertices corresponding to p pixels randomly selected from the random sampling set Random;
Step 2.4: the entry W_ij in row i, column j of the adjacency matrix W of the resulting undirected graph is given by formula (1):
In formula (1): s_ij denotes the Euclidean distance between the i-th pixel v_i and the j-th pixel v_j of the grayscale facial expression image;
The adjacency matrix of the undirected graph satisfies W ∈ R^{n×n};
Step 3: establish the facial expression recognition classifier from the facial expression undirected graph set Graph and the facial expression label set Label:
Step 3.1: the graph convolutional neural network contains 6 graph convolutional layers, one fully connected layer and one softmax layer; each graph convolutional layer contains one graph signal filtering layer and one graph coarsening layer; determine the number of input undirected graphs, the number of output undirected graphs and the filter size of each graph convolutional layer, and determine the number of nodes Mf of the fully connected layer; the softmax layer has 6 outputs, corresponding to the 6 basic expressions;
Step 3.2: the trainable parameters θ1 of the 1st graph convolutional layer, θ2 of the 2nd, θ3 of the 3rd, θ4 of the 4th, θ5 of the 5th and θ6 of the 6th, together with the trainable parameters θf of the fully connected layer and θsoft of the softmax layer, form the trainable parameter set θ of the graph convolutional neural network;
Step 3.3: initialize the trainable parameter set θ of the graph convolutional neural network;
Step 3.4: feed the trainable parameter set θ and the facial expression undirected graph set Graph into the graph convolutional neural network and perform forward propagation to obtain the facial expression label prediction set Predict; Predict consists of the predicted facial expression label of each undirected graph in the facial expression undirected graph set Graph;
Step 3.5: compute the error between the facial expression label set Label and the facial expression label prediction set Predict; update the trainable parameter set θ by minimizing a mean squared error cost function, then return to step 3.4, until the error between Label and Predict is less than 0.1;
Step 3.6: the trained parameter set θ together with the graph convolutional neural network constitutes the facial expression recognition classifier;
Step 4: input a facial expression image to be recognized, convert it to grayscale and extract the face region; normalize the face region to m × m to obtain the normalized face image; then convert the normalized face image into an undirected graph, and finally input the undirected graph into the facial expression recognition classifier to obtain the recognition result of the facial expression.
2. the facial expression recognizing method according to claim 1 based on figure convolutional neural networks, it is characterised in that described The specific steps of figure convolutional neural networks progress propagated forward:
The undirected set of graphs Graph of human face expression is inputted figure convolutional neural networks by step 1, the input of s-th picture scroll lamination and defeated Relationship is out:
In formula (2) and formula (3):
x_{s,i} denotes the i-th undirected graph input to the s-th graph convolutional layer of the graph convolutional neural network,
y_{s,j} denotes the j-th undirected graph output by the s-th graph convolutional layer,
M_in denotes the number of undirected graphs input to the s-th graph convolutional layer,
K denotes the filter size of the s-th graph convolutional layer,
L denotes the Laplacian matrix of the undirected graphs input to the s-th graph convolutional layer,
θ_{s,i,j} denotes the trainable parameter vector used in the s-th graph convolutional layer to convert the i-th input undirected graph into the j-th output undirected graph,
θ_{s,i,j,k} denotes the k-th element of the trainable parameter vector θ_{s,i,j},
F(·) denotes the graph coarsening operation, based on a greedy algorithm, applied to the undirected graph inside the brackets;
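The filtering step of formulas (2)–(3) can be sketched as a polynomial in the graph Laplacian applied to each input signal, with one K-vector of trainable weights per input/output pair; this polynomial form is an assumption consistent with the symbols listed above, and the greedy coarsening F(·) is omitted:

```python
import numpy as np

def graph_conv(X, L, theta):
    """X: (M_in, n) input vertex signals; L: (n, n) graph Laplacian;
    theta: (M_in, M_out, K) trainable weights.
    Returns the (M_out, n) filtered signals before coarsening."""
    m_in, m_out, K = theta.shape
    n = X.shape[1]
    # Precompute L^0 x, L^1 x, ..., L^(K-1) x for every input signal.
    powers = np.empty((K, m_in, n))
    powers[0] = X
    for k in range(1, K):
        powers[k] = powers[k - 1] @ L.T   # row-wise application of L
    # Each output j is a weighted sum over inputs i and Laplacian powers k.
    Y = np.zeros((m_out, n))
    for j in range(m_out):
        for i in range(m_in):
            for k in range(K):
                Y[j] += theta[i, j, k] * powers[k, i]
    return Y
```

With theta[..., k] weighting L^k, the k = 0 term passes the input through unchanged and higher k terms mix values from progressively larger graph neighbourhoods.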
Step 2: arrange the vertex values of the undirected graphs obtained after the 6 graph convolutional layers into a column vector x_fin, and input x_fin into the fully connected layer; the input and output of the fully connected layer satisfy:

x_fout = f_ReLU(θ_f · x_fin)    (4)
In formula (4):
x_fout denotes the output vector of the fully connected layer,
θ_f denotes the trainable parameters of the fully connected layer,
f_ReLU(·) denotes the ReLU activation function;
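Formula (4) reduces to a single matrix-vector product followed by an elementwise ReLU, sketched here with θ_f as a plain weight matrix (any bias term is omitted, as in the formula):

```python
import numpy as np

def relu(v):
    """f_ReLU: elementwise max(v, 0)."""
    return np.maximum(v, 0.0)

def fully_connected(theta_f, x_fin):
    """x_fout = f_ReLU(theta_f @ x_fin), as in formula (4)."""
    return relu(theta_f @ x_fin)
```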
Step 3: input the output x_fout of the fully connected layer into the softmax layer; the intermediate vector x_softmid of the softmax layer and the i-th element x_softout,i of its output vector x_softout are:

x_softmid = f_ReLU(θ_soft · x_fout)    (5)

x_softout,i = exp(x_softmid,i) / Σ_j exp(x_softmid,j)    (6)

In formulas (5) and (6):
θ_soft denotes the trainable parameters of the softmax layer,
x_softmid denotes the intermediate vector of the softmax layer,
x_softmid,i denotes the i-th element of the intermediate vector x_softmid,
x_softout,i denotes the i-th element of the output vector x_softout of the softmax layer;
Step 4: the elements of the softmax layer's output vector x_softout respectively represent the likelihood of each of the 6 basic expressions; the basic expression corresponding to the largest element of x_softout is taken as the facial expression label of the undirected graph.
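Formulas (5)–(6) and the final decision rule of step 4 can be sketched as follows; the six expression names are illustrative placeholders (the claim only says "6 basic expressions"), and the max-subtraction inside the softmax is a standard numerical-stability device not stated in the formula:

```python
import numpy as np

# Placeholder names for the 6 basic expressions (assumption, not from the claim).
EXPRESSIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def softmax_layer(theta_soft, x_fout):
    """x_softmid = ReLU(theta_soft @ x_fout), then softmax, per (5)-(6)."""
    x_softmid = np.maximum(theta_soft @ x_fout, 0.0)
    e = np.exp(x_softmid - x_softmid.max())   # subtract max for stability
    return e / e.sum()

def predict_expression(theta_soft, x_fout):
    """Step 4: the basic expression with the largest probability wins."""
    return EXPRESSIONS[int(np.argmax(softmax_layer(theta_soft, x_fout)))]
```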
CN201910091261.1A 2019-01-30 2019-01-30 Facial expression recognition method based on graph convolution neural network Active CN110008819B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910091261.1A CN110008819B (en) 2019-01-30 2019-01-30 Facial expression recognition method based on graph convolution neural network

Publications (2)

Publication Number Publication Date
CN110008819A true CN110008819A (en) 2019-07-12
CN110008819B CN110008819B (en) 2022-11-18

Family

ID=67165589

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910091261.1A Active CN110008819B (en) 2019-01-30 2019-01-30 Facial expression recognition method based on graph convolution neural network

Country Status (1)

Country Link
CN (1) CN110008819B (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108256426A (en) * 2017-12-15 2018-07-06 安徽四创电子股份有限公司 A kind of facial expression recognizing method based on convolutional neural networks
CN108304826A (en) * 2018-03-01 2018-07-20 河海大学 Facial expression recognizing method based on convolutional neural networks
CN109033994A (en) * 2018-07-03 2018-12-18 辽宁工程技术大学 A kind of facial expression recognizing method based on convolutional neural networks

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111291212A (en) * 2020-01-24 2020-06-16 复旦大学 Zero sample sketch image retrieval method and system based on graph convolution neural network
CN111291212B (en) * 2020-01-24 2022-10-11 复旦大学 Zero sample sketch image retrieval method and system based on graph convolution neural network
CN111724289A (en) * 2020-06-24 2020-09-29 山东建筑大学 Environmental protection equipment identification method and system based on time sequence
CN112183314A (en) * 2020-09-27 2021-01-05 哈尔滨工业大学(深圳) Expression information acquisition device and expression identification method and system
CN112183314B (en) * 2020-09-27 2023-12-12 哈尔滨工业大学(深圳) Expression information acquisition device, expression recognition method and system
CN112801266A (en) * 2020-12-24 2021-05-14 武汉旷视金智科技有限公司 Neural network construction method, device, equipment and medium
CN112801266B (en) * 2020-12-24 2023-10-31 武汉旷视金智科技有限公司 Neural network construction method, device, equipment and medium
CN113255543A (en) * 2021-06-02 2021-08-13 西安电子科技大学 Facial expression recognition method based on graph convolution network
CN115686846A (en) * 2022-10-31 2023-02-03 重庆理工大学 Container cluster online deployment method for fusing graph neural network and reinforcement learning in edge computing
CN115686846B (en) * 2022-10-31 2023-05-02 重庆理工大学 Container cluster online deployment method integrating graph neural network and reinforcement learning in edge calculation

Also Published As

Publication number Publication date
CN110008819B (en) 2022-11-18

Similar Documents

Publication Publication Date Title
CN110008819A (en) A kind of facial expression recognizing method based on figure convolutional neural networks
CN111126472B (en) SSD (solid State disk) -based improved target detection method
CN104182772B (en) A kind of gesture identification method based on deep learning
CN104992223B (en) Intensive Population size estimation method based on deep learning
CN107657249A (en) Method, apparatus, storage medium and the processor that Analysis On Multi-scale Features pedestrian identifies again
CN109214441A (en) A kind of fine granularity model recognition system and method
CN107730458A (en) A kind of fuzzy facial reconstruction method and system based on production confrontation network
CN106960214A (en) Object identification method based on image
CN107451565B (en) Semi-supervised small sample deep learning image mode classification and identification method
CN106529499A (en) Fourier descriptor and gait energy image fusion feature-based gait identification method
CN110378208B (en) Behavior identification method based on deep residual error network
CN105139004A (en) Face expression identification method based on video sequences
CN108875907B (en) Fingerprint identification method and device based on deep learning
CN114582030B (en) Behavior recognition method based on service robot
CN106875007A (en) End-to-end deep neural network is remembered based on convolution shot and long term for voice fraud detection
CN111832484A (en) Loop detection method based on convolution perception hash algorithm
CN104298974A (en) Human body behavior recognition method based on depth video sequence
CN108009481A (en) A kind of training method and device of CNN models, face identification method and device
CN106874913A (en) A kind of vegetable detection method
CN107463881A (en) A kind of character image searching method based on depth enhancing study
CN107545243A (en) Yellow race's face identification method based on depth convolution model
CN116052218B (en) Pedestrian re-identification method
CN112036260A (en) Expression recognition method and system for multi-scale sub-block aggregation in natural environment
CN111898566B (en) Attitude estimation method, attitude estimation device, electronic equipment and storage medium
CN114764941A (en) Expression recognition method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant