CN106570474A - Micro expression recognition method based on 3D convolution neural network - Google Patents

Micro expression recognition method based on 3D convolution neural network

Info

Publication number
CN106570474A
CN106570474A (application CN201610954555.9A)
Authority
CN
China
Prior art keywords
feature map
layer
micro
convolutional layer
convolutional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610954555.9A
Other languages
Chinese (zh)
Other versions
CN106570474B (en)
Inventor
卢官明
杨成
闫静杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Post and Telecommunication University
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing Post and Telecommunication University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Post and Telecommunication University filed Critical Nanjing Post and Telecommunication University
Priority to CN201610954555.9A priority Critical patent/CN106570474B/en
Publication of CN106570474A publication Critical patent/CN106570474A/en
Application granted granted Critical
Publication of CN106570474B publication Critical patent/CN106570474B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133 Distances to prototypes
    • G06F18/24137 Distances to cluster centroïds
    • G06F18/2414 Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Abstract

The invention relates to a micro-expression recognition method based on a 3D convolutional neural network. With the constructed 3D convolutional neural network (3D-CNN) model, five classes of micro-expressions (happiness, disgust, repression, surprise and others) can be recognized effectively. The designed micro-expression recognition method is simple and efficient: the sample data do not need to be put through a pipeline of feature extraction, feature dimensionality reduction and classification, which greatly reduces the difficulty of preprocessing. Local receptive fields and weight sharing reduce the number of parameters the neural network has to train and thus greatly reduce the complexity of the algorithm. In addition, the down-sampling operations of the down-sampling layers strengthen the robustness of the network, so that a certain degree of image distortion can be tolerated.

Description

A micro-expression recognition method based on a 3D convolutional neural network
Technical field
The present invention relates to a micro-expression recognition method based on a 3D convolutional neural network, and belongs to the technical field of image processing and pattern recognition.
Background art
A micro-expression is a special kind of facial expression that reflects a person's genuine inner emotion. Micro-expressions are very difficult to spot with the naked eye: they last only about 1/25 s to 1/5 s and are of very low intensity, and some researchers put their duration below 450 ms. Because of these characteristics, micro-expression recognition has broad application prospects in fields such as lie detection, clinical diagnosis and judicial hearings.
In the early days, micro-expressions were studied only by psychological means, and attention was paid only to the recognition of individual micro-expressions. The first micro-expression training tool, METT (Micro Expression Training Tool), was created by the psychologist Ekman in 2002, but its peak recognition rate was only around 40%, far short of what commercial use requires.
With the rapid development of computer technology, micro-expressions are no longer studied only with psychological methods; computer vision and pattern recognition methods are increasingly used instead. In China, the team of Fu Xiaolan at the Institute of Psychology, Chinese Academy of Sciences, was among the first to study micro-expressions. In 2011 it obtained a general project of the National Natural Science Foundation of China (NSFC) on micro-expression research for automatic lie detection, became the main force of micro-expression research in China, and created the spontaneous micro-expression databases CASME and CASME II, making a great contribution to micro-expression recognition research. In 2007, Zhao Guoying et al. extended LBP to three dimensions and proposed the dynamic texture feature LBP-TOP, which computes LBP values on three orthogonal planes and accumulates them into histograms; it is efficient to compute and describes dynamic texture well. Since then the LBP-TOP operator has been widely used for micro-expression feature extraction and has yielded good classification results.
In the 1960s, while studying neurons in the cat's visual cortex that are sensitive to local regions and orientations, Hubel and Wiesel discovered a unique network structure that can effectively reduce the complexity of feedback neural networks, which later led to the proposal of convolutional neural networks (Convolutional Neural Networks, CNN). Owing to a series of shortcomings, however, CNNs long failed to develop much further. In 2006, Professor Hinton of the University of Toronto proposed the theory of deep learning, in which an artificial neural network with many hidden layers learns from samples autonomously; the features it obtains capture the essence of the samples and benefit the final classification. Since then deep learning has received extensive attention, and almost every high-tech company with big data has set up its own deep learning projects in order to seize the commanding heights of deep learning technology. In 2012, in the ImageNet image classification competition (currently the largest image recognition database), Hinton et al. achieved a striking result with a CNN, far better than previous methods (the top-5 error rate dropped from 25% to 17%). Because a CNN can learn model features directly from raw image data, avoiding complicated feature extraction and data reconstruction, it has been applied successfully to handwritten character recognition, face recognition, human eye detection, license plate character recognition, traffic signal recognition and many other tasks.
Although CNNs have been applied widely and powerfully in every field of pattern recognition and computer vision, they are limited to 2D inputs, which greatly restricts their application.
Summary of the invention
The technical problem to be solved by the present invention is to provide a micro-expression recognition method based on a 3D convolutional neural network which, in view of the complicated feature extraction and feature dimensionality reduction required by traditional micro-expression recognition, extracts features along both the spatial and the temporal dimension and performs 3D convolution so as to capture the motion information contained in multiple consecutive frames, thereby effectively improving micro-expression recognition performance.
To solve the above technical problem, the present invention adopts the following technical solution: the present invention designs a micro-expression recognition method based on a 3D convolutional neural network, comprising the following steps:
Step 001. Perform pixel-size normalization on every frame of the micro-expression image sequence to be recognized;
Step 002. For each frame of the micro-expression image sequence to be recognized, extract a gray channel feature map, a horizontal-gradient channel feature map, a vertical-gradient channel feature map, a horizontal optical-flow channel feature map and a vertical optical-flow channel feature map, thereby obtaining one feature map group corresponding to the micro-expression image sequence to be recognized;
Step 003. Apply N1 preset 3D convolution kernels of different kinds but identical size to the feature map group to perform convolution, obtaining N1 feature map groups, the 3D convolution kernels covering both the spatial and the temporal dimension;
Step 004. For each feature map in the N1 feature map groups, perform down-sampling (dimensionality reduction) with a first preset sampling window of equal horizontal and vertical proportion, and update the pixel size of each feature map in the N1 feature map groups;
Step 005. For each of the N1 feature map groups, apply N2 preset 3D convolution kernels of different kinds but identical size to perform convolution, obtaining N1*N2 feature map groups, the 3D convolution kernels covering both the spatial and the temporal dimension;
Step 006. For each feature map in the N1*N2 feature map groups, perform down-sampling with a second preset sampling window of equal horizontal and vertical proportion, and update the pixel size of each feature map in the N1*N2 feature map groups;
Step 007. For each of the N1*N2 feature map groups, apply one preset 2D convolution kernel whose size equals the pixel size of the feature maps to perform convolution in the spatial dimension, and update the N1*N2 feature map groups;
Step 008. Obtain the feature vectors corresponding to the N1*N2 feature map groups;
Step 009. Classify each feature vector with a neural network technique, select the neuron with the largest output value, and take the micro-expression class corresponding to that neuron as the micro-expression recognition result of the micro-expression image sequence to be recognized.
As a preferred technical solution of the present invention: the micro-expression recognition method is implemented with a 3D convolutional neural network model which comprises, from the input onwards, a hardwired layer H1, a convolutional layer C1, a down-sampling layer S1, a convolutional layer C2, a down-sampling layer S2, a convolutional layer C3, a fully connected layer and a classification layer; after step 001 has been executed, the 3D convolutional neural network model operates on each frame of the micro-expression image sequence to be recognized, wherein the hardwired layer H1 performs step 002, the convolutional layer C1 performs step 003, the down-sampling layer S1 performs step 004, the convolutional layer C2 performs step 005, the down-sampling layer S2 performs step 006, the convolutional layer C3 performs step 007, the fully connected layer performs step 008, and the classification layer performs step 009.
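For illustration only, the layer stack just described can be sketched as follows. This is a minimal sketch, assuming PyTorch (the patent names no framework), max-pooling for the down-sampling layers, ReLU and Softmax activations, and the concrete sizes of the embodiment given further below (7*7*3 and 7*6*3 3D kernels, 2*2 and 3*3 sampling windows, a 7*4 2D kernel, a 128-dimensional fully connected layer and 5 micro-expression classes); grouped convolutions stand in for the per-channel, per-group bookkeeping of the patent, and all class and variable names are illustrative, not taken from the patent.

```python
import torch
import torch.nn as nn


class MicroExpression3DCNN(nn.Module):
    """Illustrative layer stack: H1 is computed outside the network and fed in as
    five separate channel stacks (gray, x-gradient, y-gradient, x-flow, y-flow)."""

    def __init__(self, n1=2, n2=3, n_classes=5):
        super().__init__()
        # C1: n1 kernel types of size 3 (time) x 7 x 7 (space), one Conv3d per hardwired channel
        self.c1 = nn.ModuleList([nn.Conv3d(1, n1, (3, 7, 7)) for _ in range(5)])
        self.s1 = nn.MaxPool3d((1, 2, 2))        # S1: 2x2 spatial down-sampling
        # C2: n2 kernel types of size 3 x 7 x 6 applied inside each of the n1 groups
        self.c2 = nn.ModuleList([nn.Conv3d(n1, n1 * n2, (3, 7, 6), groups=n1)
                                 for _ in range(5)])
        self.s2 = nn.MaxPool3d((1, 3, 3))        # S2: 3x3 spatial down-sampling
        # C3: one 7x4 2D kernel per group, collapsing every 7x4 map to 1x1
        self.c3 = nn.ModuleList([nn.Conv3d(n1 * n2, n1 * n2, (1, 7, 4), groups=n1 * n2)
                                 for _ in range(5)])
        self.fc = nn.Linear(78, 128)             # full connection: 6 groups x 13 maps -> 128-dim
        self.out = nn.Linear(128, n_classes)     # classification (Softmax) layer

    def forward(self, channels):
        # channels: five tensors of shape (B, 1, T, 60, 40); T is 7 for the gray and
        # gradient channels and 6 for the two optical-flow channels
        feats = []
        for x, c1, c2, c3 in zip(channels, self.c1, self.c2, self.c3):
            x = torch.relu(self.s1(c1(x)))
            x = torch.relu(self.s2(c2(x)))
            x = torch.relu(c3(x))
            feats.append(x.flatten(1))
        v = torch.cat(feats, dim=1)              # 78 values per sample
        v = torch.relu(self.fc(v))
        return torch.softmax(self.out(v), dim=1) # class probabilities


if __name__ == "__main__":
    chans = [torch.randn(1, 1, 7, 60, 40) for _ in range(3)] \
          + [torch.randn(1, 1, 6, 60, 40) for _ in range(2)]
    print(MicroExpression3DCNN()(chans).shape)   # torch.Size([1, 5])
```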
As a preferred technical solution of the present invention: the model parameters of the 3D convolutional neural network model are trained with a preset model training method; after step 001 has been executed, the trained 3D convolutional neural network model performs steps 002 to 009 on each frame of the micro-expression image sequence to be recognized.
As a preferred technical solution of the present invention: the preset model training method is the stochastic diagonal Levenberg-Marquardt method, which is used to train the model parameters of the 3D convolutional neural network model.
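The stochastic diagonal Levenberg-Marquardt method itself is not spelled out in the patent. As a rough sketch, it assigns each weight its own learning rate ε/(μ + h_kk), where h_kk is a running estimate of the corresponding diagonal second derivative (LeCun et al., "Efficient BackProp"). The toy code below only illustrates that update rule; approximating h_kk with a running average of squared gradients is a Gauss-Newton-style simplification, and all hyper-parameter values are made up.

```python
# Rough sketch (not the patent's code) of a stochastic diagonal
# Levenberg-Marquardt update: every weight w_k gets its own learning rate
# eps / (mu + h_kk), where h_kk estimates the k-th diagonal curvature term.
# Here h_kk is approximated with a running average of squared gradients;
# eps, mu and gamma are illustrative values only.
import numpy as np

def sdlm_step(w, grad, h, eps=0.01, mu=0.02, gamma=0.05):
    """One update of weights w given gradient grad and running curvature h."""
    h = (1.0 - gamma) * h + gamma * grad ** 2      # running diagonal-curvature estimate
    w = w - (eps / (mu + h)) * grad                # per-parameter learning rates
    return w, h

# toy usage on the quadratic loss 0.5 * ||w - t||^2, whose gradient is (w - t)
w, h = np.zeros(4), np.zeros(4)
t = np.array([1.0, -2.0, 0.5, 3.0])
for _ in range(200):
    w, h = sdlm_step(w, w - t, h)
print(np.round(w, 3))   # approaches t
```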
As a preferred technical solution of the present invention: the convolutional layer C1 performs step 003 according to

v_{C1,j}^{(x,y,z)} = f( Σ_{p=0}^{P_{C1,j}-1} Σ_{q=0}^{Q_{C1,j}-1} Σ_{r=0}^{R_{C1,j}-1} w_{C1,j}^{(p,q,r)} v_{H1}^{(x+p)(y+q)(z+r)} + b_{C1,j} )

wherein v_{C1,j}^{(x,y,z)} denotes the value of a pixel (x, y, z) on the j-th feature map of convolutional layer C1, i.e. the output obtained after convolutional layer C1 performs 3D convolution on the j-th feature map of hardwired layer H1; w_{C1,j}^{(p,q,r)} denotes the 3D convolution kernel applied by convolutional layer C1 to the j-th feature map, with P_{C1,j}, Q_{C1,j} and R_{C1,j} denoting the size of that kernel; b_{C1,j} denotes the additive bias of the j-th feature map of convolutional layer C1; f(·) denotes the activation function; and v_{H1}^{(x+p)(y+q)(z+r)} denotes a point on the j-th feature map of hardwired layer H1;
the convolutional layer C2 performs step 005 according to

v_{C2,i}^{(x,y,z)} = f( Σ_{p=0}^{P_{C2,i}-1} Σ_{q=0}^{Q_{C2,i}-1} Σ_{r=0}^{R_{C2,i}-1} w_{C2,i}^{(p,q,r)} v_{S1}^{(x+p)(y+q)(z+r)} + b_{C2,i} )

wherein v_{C2,i}^{(x,y,z)} denotes the value of a pixel (x, y, z) on the i-th feature map of convolutional layer C2, i.e. the output obtained after convolutional layer C2 performs 3D convolution on the i-th feature map of down-sampling layer S1; w_{C2,i}^{(p,q,r)} denotes the 3D convolution kernel applied by convolutional layer C2 to the i-th feature map, with P_{C2,i}, Q_{C2,i} and R_{C2,i} denoting the size of that kernel; b_{C2,i} denotes the additive bias of the i-th feature map of convolutional layer C2; f(·) denotes the activation function; and v_{S1}^{(x+p)(y+q)(z+r)} denotes a point on the i-th feature map of down-sampling layer S1.
As a preferred technical solution of the present invention: the down-sampling layer S1 performs step 004 according to

v_{S1,m} = f( α_{S1,m} down1(v_{C1,m}) + β_{S1,m} )

wherein v_{S1,m} denotes the output obtained after down-sampling layer S1 down-samples the m-th feature map of convolutional layer C1, v_{C1,m} denotes the m-th feature map of convolutional layer C1, down1(·) denotes the down-sampling function of layer S1, α_{S1,m} and β_{S1,m} denote the multiplicative bias and the additive bias of the m-th feature map of down-sampling layer S1, and f(·) denotes the activation function;
the down-sampling layer S2 performs step 006 according to

v_{S2,n} = f( α_{S2,n} down2(v_{C2,n}) + β_{S2,n} )

wherein v_{S2,n} denotes the feature map obtained after down-sampling layer S2 down-samples the n-th feature map of convolutional layer C2, v_{C2,n} denotes the n-th feature map of convolutional layer C2, down2(·) denotes the down-sampling function of layer S2, α_{S2,n} and β_{S2,n} denote the multiplicative bias and the additive bias of the n-th feature map of down-sampling layer S2, and f(·) denotes the activation function.
As a preferred technical solution of the present invention: the convolutional layer C3 performs step 007 according to

v_{C3,k}^{(x,y)} = f( Σ_{p=0}^{P_{C3,k}-1} Σ_{q=0}^{Q_{C3,k}-1} w_{C3,k}^{(p,q)} v_{S2}^{(x+p)(y+q)} + b_{C3,k} )

wherein v_{C3,k}^{(x,y)} denotes the value of a pixel (x, y) on the k-th feature map of convolutional layer C3, i.e. the output obtained after convolutional layer C3 performs 2D convolution on the k-th feature map of down-sampling layer S2; w_{C3,k}^{(p,q)} denotes the 2D convolution kernel applied by convolutional layer C3 to the k-th feature map, with P_{C3,k} and Q_{C3,k} denoting the size of that kernel; b_{C3,k} denotes the additive bias of the k-th feature map of convolutional layer C3; f(·) denotes the activation function; and v_{S2}^{(x+p)(y+q)} denotes a point on the k-th feature map of down-sampling layer S2.
Compared with the prior art, the micro-expression recognition method based on a 3D convolutional neural network of the present invention, by adopting the above technical solution, has the following technical effects: based on the constructed 3D convolutional neural network (3D-CNN) model, five classes of micro-expressions (happiness, disgust, repression, surprise and others) can be recognized effectively; the designed micro-expression recognition method is simple and efficient, requiring no pipeline of feature extraction, feature dimensionality reduction and classification on the sample data, which greatly reduces the difficulty of preprocessing; local receptive fields and weight sharing reduce the number of parameters the neural network has to train and greatly reduce the complexity of the algorithm; moreover, in the designed micro-expression recognition method, the down-sampling operations of the down-sampling layers strengthen the robustness of the network, so that a certain degree of image distortion can be tolerated.
Description of the drawings
Fig. 1 is a schematic diagram of the micro-expression recognition method based on a 3D convolutional neural network designed by the present invention;
Fig. 2 is an architecture diagram of the 3D convolutional neural network used in the micro-expression recognition method based on a 3D convolutional neural network designed by the present invention.
Specific embodiment
Specific embodiments of the present invention are described below in further detail with reference to the accompanying drawings.
As shown in Fig. 1 and Fig. 2, the present invention designs a micro-expression recognition method based on a 3D convolutional neural network. In practical application, the micro-expression recognition method is implemented with a 3D convolutional neural network model (3D-CNN) which comprises, from the input onwards, a hardwired layer H1, a convolutional layer C1, a down-sampling layer S1, a convolutional layer C2, a down-sampling layer S2, a convolutional layer C3, a fully connected layer and a classification layer (Softmax layer). For steps 001 to 009 designed below, the model parameters of the 3D convolutional neural network model (3D-CNN) are first trained with the stochastic diagonal Levenberg-Marquardt method; then, after step 001 has been executed, the trained 3D convolutional neural network model (3D-CNN) performs steps 002 to 009 on each frame of the micro-expression image sequence to be recognized, wherein the hardwired layer H1 performs step 002, the convolutional layer C1 performs step 003, the down-sampling layer S1 performs step 004, the convolutional layer C2 performs step 005, the down-sampling layer S2 performs step 006, the convolutional layer C3 performs step 007, the fully connected layer performs step 008, and the classification layer (Softmax layer) performs step 009. In practical application, the method specifically comprises the following steps:
Step 001. Perform pixel-size normalization on every frame of the micro-expression image sequence to be recognized.
Step 002. The hardwired layer H1 extracts, for each frame of the micro-expression image sequence to be recognized, a gray channel feature map, a horizontal-gradient channel feature map, a vertical-gradient channel feature map, a horizontal optical-flow channel feature map and a vertical optical-flow channel feature map, thereby obtaining one feature map group corresponding to the micro-expression image sequence to be recognized.
Step 003. The convolutional layer C1 applies N1 preset 3D convolution kernels of different kinds but identical size to the feature map group to perform convolution, obtaining N1 feature map groups, the 3D convolution kernels covering both the spatial and the temporal dimension.
The above convolutional layer C1 performs step 003 according to

v_{C1,j}^{(x,y,z)} = f( Σ_{p=0}^{P_{C1,j}-1} Σ_{q=0}^{Q_{C1,j}-1} Σ_{r=0}^{R_{C1,j}-1} w_{C1,j}^{(p,q,r)} v_{H1}^{(x+p)(y+q)(z+r)} + b_{C1,j} )

wherein v_{C1,j}^{(x,y,z)} denotes the value of a pixel (x, y, z) on the j-th feature map of convolutional layer C1, i.e. the output obtained after convolutional layer C1 performs 3D convolution on the j-th feature map of hardwired layer H1; w_{C1,j}^{(p,q,r)} denotes the 3D convolution kernel applied by convolutional layer C1 to the j-th feature map, with P_{C1,j}, Q_{C1,j} and R_{C1,j} denoting the size of that kernel; b_{C1,j} denotes the additive bias of the j-th feature map of convolutional layer C1; f(·) denotes the activation function; and v_{H1}^{(x+p)(y+q)(z+r)} denotes a point on the j-th feature map of hardwired layer H1.
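For readers who prefer code, the formula above amounts to a "valid" 3D convolution followed by an additive bias and the activation f(·). The NumPy sketch below is illustrative only; the sigmoid stands in for the unspecified activation, and the loop-based implementation is written for clarity rather than speed.

```python
# Direct NumPy transcription of the C1 formula for a single 3D kernel:
# valid 3D convolution over (time, height, width), additive bias, activation.
import numpy as np

def conv3d_single(v, w, b):
    """v: input volume (T, H, W); w: 3D kernel (t, h, s); b: scalar bias."""
    t, h, s = w.shape
    T, H, W = v.shape
    out = np.empty((T - t + 1, H - h + 1, W - s + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[z, y, x] = np.sum(w * v[z:z + t, y:y + h, x:x + s]) + b
    return 1.0 / (1.0 + np.exp(-out))    # activation f(.), a sigmoid here (assumption)

# e.g. a 7-frame 60x40 stack and a 3x7x7 kernel give a 5x54x34 feature map
print(conv3d_single(np.random.rand(7, 60, 40), np.random.rand(3, 7, 7), 0.1).shape)
```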
Step 004. The down-sampling layer S1 performs, for each feature map in the N1 feature map groups, down-sampling with a first preset sampling window of equal horizontal and vertical proportion, and updates the pixel size of each feature map in the N1 feature map groups.
The above down-sampling layer S1 performs step 004 according to

v_{S1,m} = f( α_{S1,m} down1(v_{C1,m}) + β_{S1,m} )

wherein v_{S1,m} denotes the output obtained after down-sampling layer S1 down-samples the m-th feature map of convolutional layer C1, v_{C1,m} denotes the m-th feature map of convolutional layer C1, down1(·) denotes the down-sampling function of layer S1, α_{S1,m} and β_{S1,m} denote the multiplicative bias and the additive bias of the m-th feature map of down-sampling layer S1, and f(·) denotes the activation function.
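As an illustration of this down-sampling step, the NumPy sketch below partitions a feature map into non-overlapping windows, averages each window as down1(·), scales by the multiplicative bias, adds the additive bias and applies the activation; average pooling and the sigmoid are assumptions, since the patent fixes neither the pooling function nor f(·).

```python
# Sketch of the S1 step: non-overlapping k x k windows are averaged (down1),
# then scaled by alpha, shifted by beta and passed through the activation.
import numpy as np

def downsample(v, k, alpha, beta):
    """v: feature map (H, W) with H and W divisible by k."""
    H, W = v.shape
    pooled = v.reshape(H // k, k, W // k, k).mean(axis=(1, 3))   # down1(v)
    return 1.0 / (1.0 + np.exp(-(alpha * pooled + beta)))        # f(alpha*down1(v)+beta)

print(downsample(np.random.rand(54, 34), 2, 1.0, 0.0).shape)     # (27, 17)
```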
Step 005. The convolutional layer C2 applies, to each of the N1 feature map groups, N2 preset 3D convolution kernels of different kinds but identical size to perform convolution, obtaining N1*N2 feature map groups, the 3D convolution kernels covering both the spatial and the temporal dimension.
The above convolutional layer C2 performs step 005 according to

v_{C2,i}^{(x,y,z)} = f( Σ_{p=0}^{P_{C2,i}-1} Σ_{q=0}^{Q_{C2,i}-1} Σ_{r=0}^{R_{C2,i}-1} w_{C2,i}^{(p,q,r)} v_{S1}^{(x+p)(y+q)(z+r)} + b_{C2,i} )

wherein v_{C2,i}^{(x,y,z)} denotes the value of a pixel (x, y, z) on the i-th feature map of convolutional layer C2, i.e. the output obtained after convolutional layer C2 performs 3D convolution on the i-th feature map of down-sampling layer S1; w_{C2,i}^{(p,q,r)} denotes the 3D convolution kernel applied by convolutional layer C2 to the i-th feature map, with P_{C2,i}, Q_{C2,i} and R_{C2,i} denoting the size of that kernel; b_{C2,i} denotes the additive bias of the i-th feature map of convolutional layer C2; f(·) denotes the activation function; and v_{S1}^{(x+p)(y+q)(z+r)} denotes a point on the i-th feature map of down-sampling layer S1.
Step 006. The down-sampling layer S2 performs, for each feature map in the N1*N2 feature map groups, down-sampling with a second preset sampling window of equal horizontal and vertical proportion, and updates the pixel size of each feature map in the N1*N2 feature map groups.
The above down-sampling layer S2 performs step 006 according to

v_{S2,n} = f( α_{S2,n} down2(v_{C2,n}) + β_{S2,n} )

wherein v_{S2,n} denotes the feature map obtained after down-sampling layer S2 down-samples the n-th feature map of convolutional layer C2, v_{C2,n} denotes the n-th feature map of convolutional layer C2, down2(·) denotes the down-sampling function of layer S2, α_{S2,n} and β_{S2,n} denote the multiplicative bias and the additive bias of the n-th feature map of down-sampling layer S2, and f(·) denotes the activation function.
Step 007. The convolutional layer C3 applies, to each of the N1*N2 feature map groups, one preset 2D convolution kernel whose size equals the pixel size of the feature maps to perform convolution in the spatial dimension, and updates the N1*N2 feature map groups.
The above convolutional layer C3 performs step 007 according to

v_{C3,k}^{(x,y)} = f( Σ_{p=0}^{P_{C3,k}-1} Σ_{q=0}^{Q_{C3,k}-1} w_{C3,k}^{(p,q)} v_{S2}^{(x+p)(y+q)} + b_{C3,k} )

wherein v_{C3,k}^{(x,y)} denotes the value of a pixel (x, y) on the k-th feature map of convolutional layer C3, i.e. the output obtained after convolutional layer C3 performs 2D convolution on the k-th feature map of down-sampling layer S2; w_{C3,k}^{(p,q)} denotes the 2D convolution kernel applied by convolutional layer C3 to the k-th feature map, with P_{C3,k} and Q_{C3,k} denoting the size of that kernel; b_{C3,k} denotes the additive bias of the k-th feature map of convolutional layer C3; f(·) denotes the activation function; and v_{S2}^{(x+p)(y+q)} denotes a point on the k-th feature map of down-sampling layer S2.
Step 008. The fully connected layer obtains the feature vectors corresponding to the N1*N2 feature map groups.
Step 009. The classification layer (Softmax layer) classifies each feature vector with a neural network technique, selects the neuron with the largest output value, and takes the micro-expression class corresponding to that neuron as the micro-expression recognition result of the micro-expression image sequence to be recognized.
With the micro-expression recognition method based on a 3D convolutional neural network designed in the above technical solution, five classes of micro-expressions (happiness, disgust, repression, surprise and others) can be recognized effectively on the basis of the constructed 3D convolutional neural network (3D-CNN) model. The designed micro-expression recognition method is simple and efficient: it requires no pipeline of feature extraction, feature dimensionality reduction and classification on the sample data, which greatly reduces the difficulty of preprocessing; local receptive fields and weight sharing reduce the number of parameters the neural network has to train and greatly reduce the complexity of the algorithm; moreover, the down-sampling operations of the down-sampling layers strengthen the robustness of the network, so that a certain degree of image distortion can be tolerated.
As shown in Fig. 2, when the micro-expression recognition method based on a 3D convolutional neural network designed by the present invention is applied in practice, the model parameters of the 3D convolutional neural network model (3D-CNN) are first trained with the stochastic diagonal Levenberg-Marquardt method, and then the following concrete steps are performed:
Step 001. Perform pixel-size normalization on every frame of the micro-expression image sequence to be recognized, so that every frame is 60*40 pixels; the micro-expression image sequence to be recognized consists of 7 frames.
Step 002. The hardwired layer H1 extracts, for each frame of the micro-expression image sequence to be recognized, a gray channel feature map, a horizontal-gradient channel feature map, a vertical-gradient channel feature map, a horizontal optical-flow channel feature map and a vertical optical-flow channel feature map, thereby obtaining one feature map group corresponding to the micro-expression image sequence to be recognized; since the optical-flow information in the horizontal and vertical directions requires two consecutive frames to compute, the number of feature maps in the hardwired layer H1 is 7*3+6*2=33.
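Such a hardwired layer could be computed, for example, with OpenCV. The sketch below is an assumption about the concrete operators (Sobel filters for the gradient channels, the Farnebäck algorithm for the optical-flow channels); the patent only names the five channel types. For a 7-frame sequence it yields the 7*3+6*2=33 maps mentioned above.

```python
# Possible hardwired layer H1: gray, x/y gradients (Sobel) and x/y optical
# flow (Farneback) for a 7-frame sequence of 60x40-pixel grayscale images
# (40 rows, 60 columns assumed). Operator choices are assumptions.
import cv2
import numpy as np

def hardwired_layer(frames):
    """frames: list of 7 grayscale uint8 images."""
    gray  = [f.astype(np.float32) for f in frames]
    gradx = [cv2.Sobel(f, cv2.CV_32F, 1, 0, ksize=3) for f in frames]
    grady = [cv2.Sobel(f, cv2.CV_32F, 0, 1, ksize=3) for f in frames]
    flowx, flowy = [], []
    for prev, curr in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)  # (H, W, 2)
        flowx.append(flow[..., 0])
        flowy.append(flow[..., 1])
    return gray + gradx + grady + flowx + flowy      # 7 + 7 + 7 + 6 + 6 = 33 maps

maps = hardwired_layer([np.random.randint(0, 255, (40, 60), dtype=np.uint8)
                        for _ in range(7)])
print(len(maps))                                      # 33
```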
Step 003. The convolutional layer C1 applies 2 different 3D convolution kernels, each of size 7*7*3 (7*7 in the spatial dimension, 3 in the temporal dimension), to the feature map group and performs convolution according to the formula, obtaining 2 feature map groups; each feature map group contains 23=(7-3+1)*3+(6-3+1)*2 feature maps of size 54*34=(60-7+1)*(40-7+1).
Step 004. The down-sampling layer S1 applies a 2*2 sampling window to each feature map in the 2 feature map groups and performs down-sampling according to the formula, updating the pixel size of each feature map in the 2 feature map groups; this yields the same number of feature maps with reduced spatial resolution, the feature map size after down-sampling being 27*17=(54/2)*(34/2).
Step 005. The convolutional layer C2 applies, to each of the 2 feature map groups, 3 preset different 3D convolution kernels, each of size 7*6*3 (7*6 in the spatial dimension, 3 in the temporal dimension), and performs convolution according to the formula, obtaining 6 feature map groups; each feature map group contains 13=((7-3+1)-3+1)*3+((6-3+1)-3+1)*2 feature maps of size 21*12=(27-7+1)*(17-6+1).
Step 006. The down-sampling layer S2 applies a 3*3 sampling window to each feature map in the 6 feature map groups and performs down-sampling according to the formula, updating the pixel size of each feature map in the 6 feature map groups; the feature map size after down-sampling is 7*4=(21/3)*(12/3), so that the same number of feature maps with reduced spatial resolution is obtained.
Step 007. The convolutional layer C3 applies a 7*4 2D convolution kernel to each of the 6 feature map groups and performs convolution in the spatial dimension according to the formula, updating the 6 feature map groups; the output feature maps are thereby reduced to size 1*1.
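The feature-map sizes and counts quoted in steps 003 to 007 can be verified with a few lines of arithmetic (valid convolution: output = input - kernel + 1; down-sampling: output = input / window); the snippet below simply reproduces the numbers given above.

```python
# Size bookkeeping for the embodiment (60x40 frames).
h, w = 60, 40
h, w = h - 7 + 1, w - 7 + 1      # C1, 7x7 spatial kernel  -> 54 x 34
h, w = h // 2, w // 2            # S1, 2x2 window          -> 27 x 17
h, w = h - 7 + 1, w - 6 + 1      # C2, 7x6 spatial kernel  -> 21 x 12
h, w = h // 3, w // 3            # S2, 3x3 window          -> 7 x 4
h, w = h - 7 + 1, w - 4 + 1      # C3, 7x4 2D kernel       -> 1 x 1
print(h, w)                      # 1 1

maps_h1 = 7 * 3 + 6 * 2                                            # 33 hardwired maps
maps_c1 = (7 - 3 + 1) * 3 + (6 - 3 + 1) * 2                        # 23 maps per C1 group
maps_c2 = ((7 - 3 + 1) - 3 + 1) * 3 + ((6 - 3 + 1) - 3 + 1) * 2    # 13 maps per C2 group
print(maps_h1, maps_c1, maps_c2)  # 33 23 13
```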
Step 008. The fully connected layer obtains the feature vectors corresponding to the 6 feature map groups, finally yielding a single 128-dimensional feature vector.
Step 009. The classification layer (Softmax layer) classifies each feature vector with a neural network technique; each neuron of the classification layer (Softmax layer) outputs a value between 0 and 1 reflecting the probability that the input sample belongs to the corresponding class; the neuron with the largest output value is selected, and the micro-expression class corresponding to that neuron is taken as the micro-expression recognition result of the micro-expression image sequence to be recognized.
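The decision rule of step 009 reduces to a Softmax followed by an arg-max over the five classes listed in the summary of the invention; the sketch below uses made-up classification-layer activations purely for illustration.

```python
# Step 009 in a nutshell: Softmax turns the classification-layer activations
# into probabilities and the class with the largest output wins.
import numpy as np

classes = ["happiness", "disgust", "repression", "surprise", "others"]
logits = np.array([0.3, 2.1, 0.4, -0.5, 0.9])           # illustrative activations
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(classes[int(np.argmax(probs))], probs.round(3))    # predicted class, probabilities
```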
Embodiments of the present invention have been explained in detail above with reference to the accompanying drawings, but the present invention is not limited to the above embodiments; various changes can be made within the knowledge of a person of ordinary skill in the art without departing from the concept of the present invention.

Claims (7)

1. A micro-expression recognition method based on a 3D convolutional neural network, characterized in that it comprises the following steps:
Step 001. Perform pixel-size normalization on every frame of the micro-expression image sequence to be recognized;
Step 002. For each frame of the micro-expression image sequence to be recognized, extract a gray channel feature map, a horizontal-gradient channel feature map, a vertical-gradient channel feature map, a horizontal optical-flow channel feature map and a vertical optical-flow channel feature map, thereby obtaining one feature map group corresponding to the micro-expression image sequence to be recognized;
Step 003. Apply N1 preset 3D convolution kernels of different kinds but identical size to the feature map group to perform convolution, obtaining N1 feature map groups, the 3D convolution kernels covering both the spatial and the temporal dimension;
Step 004. For each feature map in the N1 feature map groups, perform down-sampling (dimensionality reduction) with a first preset sampling window of equal horizontal and vertical proportion, and update the pixel size of each feature map in the N1 feature map groups;
Step 005. For each of the N1 feature map groups, apply N2 preset 3D convolution kernels of different kinds but identical size to perform convolution, obtaining N1*N2 feature map groups, the 3D convolution kernels covering both the spatial and the temporal dimension;
Step 006. For each feature map in the N1*N2 feature map groups, perform down-sampling with a second preset sampling window of equal horizontal and vertical proportion, and update the pixel size of each feature map in the N1*N2 feature map groups;
Step 007. For each of the N1*N2 feature map groups, apply one preset 2D convolution kernel whose size equals the pixel size of the feature maps to perform convolution in the spatial dimension, and update the N1*N2 feature map groups;
Step 008. Obtain the feature vectors corresponding to the N1*N2 feature map groups;
Step 009. Classify each feature vector with a neural network technique, select the neuron with the largest output value, and take the micro-expression class corresponding to that neuron as the micro-expression recognition result of the micro-expression image sequence to be recognized.
2. The micro-expression recognition method based on a 3D convolutional neural network according to claim 1, characterized in that: the micro-expression recognition method is implemented with a 3D convolutional neural network model which comprises, from the input onwards, a hardwired layer H1, a convolutional layer C1, a down-sampling layer S1, a convolutional layer C2, a down-sampling layer S2, a convolutional layer C3, a fully connected layer and a classification layer; after step 001 has been executed, the 3D convolutional neural network model operates on each frame of the micro-expression image sequence to be recognized, wherein the hardwired layer H1 performs step 002, the convolutional layer C1 performs step 003, the down-sampling layer S1 performs step 004, the convolutional layer C2 performs step 005, the down-sampling layer S2 performs step 006, the convolutional layer C3 performs step 007, the fully connected layer performs step 008, and the classification layer performs step 009.
3. The micro-expression recognition method based on a 3D convolutional neural network according to claim 2, characterized in that: the model parameters of the 3D convolutional neural network model are trained with a preset model training method; after step 001 has been executed, the trained 3D convolutional neural network model performs steps 002 to 009 on each frame of the micro-expression image sequence to be recognized.
4. The micro-expression recognition method based on a 3D convolutional neural network according to claim 3, characterized in that: the preset model training method is the stochastic diagonal Levenberg-Marquardt method, which is used to train the model parameters of the 3D convolutional neural network model.
5. The micro-expression recognition method based on a 3D convolutional neural network according to claim 2, characterized in that: the convolutional layer C1 performs step 003 according to

v_{C1,j}^{(x,y,z)} = f( Σ_{p=0}^{P_{C1,j}-1} Σ_{q=0}^{Q_{C1,j}-1} Σ_{r=0}^{R_{C1,j}-1} w_{C1,j}^{(p,q,r)} v_{H1}^{(x+p)(y+q)(z+r)} + b_{C1,j} )

wherein v_{C1,j}^{(x,y,z)} denotes the value of a pixel (x, y, z) on the j-th feature map of convolutional layer C1, i.e. the output obtained after convolutional layer C1 performs 3D convolution on the j-th feature map of hardwired layer H1; w_{C1,j}^{(p,q,r)} denotes the 3D convolution kernel applied by convolutional layer C1 to the j-th feature map, with P_{C1,j}, Q_{C1,j} and R_{C1,j} denoting the size of that kernel; b_{C1,j} denotes the additive bias of the j-th feature map of convolutional layer C1; f(·) denotes the activation function; and v_{H1}^{(x+p)(y+q)(z+r)} denotes a point on the j-th feature map of hardwired layer H1;

the convolutional layer C2 performs step 005 according to

v_{C2,i}^{(x,y,z)} = f( Σ_{p=0}^{P_{C2,i}-1} Σ_{q=0}^{Q_{C2,i}-1} Σ_{r=0}^{R_{C2,i}-1} w_{C2,i}^{(p,q,r)} v_{S1}^{(x+p)(y+q)(z+r)} + b_{C2,i} )

wherein v_{C2,i}^{(x,y,z)} denotes the value of a pixel (x, y, z) on the i-th feature map of convolutional layer C2, i.e. the output obtained after convolutional layer C2 performs 3D convolution on the i-th feature map of down-sampling layer S1; w_{C2,i}^{(p,q,r)} denotes the 3D convolution kernel applied by convolutional layer C2 to the i-th feature map, with P_{C2,i}, Q_{C2,i} and R_{C2,i} denoting the size of that kernel; b_{C2,i} denotes the additive bias of the i-th feature map of convolutional layer C2; f(·) denotes the activation function; and v_{S1}^{(x+p)(y+q)(z+r)} denotes a point on the i-th feature map of down-sampling layer S1.
6. The micro-expression recognition method based on a 3D convolutional neural network according to claim 2, characterized in that: the down-sampling layer S1 performs step 004 according to

v_{S1,m} = f( α_{S1,m} down1(v_{C1,m}) + β_{S1,m} )

wherein v_{S1,m} denotes the output obtained after down-sampling layer S1 down-samples the m-th feature map of convolutional layer C1, v_{C1,m} denotes the m-th feature map of convolutional layer C1, down1(·) denotes the down-sampling function of layer S1, α_{S1,m} and β_{S1,m} denote the multiplicative bias and the additive bias of the m-th feature map of down-sampling layer S1, and f(·) denotes the activation function;

the down-sampling layer S2 performs step 006 according to

v_{S2,n} = f( α_{S2,n} down2(v_{C2,n}) + β_{S2,n} )

wherein v_{S2,n} denotes the feature map obtained after down-sampling layer S2 down-samples the n-th feature map of convolutional layer C2, v_{C2,n} denotes the n-th feature map of convolutional layer C2, down2(·) denotes the down-sampling function of layer S2, α_{S2,n} and β_{S2,n} denote the multiplicative bias and the additive bias of the n-th feature map of down-sampling layer S2, and f(·) denotes the activation function.
7. The micro-expression recognition method based on a 3D convolutional neural network according to claim 2, characterized in that: the convolutional layer C3 performs step 007 according to

v_{C3,k}^{(x,y)} = f( Σ_{p=0}^{P_{C3,k}-1} Σ_{q=0}^{Q_{C3,k}-1} w_{C3,k}^{(p,q)} v_{S2}^{(x+p)(y+q)} + b_{C3,k} )

wherein v_{C3,k}^{(x,y)} denotes the value of a pixel (x, y) on the k-th feature map of convolutional layer C3, i.e. the output obtained after convolutional layer C3 performs 2D convolution on the k-th feature map of down-sampling layer S2; w_{C3,k}^{(p,q)} denotes the 2D convolution kernel applied by convolutional layer C3 to the k-th feature map, with P_{C3,k} and Q_{C3,k} denoting the size of that kernel; b_{C3,k} denotes the additive bias of the k-th feature map of convolutional layer C3; f(·) denotes the activation function; and v_{S2}^{(x+p)(y+q)} denotes a point on the k-th feature map of down-sampling layer S2.
CN201610954555.9A 2016-10-27 2016-10-27 Micro-expression recognition method based on a 3D convolutional neural network Active CN106570474B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610954555.9A CN106570474B (en) 2016-10-27 2016-10-27 Micro-expression recognition method based on a 3D convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610954555.9A CN106570474B (en) 2016-10-27 2016-10-27 A kind of micro- expression recognition method based on 3D convolutional neural networks

Publications (2)

Publication Number Publication Date
CN106570474A true CN106570474A (en) 2017-04-19
CN106570474B CN106570474B (en) 2019-06-28

Family

ID=58535272

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610954555.9A Active CN106570474B (en) 2016-10-27 2016-10-27 Micro-expression recognition method based on a 3D convolutional neural network

Country Status (1)

Country Link
CN (1) CN106570474B (en)

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107242876A (en) * 2017-04-20 2017-10-13 合肥工业大学 A kind of computer vision methods for state of mind auxiliary diagnosis
CN107273876A (en) * 2017-07-18 2017-10-20 山东大学 A kind of micro- expression automatic identifying method of ' the grand micro- transformation models of to ' based on deep learning
CN107291232A (en) * 2017-06-20 2017-10-24 深圳市泽科科技有限公司 A kind of somatic sensation television game exchange method and system based on deep learning and big data
CN107316004A (en) * 2017-06-06 2017-11-03 西北工业大学 Space Target Recognition based on deep learning
CN107316015A (en) * 2017-06-19 2017-11-03 南京邮电大学 A kind of facial expression recognition method of high accuracy based on depth space-time characteristic
CN107330393A (en) * 2017-06-27 2017-11-07 南京邮电大学 A kind of neonatal pain expression recognition method based on video analysis
CN107679526A (en) * 2017-11-14 2018-02-09 北京科技大学 A kind of micro- expression recognition method of face
CN107977634A (en) * 2017-12-06 2018-05-01 北京飞搜科技有限公司 A kind of expression recognition method, device and equipment for video
CN108062416A (en) * 2018-01-04 2018-05-22 百度在线网络技术(北京)有限公司 For generating the method and apparatus of label on map
CN108319900A (en) * 2018-01-16 2018-07-24 南京信息工程大学 A kind of basic facial expression sorting technique
CN108388537A (en) * 2018-03-06 2018-08-10 上海熠知电子科技有限公司 A kind of convolutional neural networks accelerator and method
CN108596069A (en) * 2018-04-18 2018-09-28 南京邮电大学 Neonatal pain expression recognition method and system based on depth 3D residual error networks
CN108764207A (en) * 2018-06-07 2018-11-06 厦门大学 A kind of facial expression recognizing method based on multitask convolutional neural networks
CN109034143A (en) * 2018-11-01 2018-12-18 云南大学 The micro- expression recognition method of face based on video amplifier and deep learning
CN109215665A (en) * 2018-07-20 2019-01-15 广东工业大学 A kind of method for recognizing sound-groove based on 3D convolutional neural networks
CN109271930A (en) * 2018-09-14 2019-01-25 广州杰赛科技股份有限公司 Micro- expression recognition method, device and storage medium
CN109389045A (en) * 2018-09-10 2019-02-26 广州杰赛科技股份有限公司 Micro- expression recognition method and device based on mixing space-time convolution model
CN109559535A (en) * 2018-11-22 2019-04-02 深圳市博远交通设施有限公司 A kind of dynamic sound-light coordination traffic signal system of integration recognition of face
CN109784312A (en) * 2019-02-18 2019-05-21 深圳锐取信息技术股份有限公司 Teaching Management Method and device
CN109977925A (en) * 2019-04-22 2019-07-05 北京字节跳动网络技术有限公司 Expression determines method, apparatus and electronic equipment
CN110059593A (en) * 2019-04-01 2019-07-26 华侨大学 A kind of human facial expression recognition method based on feedback convolutional neural networks
CN110188706A (en) * 2019-06-03 2019-08-30 南京邮电大学 Neural network training method and detection method based on facial expression in the video for generating confrontation network
CN110287801A (en) * 2019-05-29 2019-09-27 中国电子科技集团公司电子科学研究院 A kind of micro- Expression Recognition algorithm
CN110532900A (en) * 2019-08-09 2019-12-03 西安电子科技大学 Facial expression recognizing method based on U-Net and LS-CNN
WO2020103700A1 (en) * 2018-11-21 2020-05-28 腾讯科技(深圳)有限公司 Image recognition method based on micro facial expressions, apparatus and related device
CN111767842A (en) * 2020-06-29 2020-10-13 杭州电子科技大学 Micro-expression type distinguishing method based on transfer learning and self-encoder data enhancement
CN111967344A (en) * 2020-07-28 2020-11-20 南京信息工程大学 Refined feature fusion method for face forgery video detection
CN112183333A (en) * 2020-09-27 2021-01-05 苏州工业职业技术学院 Human screen interaction method, system and device based on micro-expressions
CN112580555A (en) * 2020-12-25 2021-03-30 中国科学技术大学 Spontaneous micro-expression recognition method
CN112784804A (en) * 2021-02-03 2021-05-11 杭州电子科技大学 Micro-expression recognition method based on neural network sensitivity analysis
CN113033324A (en) * 2021-03-03 2021-06-25 广东省地质环境监测总站 Geological disaster precursor factor identification method and device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020602A (en) * 2012-10-12 2013-04-03 北京建筑工程学院 Face recognition method based on neural network
CN103258204A (en) * 2012-02-21 2013-08-21 中国科学院心理研究所 Automatic micro-expression recognition method based on Gabor features and edge orientation histogram (EOH) features
US20160275341A1 (en) * 2015-03-18 2016-09-22 Adobe Systems Incorporated Facial Expression Capture for Character Animation

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103258204A (en) * 2012-02-21 2013-08-21 中国科学院心理研究所 Automatic micro-expression recognition method based on Gabor features and edge orientation histogram (EOH) features
CN103020602A (en) * 2012-10-12 2013-04-03 北京建筑工程学院 Face recognition method based on neural network
US20160275341A1 (en) * 2015-03-18 2016-09-22 Adobe Systems Incorporated Facial Expression Capture for Character Animation

Cited By (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107242876A (en) * 2017-04-20 2017-10-13 合肥工业大学 A kind of computer vision methods for state of mind auxiliary diagnosis
CN107316004A (en) * 2017-06-06 2017-11-03 西北工业大学 Space Target Recognition based on deep learning
CN107316015A (en) * 2017-06-19 2017-11-03 南京邮电大学 A kind of facial expression recognition method of high accuracy based on depth space-time characteristic
CN107316015B (en) * 2017-06-19 2020-06-30 南京邮电大学 High-precision facial expression recognition method based on deep space-time characteristics
CN107291232A (en) * 2017-06-20 2017-10-24 深圳市泽科科技有限公司 A kind of somatic sensation television game exchange method and system based on deep learning and big data
CN107330393A (en) * 2017-06-27 2017-11-07 南京邮电大学 A kind of neonatal pain expression recognition method based on video analysis
CN107273876A (en) * 2017-07-18 2017-10-20 山东大学 A kind of micro- expression automatic identifying method of ' the grand micro- transformation models of to ' based on deep learning
CN107273876B (en) * 2017-07-18 2019-09-10 山东大学 A kind of micro- expression automatic identifying method of ' the macro micro- transformation model of to ' based on deep learning
CN107679526A (en) * 2017-11-14 2018-02-09 北京科技大学 A kind of micro- expression recognition method of face
CN107679526B (en) * 2017-11-14 2020-06-12 北京科技大学 Human face micro-expression recognition method
CN107977634A (en) * 2017-12-06 2018-05-01 北京飞搜科技有限公司 A kind of expression recognition method, device and equipment for video
CN108062416A (en) * 2018-01-04 2018-05-22 百度在线网络技术(北京)有限公司 For generating the method and apparatus of label on map
CN108062416B (en) * 2018-01-04 2019-10-29 百度在线网络技术(北京)有限公司 Method and apparatus for generating label on map
CN108319900A (en) * 2018-01-16 2018-07-24 南京信息工程大学 A kind of basic facial expression sorting technique
CN108388537A (en) * 2018-03-06 2018-08-10 上海熠知电子科技有限公司 A kind of convolutional neural networks accelerator and method
CN108596069A (en) * 2018-04-18 2018-09-28 南京邮电大学 Neonatal pain expression recognition method and system based on depth 3D residual error networks
CN108764207A (en) * 2018-06-07 2018-11-06 厦门大学 A kind of facial expression recognizing method based on multitask convolutional neural networks
CN108764207B (en) * 2018-06-07 2021-10-19 厦门大学 Face expression recognition method based on multitask convolutional neural network
CN109215665A (en) * 2018-07-20 2019-01-15 广东工业大学 A kind of method for recognizing sound-groove based on 3D convolutional neural networks
CN109389045A (en) * 2018-09-10 2019-02-26 广州杰赛科技股份有限公司 Micro- expression recognition method and device based on mixing space-time convolution model
CN109271930A (en) * 2018-09-14 2019-01-25 广州杰赛科技股份有限公司 Micro- expression recognition method, device and storage medium
CN109271930B (en) * 2018-09-14 2020-11-13 广州杰赛科技股份有限公司 Micro-expression recognition method, device and storage medium
CN109034143A (en) * 2018-11-01 2018-12-18 云南大学 The micro- expression recognition method of face based on video amplifier and deep learning
WO2020103700A1 (en) * 2018-11-21 2020-05-28 腾讯科技(深圳)有限公司 Image recognition method based on micro facial expressions, apparatus and related device
CN109559535A (en) * 2018-11-22 2019-04-02 深圳市博远交通设施有限公司 A kind of dynamic sound-light coordination traffic signal system of integration recognition of face
CN109784312A (en) * 2019-02-18 2019-05-21 深圳锐取信息技术股份有限公司 Teaching Management Method and device
CN110059593B (en) * 2019-04-01 2022-09-30 华侨大学 Facial expression recognition method based on feedback convolutional neural network
CN110059593A (en) * 2019-04-01 2019-07-26 华侨大学 A kind of human facial expression recognition method based on feedback convolutional neural networks
CN109977925A (en) * 2019-04-22 2019-07-05 北京字节跳动网络技术有限公司 Expression determines method, apparatus and electronic equipment
CN109977925B (en) * 2019-04-22 2020-11-27 北京字节跳动网络技术有限公司 Expression determination method and device and electronic equipment
CN110287801A (en) * 2019-05-29 2019-09-27 中国电子科技集团公司电子科学研究院 A kind of micro- Expression Recognition algorithm
CN110188706A (en) * 2019-06-03 2019-08-30 南京邮电大学 Neural network training method and detection method based on facial expression in the video for generating confrontation network
CN110188706B (en) * 2019-06-03 2022-04-19 南京邮电大学 Neural network training method and detection method based on character expression in video for generating confrontation network
CN110532900B (en) * 2019-08-09 2021-07-27 西安电子科技大学 Facial expression recognition method based on U-Net and LS-CNN
CN110532900A (en) * 2019-08-09 2019-12-03 西安电子科技大学 Facial expression recognizing method based on U-Net and LS-CNN
CN111767842A (en) * 2020-06-29 2020-10-13 杭州电子科技大学 Micro-expression type distinguishing method based on transfer learning and self-encoder data enhancement
CN111767842B (en) * 2020-06-29 2024-02-06 杭州电子科技大学 Micro-expression type discrimination method based on transfer learning and self-encoder data enhancement
CN111967344A (en) * 2020-07-28 2020-11-20 南京信息工程大学 Refined feature fusion method for face forgery video detection
CN111967344B (en) * 2020-07-28 2023-06-20 南京信息工程大学 Face fake video detection oriented refinement feature fusion method
CN112183333B (en) * 2020-09-27 2021-12-10 苏州工业职业技术学院 Human screen interaction method, system and device based on micro-expressions
CN112183333A (en) * 2020-09-27 2021-01-05 苏州工业职业技术学院 Human screen interaction method, system and device based on micro-expressions
CN112580555B (en) * 2020-12-25 2022-09-30 中国科学技术大学 Spontaneous micro-expression recognition method
CN112580555A (en) * 2020-12-25 2021-03-30 中国科学技术大学 Spontaneous micro-expression recognition method
CN112784804A (en) * 2021-02-03 2021-05-11 杭州电子科技大学 Micro-expression recognition method based on neural network sensitivity analysis
CN112784804B (en) * 2021-02-03 2024-03-19 杭州电子科技大学 Micro expression recognition method based on neural network sensitivity analysis
CN113033324A (en) * 2021-03-03 2021-06-25 广东省地质环境监测总站 Geological disaster precursor factor identification method and device, electronic equipment and storage medium
CN113033324B (en) * 2021-03-03 2024-03-08 广东省地质环境监测总站 Geological disaster precursor factor identification method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN106570474B (en) 2019-06-28

Similar Documents

Publication Publication Date Title
CN106570474A (en) Micro expression recognition method based on 3D convolution neural network
CN107844795B (en) Convolutional neural networks feature extracting method based on principal component analysis
CN107729819A (en) A kind of face mask method based on sparse full convolutional neural networks
CN106778604B (en) Pedestrian re-identification method based on matching convolutional neural network
CN107016406A (en) The pest and disease damage image generating method of network is resisted based on production
CN108961245A (en) Picture quality classification method based on binary channels depth parallel-convolution network
CN108510012A (en) A kind of target rapid detection method based on Analysis On Multi-scale Features figure
CN108830252A (en) A kind of convolutional neural networks human motion recognition method of amalgamation of global space-time characteristic
CN110348376A (en) A kind of pedestrian's real-time detection method neural network based
CN107292813A (en) A kind of multi-pose Face generation method based on generation confrontation network
CN105894045A (en) Vehicle type recognition method with deep network model based on spatial pyramid pooling
CN107945153A (en) A kind of road surface crack detection method based on deep learning
CN107506695A (en) Video monitoring equipment failure automatic detection method
CN107749052A (en) Image defogging method and system based on deep learning neutral net
CN110532900A (en) Facial expression recognizing method based on U-Net and LS-CNN
CN108090403A (en) A kind of face dynamic identifying method and system based on 3D convolutional neural networks
CN105469100A (en) Deep learning-based skin biopsy image pathological characteristic recognition method
CN109359681A (en) A kind of field crop pest and disease disasters recognition methods based on the full convolutional neural networks of improvement
CN103208097B (en) Filtering method is worked in coordination with in the principal component analysis of the multi-direction morphosis grouping of image
CN109359527B (en) Hair region extraction method and system based on neural network
CN109753864A (en) A kind of face identification method based on caffe deep learning frame
CN107944428A (en) A kind of indoor scene semanteme marking method based on super-pixel collection
CN109508675A (en) A kind of pedestrian detection method for complex scene
CN103646255A (en) Face detection method based on Gabor characteristics and extreme learning machine
CN108985252A (en) The image classification method of improved pulse deep neural network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 66, New Model Road, Gulou District, Nanjing City, Jiangsu Province, 210000

Applicant after: Nanjing Post & Telecommunication Univ.

Address before: 210000 Wenyuan Road, Yadong New District, Nanjing City, Jiangsu Province

Applicant before: Nanjing Post & Telecommunication Univ.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20170419

Assignee: Nanjing causal Artificial Intelligence Research Institute Co., Ltd

Assignor: Nanjing Post & Telecommunication Univ.

Contract record no.: X2019320000168

Denomination of invention: Micro expression recognition method based on 3D convolution neural network

Granted publication date: 20190628

License type: Common License

Record date: 20191028

EE01 Entry into force of recordation of patent licensing contract