CN107292256A - Deep convolutional wavelet neural network facial expression recognition method based on auxiliary tasks - Google Patents
- Publication number
- CN107292256A CN107292256A CN201710446076.0A CN201710446076A CN107292256A CN 107292256 A CN107292256 A CN 107292256A CN 201710446076 A CN201710446076 A CN 201710446076A CN 107292256 A CN107292256 A CN 107292256A
- Authority
- CN
- China
- Prior art keywords
- layer
- image
- expression
- network
- frequency sub-band
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention discloses a deep convolutional wavelet neural network facial expression recognition method based on auxiliary tasks, addressing the problems that existing feature selection operators cannot learn expression features efficiently and cannot extract features carrying enough expression information for classification. The invention is implemented as follows: build a deep convolutional wavelet neural network; establish a facial expression image set and a corresponding expression-sensitive region image set; input facial expression images into the network; train the deep convolutional wavelet neural network; back-propagate the network error; update each convolution kernel and bias vector of the network; input the expression-sensitive region images into the trained network; learn the weighting proportion of the auxiliary task; obtain the global classification labels of the network; and compute the recognition accuracy from the global labels. The invention takes into account both the abstract and the detailed information of facial expression images and enhances the influence of expression-sensitive regions on expression feature learning, thereby clearly improving the accuracy of expression recognition. It can be applied to expression recognition on facial expression images.
Description
Technical field
The invention belongs to the technical field of image processing and mainly relates to computer vision recognition; specifically, it is a deep convolutional wavelet neural network facial expression recognition method based on auxiliary tasks. It can be applied to learning and classifying expression features in facial expression recognition.
Background technology
Facial expression recognition is a frontier technology in image processing and computer vision. It is a key step on the way from image processing to image analysis, and the quality of its results directly influences subsequent image analysis and understanding. The purpose of facial expression recognition is to study the coding model of facial expressions, to learn and extract feature representations of facial expressions, and to enable computers to automatically synthesize, track, and recognize facial expressions.
At present, research on facial expression recognition mainly revolves around two aspects: feature extraction and classification algorithms. In recent years, facial expression recognition methods based on deep learning have been studied; in particular, deep convolutional neural networks, which excel at processing two-dimensional images, have been applied to the expression recognition field. However, a deep convolutional neural network in the general sense focuses on the abstract mapping of an image from low level to high level in order to obtain a high-level feature representation, and in doing so ignores the texture and detail information of the facial expression image. Moreover, the commonly used deep networks are usually single-task networks, which cannot effectively highlight the main contribution of expression-sensitive regions to the feature representation when learning expression features.
Existing expression recognition techniques mainly perform feature selection first and then classify. In the feature selection step, however, existing feature selection operators cannot learn expression features efficiently, so the subsequent classification does not achieve good results. In addition, Lv Yadan et al. employed a deep autoencoder network as the classifier, which also does not avoid the feature selection step, so the final improvement in classification performance is small.
The content of the invention
In view of the above deficiencies of the prior art, the present invention proposes a deep convolutional wavelet neural network facial expression recognition method based on auxiliary tasks.
The present invention is a deep convolutional wavelet neural network facial expression recognition method based on auxiliary tasks, characterized by comprising the following steps:
(1) Build a deep convolutional wavelet network consisting of three convolutional layers, two pooling layers, one multi-scale transform layer, one fully connected layer, and one softmax output layer; the bias weight matrices of the convolutional layers are initialized to zero matrices, and the Sigmoid function is chosen as the network activation function;
(2) Establish a facial expression image set and an expression-sensitive region image set; the expression-sensitive region image set is obtained by cropping the eye-eyebrow and mouth regions from the facial expression image set. Part of the facial expression image data set is used as the training image set of the network, and the remaining images are used as the test image set;
(3) Input a training image into the deep convolutional wavelet network; the size of the input image is 96*96;
(4) The first layer of the deep convolutional wavelet network is a convolutional layer, which performs a convolution operation on each input facial expression training image; the number of convolution kernels is Q1 and the kernel size is 7*7:
(4a) The weights of the convolution kernels are configured by random initialization as near-zero numbers in [-0.5, 0.5];
(4b) Each convolution kernel convolves the facial expression image, yielding Q1 feature maps; the feature map of each kernel has size 90*90;
(5) The second layer of the network is a pooling layer, which takes the Q1 feature maps from the previous convolutional layer as input and performs pooling: the maximum is selected within non-overlapping 2*2 regions, yielding the Q1 feature maps of this pooling layer; after pooling, each feature map has size 45*45;
(6) The third layer of the network is a convolutional layer, which takes the Q1 feature maps from the previous pooling layer as input and performs a convolution operation; the number of kernels is Q2 and the kernel size is 6*6:
(6a) The weights of the convolution kernels are configured by random initialization as near-zero numbers in [-0.5, 0.5];
(6b) Each convolution kernel convolves the Q1 feature maps; the Q1 convolution results are averaged, combined with the bias matrix, and filtered by the activation function to obtain the feature map of that kernel; the feature map of each kernel has size 40*40;
(7) The fourth layer of the network is a pooling layer, which takes the Q2 feature maps from the previous convolutional layer as input and performs pooling: the maximum is selected within non-overlapping 2*2 regions, yielding the Q2 feature maps of this pooling layer; after pooling, each feature map has size 20*20;
(8) The fifth layer of the network is a convolutional layer, which takes the Q2 feature maps from the previous pooling layer as input and performs a convolution operation; the number of kernels is Q3 and the kernel size is 5*5:
(8a) The weights of the convolution kernels are configured by random initialization as near-zero numbers in [-0.5, 0.5];
(8b) Each convolution kernel convolves the Q2 feature maps; the Q2 convolution results are averaged, combined with the bias matrix, and filtered by the activation function to obtain the feature map of that kernel; each feature map has size 16*16;
(9) The sixth layer of the network is a wavelet pooling layer, which takes the Q3 feature maps from the previous convolutional layer as input and performs a one-level wavelet decomposition: the wavelet basis function is the "haar" function; for each feature map, one 8*8 low-frequency sub-band and three 8*8 high-frequency sub-bands are obtained, and the three high-frequency sub-bands are fused into a new high-frequency sub-band by taking the maximum at corresponding positions;
(10) The seventh layer of the network is a fully connected layer, which takes the Q3 8*8 low-frequency sub-bands and Q3 8*8 high-frequency sub-bands from the wavelet pooling layer as input and forms a 128-dimensional fully connected feature vector;
(11) Taking n randomly selected facial expression images as a batch, repeat steps (3) to (10) to obtain the 128-dimensional feature vector of each of the n images;
(12) The eighth layer of the network is a Softmax output layer; the n 128-dimensional feature vectors are taken as input to train a Softmax classifier whose output is a probability distribution over 7 classes, yielding classification labels;
(13) The classification labels of the Softmax output layer are compared with the true labels to compute the error, and the weight matrices are updated once according to the BP back-propagation algorithm;
(14) Repeat training steps (3) to (13) until the weight matrices have been updated m times, obtaining the trained deep convolutional wavelet neural network;
(15) Feed the facial expression image test set into the trained deep convolutional wavelet neural network to obtain the classification label z1 at the output layer; then feed the expression-sensitive region image set corresponding to the test set into the trained network to obtain the classification label z2 at the output layer; the two labels are combined into the final classification label as z3 = z1 + λ*z2, where λ denotes the weighting proportion of the auxiliary task;
(16) According to the classification label z3 of the test set, output the facial expression recognition accuracy, completing the auxiliary-task-based deep convolutional wavelet neural network facial expression recognition.
The present invention uses an auxiliary-task deep convolutional wavelet neural network to learn expression features without performing feature selection first. It both learns the abstract and local detail information of facial expressions better and increases the influence of expression-sensitive regions on the network's expression feature extraction, thereby significantly improving the accuracy of facial expression recognition.
Compared with the prior art, the present invention has the following advantages:
First, the present invention takes into account the discriminative power of expression-sensitive regions when a deep convolutional neural network learns expression features. It first trains a main-task expression-learning DCNN to obtain a shared feature weight matrix; the local eye-eyebrow pose and mouth pose patches of the expression-sensitive regions are then fused and estimated as an auxiliary branch task, whose classification result is obtained by mapping through the shared feature weight matrix. Finally, the classification result estimated by the auxiliary task is used to refine the classification of the main task, improving the generalization ability of the deep convolutional network in expression recognition;
Second, the present invention avoids two shortcomings of the pooling layer in ordinary convolutional neural networks: the simple down-sampling operation loses part of the features learned by the preceding convolutional layer, and the output of the fully connected layer contains only abstract information while lacking many shallow local features. By combining multi-scale wavelet transformation with the deep convolutional neural network framework, the network on the one hand ensures that the features learned by the convolutional layers are passed through the pooling layer without loss of characterization, and on the other hand extends the fully connected layer with local expression features obtained from shallow learning, so that the whole network structure describes expression features better and the recognition result is significantly improved.
Brief description of the drawings
Fig. 1 shows some images of the original database used in the present invention;
Fig. 2 is the flowchart of the present invention;
Fig. 3 shows the network structure of the present invention, where Fig. 3(a) is the structure of the deep convolutional wavelet neural network of the invention and Fig. 3(b) is the structure of the auxiliary-task deep convolutional wavelet neural network of the invention;
Fig. 4 shows some expression-sensitive region images of the present invention.
Detailed description of the embodiments
The present invention is described in detail below with reference to the drawings:
Embodiment 1
Facial expression recognition is an indispensable part of machine learning research and has very broad application value in today's ever-expanding human-computer interaction: real-time automatic recognition of facial expressions in human-machine interfaces such as mobile terminals and personal computers, and, in some scenarios, retrieval, tracking, and recognition of facial expressions in video. Breakthroughs in facial expression recognition methods are also of great reference significance to intelligent computing and brain-inspired research.
Existing expression recognition techniques mainly perform feature selection first and then classify. In the feature selection step, however, existing feature selection operators cannot learn expression features efficiently, so the subsequent classification does not achieve good results. In addition, methods that use a deep network as the classifier also do not avoid feature selection, so the improvement in classification performance is limited.
In view of this situation, the present invention has carried out research and proposes a deep convolutional wavelet neural network facial expression recognition method based on auxiliary tasks. Referring to Fig. 2, the present invention realizes facial expression recognition through the following steps:
(1) Build a deep convolutional wavelet network consisting of three convolutional layers, two pooling layers, one multi-scale transform layer, one fully connected layer, and one softmax output layer; the bias weight matrices of the convolutional layers are initialized to zero matrices, and the Sigmoid function is chosen as the network activation function. From input to output, the deep convolutional wavelet neural network built by the present invention is: input layer, first convolutional layer, first pooling layer, second convolutional layer, second pooling layer, third convolutional layer, multi-scale transform layer, fully connected layer, and softmax output layer; the multi-scale transform layer is a wavelet pooling layer, and the whole forms the deep convolutional wavelet neural network.
(2) Establish a facial expression image set and an expression-sensitive region image set; the expression-sensitive region image set is obtained by cropping the eye-eyebrow and mouth regions from the facial expression image set. Part of the facial expression image data set is used as the training image set of the network, and the remaining images are used as the test image set. For example, the facial expression image data set in this example has 20000 samples, of which 15000 images are used as the training image set and the remaining 5000 images as the test image set; the images of the expression-sensitive region image set correspond one-to-one with the facial expression image data set.
(3) Input a training image into the deep convolutional wavelet network; the size of the input image is 96*96. In this example the training images are fed into the network directly, without any other image preprocessing such as removal of complex backgrounds or illumination effects, which simplifies the procedure of image recognition.
(4) The first layer of the deep convolutional wavelet network is a convolutional layer, which performs a convolution operation on each input facial expression training image; the number of convolution kernels is Q1 and the kernel size is 7*7. The number of convolution kernels is chosen according to the computing environment and the hardware and software conditions; in this example Q1 is taken as 4.
(4a) The weights of the convolution kernels are configured by random initialization as near-zero numbers in [-0.5, 0.5]. In the present invention the initial kernel weights are near zero in order to accelerate the convergence of the network.
(4b) Each convolution kernel convolves the facial expression image, yielding Q1 feature maps; the feature map of each kernel has size 90*90. In the present invention the feature map size is determined by the kernel size.
(4c) The bias weight matrix of the convolutional layer is initially set to a zero matrix; in this example the bias weight matrix is a one-dimensional vector whose dimension equals the number of convolution kernels Q1.
(4d) The Sigmoid function is chosen as the network activation function. The Sigmoid formula used in the present invention is:
f(x) = 1 / (1 + e^(-x))
where f(x) is the activation value, x is the input of the activation function (in the network, x represents the convolution result of the convolutional layer plus the bias weight), and e is the base of the natural logarithm.
(5) The second layer of the network is a pooling layer, which takes the Q1 feature maps obtained by the previous convolutional layer (the first convolutional layer) as input and performs pooling: the maximum is selected within non-overlapping 2*2 regions, yielding the Q1 feature maps of this pooling layer; after pooling, each feature map has size 45*45.
(6) The third layer of the network is a convolutional layer, which takes the Q1 feature maps from the previous pooling layer as input and performs a convolution operation; the number of kernels is Q2 and the kernel size is 6*6. In this example Q2 is taken as 6.
(6a) The weights of the convolution kernels are configured by random initialization as near-zero numbers in [-0.5, 0.5];
(6b) Each convolution kernel convolves the Q1 feature maps; the Q1 convolution results are averaged, combined with the bias matrix, and filtered by the activation function to obtain the feature map of that kernel; the feature map of each kernel has size 40*40;
(6c) The bias weight matrix of the convolutional layer is initially set to a zero matrix. In this example the bias weight matrix is a one-dimensional vector whose dimension equals the number of convolution kernels Q2.
(6d) The Sigmoid function is chosen as the network activation function.
(7) The fourth layer of the network is a pooling layer, which takes the Q2 feature maps from the previous convolutional layer as input and performs pooling: the maximum is selected within non-overlapping 2*2 regions, yielding the Q2 feature maps of this pooling layer; after pooling, each feature map has size 20*20;
(8) The fifth layer of the network is a convolutional layer, which takes the Q2 feature maps from the previous pooling layer as input and performs a convolution operation; the number of kernels is Q3 and the kernel size is 5*5. In this example Q3 is taken as 12.
(8a) The weights of the convolution kernels are configured by random initialization as near-zero numbers in [-0.5, 0.5];
(8b) Each convolution kernel convolves the Q2 feature maps; the Q2 convolution results are averaged, combined with the bias matrix, and filtered by the activation function to obtain the feature map of that kernel; each feature map has size 16*16;
(8c) The bias weight matrix of the convolutional layer is initially set to a zero matrix;
(8d) The Sigmoid function is chosen as the network activation function.
(9) The sixth layer of the network is a wavelet pooling layer, which takes the Q3 feature maps from the previous convolutional layer as input and performs a one-level wavelet decomposition: the wavelet basis function is the "haar" function; for each feature map, one 8*8 low-frequency sub-band and three 8*8 high-frequency sub-bands are obtained, and the three high-frequency sub-bands are fused into a new high-frequency sub-band by taking the maximum at corresponding positions.
(10) The seventh layer of the network is a fully connected layer, which takes the Q3 8*8 low-frequency sub-bands and Q3 8*8 high-frequency sub-bands from the wavelet pooling layer as input and forms a 128-dimensional fully connected feature vector.
(11) Taking n randomly selected facial expression images as a batch, repeat steps (3) to (10) to obtain the 128-dimensional feature vector of each of the n images.
(12) The eighth layer of the network is a Softmax output layer; the n 128-dimensional feature vectors are taken as input to train a Softmax classifier whose output is a probability distribution over 7 classes, yielding classification labels.
(13) The classification labels of the Softmax output layer are compared with the true labels to compute the error, and the weight matrices are updated once according to the BP back-propagation algorithm. The weight matrices updated in this example include the values of the convolution kernels and the values of the bias weight vectors.
(14) Repeat training steps (3) to (13) until the weight matrices have been updated m times. In the present invention m is the number of updates, determined by the scale of the image data and the convergence speed of the network; this yields the trained deep convolutional wavelet neural network.
(15) Feed the facial expression image test set into the trained deep convolutional wavelet neural network to obtain the classification label z1 at the output layer; then feed the expression-sensitive region image set corresponding to the test set into the trained network to obtain the classification label z2 at the output layer; the two labels are combined into the final classification label as z3 = z1 + λ*z2, where λ denotes the weighting proportion of the auxiliary task.
(16) According to the classification label z3 of the test set, output the facial expression recognition accuracy, completing the auxiliary-task-based deep convolutional wavelet neural network facial expression recognition.
The present invention takes into account the discriminative power of expression-sensitive regions when a deep convolutional neural network learns expression features. It first trains a main-task DCNN to obtain a shared feature weight matrix; the local eye-eyebrow pose and mouth pose patches of the expression-sensitive regions are then fused and estimated as an auxiliary branch task, whose classification result is obtained by mapping through the shared feature weight matrix. Finally, the classification result estimated by the auxiliary task is used to refine the classification of the main task, improving the generalization ability of the deep convolutional network in expression recognition.
Embodiment 2
The deep convolutional wavelet neural network facial expression recognition method based on auxiliary tasks is the same as in Embodiment 1. The facial expression image set and the expression-sensitive region image set described in step (2) are established as follows:
2.1 The facial expression image set is obtained as follows:
An appropriate number of labeled original images are randomly selected from the JAFFE facial expression image library. The JAFFE library used by the present invention is shown in Fig. 1; it contains 213 images covering seven expression classes: anger, sadness, happiness, neutral, disgust, surprise, and fear. The original image size is 256*256. Referring to Fig. 1, some images of different expressions of four subjects are shown: the first row shows angry expressions, the second row disgusted expressions, the third row fearful expressions, the fourth row happy expressions, and the fifth row neutral expressions. The original images are augmented by flipping, rotation, and sliding-window cropping: each image is first flipped, then rotated by several small angles, and finally cropped by a window slid up and down with the image center as the base point. The present invention then performs face region detection on the augmented images using Haar features combined with the AdaBoost algorithm and scales the facial expression images, finally obtaining a facial expression image set on the order of tens of thousands of samples.
2.2 The expression-sensitive region image set is obtained as follows:
Expression-sensitive regions are the regions of the face that are sensitive to expression, including the eye-eyebrow regions and the mouth region. The facial expression image set obtained in step 2.1 is cropped with suitable crop boxes to obtain the left and right eyebrow-eye image blocks and one mouth image block; the three image blocks are stitched together to form one expression-sensitive region image, finally yielding an expression-sensitive region image set with the same tens of thousands of samples. Referring to Fig. 4, the sensitive region images of the seven expressions of one subject are shown.
2.3 The label files of the facial expression image set are made according to the original labels of the JAFFE facial expression image library. The label of a single image is a 1*k binary vector, where the k dimensions represent k expression classes, k = 2, 3, 4, 5, 6, ...; the value of k is determined by the actual expression classification problem. The dimension of the label vector whose value is 1 indicates the expression class the image belongs to, and the other dimensions are 0. For example, if the first dimension represents the happy class among 5 expression classes, then the label vector of a happy image is [1, 0, 0, 0, 0]. Since the facial expression image data set and the sensitive region image data set correspond to each other, they can share the same label files.
Embodiment 3
The deep convolutional wavelet neural network facial expression recognition method based on auxiliary tasks is the same as in Embodiments 1-2. The wavelet pooling layer described in step (9) obtains the low-frequency and high-frequency sub-bands; referring to Fig. 3(a), the traditional down-sampling pooling layer is transformed into a wavelet pooling layer, which on the one hand avoids the information loss caused by simple down-sampling and on the other hand retains the high-frequency information, strengthening the local information of the expression features. It proceeds as follows:
9.1 A one-level down-sampling wavelet decomposition is performed on the feature maps obtained by the previous convolutional layer; the chosen wavelet basis function is the Haar function. Each feature map is decomposed by the one-level down-sampling wavelet decomposition into one low-frequency sub-band and three high-frequency sub-bands: one in the horizontal direction, one in the vertical direction, and one covering both the horizontal and vertical directions. In the present invention the number of decomposition levels can be determined according to the size requirements of the network in practical applications.
9.2 The three high-frequency sub-bands are fused into a new high-frequency sub-band according to the following formula:
x_WH = Maxf(0, x_HH, x_HL, x_LH)
where x_HH, x_HL, x_LH denote the three high-frequency sub-bands obtained by the one-level wavelet decomposition, x_WH denotes the fused high-frequency sub-band, and the function Maxf(A, B, ...) takes the larger value at corresponding positions of its matrix arguments;
9.3 using the high-frequency sub-band after the low frequency sub-band of acquisition and fusion as the full articulamentum of next layer input.
The wavelet pooling layer of the present invention avoids the drawback of the pooling layer in ordinary convolutional neural networks, whose plain down-sampling operation loses information: the low-frequency subband of the wavelet transform, which loses little information, replaces the pooling result, and the high-frequency subband containing the detail information is fed into the fully connected layer as well, so the feature vector of the fully connected layer is extended through multiple channels and its discriminability is enhanced.
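The wavelet pooling of steps 9.1-9.2 can be sketched with a one-level 2D Haar decomposition in plain NumPy; the subband arithmetic below is the standard Haar filter bank, and the fusion follows x_WH = Maxf(0, x_HH, x_HL, x_LH):

```python
import numpy as np

def haar_pool(fm):
    """One-level Haar decomposition of a feature map (height and width even),
    returning the low-frequency subband LL and the fused high-frequency
    subband x_WH = max(0, x_HH, x_HL, x_LH) taken elementwise (step 9.2)."""
    a = fm[0::2, 0::2]; b = fm[0::2, 1::2]
    c = fm[1::2, 0::2]; d = fm[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # low-frequency subband (approximation)
    lh = (a - b + c - d) / 2.0   # horizontal-direction detail
    hl = (a + b - c - d) / 2.0   # vertical-direction detail
    hh = (a - b - c + d) / 2.0   # diagonal detail (both directions)
    wh = np.maximum(0, np.maximum(hh, np.maximum(hl, lh)))
    return ll, wh

fm = np.random.randn(16, 16)    # a 16x16 feature map halves to 8x8 subbands
ll, wh = haar_pool(fm)
print(ll.shape, wh.shape)  # (8, 8) (8, 8)
```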
Embodiment 4
The auxiliary-task-based deep convolutional wavelet neural network expression recognition method is the same as in Embodiments 1-3. The fully connected layer feature vector described in step (10) is built as follows:
10.1 Compute the low-frequency subband matrix according to:
x_L = Maxf(0, W_1·x_LL1 + W_2·x_LL2 + W_3·x_LL3 + ... + W_n·x_LLn)
where x_L is the global low-frequency subband matrix, x_LLn is the low-frequency subband of the one-level wavelet decomposition of each feature map, and W_n is the superposition weight of each feature map's low-frequency subband. In the present invention the superposition weights W_n can be set empirically or determined by some other learning scheme.
10.2 Compute the high-frequency subband matrix according to:
x_H = Maxf(0, x_WH1, x_WH2, ..., x_WHn)
where x_H is the global high-frequency subband matrix and x_WHn is the new high-frequency subband obtained by fusing the three high-frequency subbands of each feature map's one-level wavelet decomposition;
10.3 Stretch the global low-frequency subband x_L and the global high-frequency subband x_H row by row into 1×v vectors and concatenate them head to tail to obtain the feature vector of the fully connected layer, of size 1×2v, where v is the product of the length and width of the subband matrix. In this example the feature vector of the fully connected layer, obtained by row-stretching and head-to-tail splicing of the low- and high-frequency subbands, is 1×128-dimensional; the row-stretched low-frequency and high-frequency subband vectors are each 1×64-dimensional.
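Steps 10.1-10.3 can be sketched as follows; the equal superposition weights W_n are an assumption here, since the patent leaves them to empirical values or a learning scheme:

```python
import numpy as np

def fc_feature(lows, highs, weights=None):
    """Sketch of steps 10.1-10.3: fuse the per-map 8x8 subbands into global
    low- and high-frequency matrices, stretch each row by row into a 1x64
    vector, and concatenate head to tail into the 1x128 FC feature vector.
    Equal weights W_n are an illustrative assumption."""
    n = len(lows)
    if weights is None:
        weights = np.ones(n) / n
    x_l = np.maximum(0, sum(w * x for w, x in zip(weights, lows)))  # step 10.1
    x_h = np.maximum(0, np.max(np.stack(highs), axis=0))            # step 10.2
    return np.concatenate([x_l.ravel(), x_h.ravel()])               # step 10.3

lows  = [np.random.randn(8, 8) for _ in range(12)]   # 12 LL subbands
highs = [np.abs(np.random.randn(8, 8)) for _ in range(12)]  # 12 fused WH subbands
print(fc_feature(lows, highs).shape)  # (128,)
```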
Embodiment 5
The auxiliary-task-based deep convolutional wavelet neural network expression recognition method is the same as in Embodiments 1-4. The auxiliary-task weighting proportion λ described in step (15), referring to Fig. 3(b), is learned in the network from the sensitive-region image set by adding an auxiliary-task correction to the trained deep convolutional wavelet neural network, as follows:
15.1 Initialize λ = 0 and randomly select M facial expression images and the corresponding sensitive-region images as the learning samples for the weight λ;
15.2 For the trained deep convolutional wavelet neural network, feed the learning samples into the network and obtain classification labels according to:
z3 = z1 + λ·z2
where z1 is the output label of the facial expression image through the network, z2 is the output label of the corresponding sensitive-region image through the network, and z3 is the global label of the network;
15.3 According to the magnitude of the error between the global label z3 and the true label, update λ according to λ = λ + Δλ, with Δλ = 0.05, and record for each learning sample the λ value giving the minimum label error. In this example the error between the global label and the true label is the numerical difference in the dimension of the expression class to be determined.
15.4 Take the expected value of the λ values corresponding to the minimum label errors of the M learning samples; this expected value serves as the global auxiliary-task weighting proportion λ. In this example the expected value is computed by direct averaging.
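A minimal sketch of the λ search in steps 15.1-15.4, assuming the per-sample error is measured as the numerical gap in the true-class dimension as described above; the search range `max_lambda` is an illustrative assumption:

```python
import numpy as np

def learn_lambda(z1_list, z2_list, true_idx_list, step=0.05, max_lambda=2.0):
    """Sweep lambda from 0 in increments of 0.05 (steps 15.1-15.3), record
    for each learning sample the lambda minimizing the error between the
    combined label z3 = z1 + lambda * z2 and the true label, and return the
    average of those per-sample optima as the global weighting proportion
    (step 15.4). The error is the gap to 1.0 in the true-class dimension."""
    grid = np.arange(0.0, max_lambda + step, step)
    best = []
    for z1, z2, t in zip(z1_list, z2_list, true_idx_list):
        errs = [abs(1.0 - (z1[t] + lam * z2[t])) for lam in grid]
        best.append(grid[int(np.argmin(errs))])
    return float(np.mean(best))
```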
A more detailed example is given below to further describe the present invention.
Embodiment 6
The auxiliary-task-based deep convolutional wavelet neural network expression recognition method is the same as in Embodiments 1-5. Referring to Fig. 3, the concrete steps of the present invention are as follows:
Step 1: Building the facial expression image set
From the JAFFE expression database of 213 images, 200 labelled original images are randomly selected; as shown in Fig. 1, the original images selected in the present invention are 256×256. The 200 originals are first doubled by horizontal flipping into 400 images, and then expanded tenfold to 4000 images by rotating each image left and right by 1, 2, 3, 4 and 5 degrees. Finally, a 128×128 rectangular frame, anchored at the image centre and slid up and down in 5-pixel steps, crops the images; face regions are then detected with Haar features combined with the Adaboost algorithm and scaled to 96×96 experimental face images, finally giving a facial expression image set of 40000 samples. A corresponding label file is prepared: the label of a single image is a 1×7 binary vector, in which the dimension with value 1 indicates the expression class the image belongs to, and all other dimensions are 0.
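The augmentation chain of Step 1 (flip, then ±1..5 degree rotations) can be sketched as below; the nearest-neighbour rotation and the omission of the sliding-crop and Adaboost face-detection stages are simplifying assumptions, and the exact image counting of the patent (200 → 400 → 4000) may group the variants differently:

```python
import numpy as np

def rotate_nn(img, deg):
    """Rotate an image by a small angle with nearest-neighbour sampling;
    a self-contained stand-in for a library rotation routine."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    t = np.deg2rad(deg)
    ys, xs = np.mgrid[0:h, 0:w]
    sy = cy + (ys - cy) * np.cos(t) - (xs - cx) * np.sin(t)
    sx = cx + (ys - cy) * np.sin(t) + (xs - cx) * np.cos(t)
    sy = np.clip(np.rint(sy).astype(int), 0, h - 1)
    sx = np.clip(np.rint(sx).astype(int), 0, w - 1)
    return img[sy, sx]

def augment(img, angles=(1, 2, 3, 4, 5)):
    """Original + horizontal flip, then +/-1..5 degree rotations of both."""
    out = [img, np.fliplr(img)]
    rotated = []
    for base in out:
        for a in angles:
            rotated.append(rotate_nn(base, a))
            rotated.append(rotate_nn(base, -a))
    return out + rotated
```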
Step 2: Building the expression-sensitive-region image set
The expression-sensitive images in the present invention cover the regions of the face most sensitive to expression, namely the eye-eyebrow regions and the mouth region, as shown in Fig. 4. The face-region images obtained in Step 1 are cropped: a 48×48 crop box extracts the two eyebrow-eye image blocks and a 48×96 crop box extracts the mouth image block; the three image blocks are spliced into one expression-sensitive-region image, finally giving an expression-sensitive-region image set of 40000 samples. The label file can be shared with Step 1.
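The splicing of Step 2 can be sketched as follows; the exact crop coordinates are illustrative assumptions, and only the block sizes (two 48×48 eye blocks, one 48×96 mouth block) come from the text:

```python
import numpy as np

def sensitive_region(face):
    """Crop two 48x48 eye/eyebrow blocks and one 48x96 mouth block from a
    96x96 face image and splice them into one 96x96 sensitive-region image
    (eyes side by side on top, mouth below). Crop coordinates are assumed."""
    left_eye  = face[12:60,  0:48]   # 48x48 (assumed position)
    right_eye = face[12:60, 48:96]   # 48x48 (assumed position)
    mouth     = face[48:96,  0:96]   # 48x96 (assumed position)
    top = np.hstack([left_eye, right_eye])  # 48x96
    return np.vstack([top, mouth])          # 96x96

print(sensitive_region(np.zeros((96, 96))).shape)  # (96, 96)
```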
Step 3: Network training
(1) Build a deep network consisting of three convolutional layers, two pooling layers, one multi-scale transform layer, one fully connected layer and one softmax output layer;
(2) Input a facial expression image, of size 96×96, into the deep network;
(3) The first layer of the network is a convolutional layer that convolves each original expression image, with 6 convolution kernels of size 7×7:
(3a) The kernel weights are randomly initialized to near-zero values in [-0.5, 0.5];
(3b) Each kernel convolves the facial expression image, yielding 6 feature maps of size 90×90;
(3c) The bias weight matrix of the convolutional layer is initialized to the zero matrix;
(3d) The activation function of the network is the Sigmoid function;
(4) The second layer of the network is a pooling layer that takes the 6 feature maps of the preceding convolutional layer as input and pools them: the pooling method selects the maximum over non-overlapping 2×2 regions, yielding 6 feature maps of size 45×45;
(5) The third layer of the network is a convolutional layer that takes the 6 feature maps of the preceding pooling layer as input, with 12 convolution kernels of size 6×6:
(5a) The kernel weights are randomly initialized to near-zero values in [-0.5, 0.5];
(5b) Each kernel convolves the 6 feature maps; the results of the 6 convolutions are averaged with the bias matrix after activation-function filtering to give that kernel's feature map, of size 40×40;
(5c) The bias weight matrix of the convolutional layer is initialized to the zero matrix;
(5d) The activation function of the network is the Sigmoid function.
(6) The fourth layer of the network is a pooling layer that takes the 12 feature maps of the preceding convolutional layer as input and pools them: maximum selection over non-overlapping 2×2 regions yields 12 feature maps of size 20×20.
(7) The fifth layer of the network is a convolutional layer that takes the 12 feature maps of the preceding pooling layer as input, with 12 convolution kernels of size 5×5:
(7a) The kernel weights are randomly initialized to near-zero values in [-0.5, 0.5];
(7b) Each kernel convolves the 12 feature maps; the results are averaged with the bias matrix after activation-function filtering to give that kernel's feature map, of size 16×16;
(7c) The bias weight matrix of the convolutional layer is initialized to the zero matrix;
(7d) The activation function of the network is the Sigmoid function.
(8) The sixth layer of the network is the wavelet pooling layer, which takes the 12 feature maps of the preceding convolutional layer as input and performs a one-level wavelet decomposition: with the Haar wavelet basis, each feature map yields one 8×8 low-frequency subband and three 8×8 high-frequency subbands; the three high-frequency subbands are fused into one new high-frequency subband by taking the maximum at each corresponding position.
(9) The seventh layer of the network is the fully connected layer, which takes the 12 8×8 low-frequency subbands and 12 8×8 high-frequency subbands from the wavelet transform layer as input and forms a 128-dimensional fully connected feature vector. In the present invention, the fully connected layer first takes the positionwise maximum over the 12 8×8 low-frequency subbands and stretches the result row by row into a 1×64 vector; the high-frequency subbands are treated the same way to give another 1×64 vector; the two vectors are joined head to tail, low-frequency vector first, to obtain the 1×128 global vector.
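The layer sizes quoted in steps (3)-(9) are mutually consistent under "valid" convolutions and non-overlapping 2×2 pooling; a quick check:

```python
# Shape trace of the network in Step 3 (input 96x96), assuming 'valid'
# convolutions (output = input - kernel + 1) and 2x2 non-overlapping pooling.
def shape_trace():
    s = 96
    trace = []
    s = s - 7 + 1;  trace.append(("conv1 6@7x7",  s))   # 90
    s = s // 2;     trace.append(("pool1 2x2",    s))   # 45
    s = s - 6 + 1;  trace.append(("conv2 12@6x6", s))   # 40
    s = s // 2;     trace.append(("pool2 2x2",    s))   # 20
    s = s - 5 + 1;  trace.append(("conv3 12@5x5", s))   # 16
    s = s // 2;     trace.append(("wavelet pool", s))   # 8
    # FC: 12 LL(8x8) -> 1x64 and 12 WH(8x8) -> 1x64, concatenated to 1x128
    trace.append(("fc concat", 8 * 8 + 8 * 8))
    return trace

for name, size in shape_trace():
    print(name, size)
```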
(10) Taking 50 randomly selected expression images at a time, repeat steps (2) to (9) to obtain the 128-dimensional feature vector of each of the 50 images.
(11) The eighth layer of the network is the Softmax output layer: with the 50 obtained 128-dimensional feature vectors as input, train a Softmax classifier whose output is a probability distribution over the 7 classes, obtaining the classification labels;
(12) Compute the error between the classification labels of the Softmax output layer and the true labels and, according to the BP back-propagation algorithm, update the convolution kernels of each layer and the bias weight vectors. In the present invention the learning step for the weight updates of the deep convolutional wavelet neural network is set to 0.05.
(13) Repeat training steps (2) to (12) until the weight matrices have been updated 200 times. In the network training of the invention, the number of weight updates can be set according to the convergence speed of the network.
Step 4: Auxiliary-task learning
Feed the facial expression test data set into the trained network above to obtain the classification labels z1, then feed the corresponding expression-sensitive regions of the test data set into the trained network to obtain the classification labels z2; the final classification label is obtained as z3 = z1 + 0.65·z2, and z3 is then computed over the whole test data set.
Step 5: Counting the recognition results
The accuracy of correct recognition is computed from the z3 of Step 4.
The present invention avoids two shortcomings of the pooling layer in ordinary convolutional neural networks: the plain down-sampling operation loses part of the features learned by the preceding convolutional layer, and the output of the fully connected layer contains abstract information but lacks many shallow local features. By combining the multi-scale wavelet transform with the deep convolutional network architecture, this network on the one hand ensures that the features learned by the convolutional layers are transmitted intact through the pooling layer, and on the other hand extends, in the fully connected layer, the local expression features obtained in shallow learning, so that the whole network structure describes the expression features better and the recognition results improve markedly.
The technical effect of the present invention is verified and illustrated below with simulation results:
Embodiment 7
The auxiliary-task-based deep convolutional wavelet neural network expression recognition method is the same as in Embodiments 1-6. The effect of the present invention is further analysed with the recognition-result comparison in Table 1.
Simulation conditions
The hardware test platform of the present invention is: an Intel Core i3 CPU at 3.20 GHz with 4 GB of memory; the software platform is the Windows 7 Ultimate 64-bit operating system and Matlab R2013b. The input images of the network of the invention are all of size 96×96 in TIFF format.
Simulation content
The simulation content of the present invention includes: simulation experiments and recognition-result statistics for existing facial expression recognition techniques; a simulation experiment and recognition-result statistics using only a six-layer deep convolutional neural network, without the additional wavelet pooling layer and auxiliary-task learning; an experimental simulation and recognition-result statistics of the complete auxiliary-task-based deep convolutional wavelet neural network expression recognition method proposed by the present invention; and the comparison and analysis of the simulation results.
Analysis of simulation results
Table 1 compares the recognition effect of the method of the invention with existing facial expression recognition techniques. As Table 1 shows, in the method of Shan C and Jabid T the image is divided into several sub-regions and each sub-region is multiplied by a weight according to the magnitude of its contribution to the expression, the weight representing that region's capacity to characterise the expression. Taskeed et al. initialize the weights with a χ² distribution in the new local facial descriptor they propose based on the Local Directional Pattern (LDP) and then use an LDP+SVM framework, obtaining an average recognition rate of 85.4%. Shishir et al. use an algorithm combining Gabor features with Learning Vector Quantization (LVQ), applying Gabor filtering to 34 image benchmark points on an image-conversion interface, with a resulting recognition rate of 87.51%. Nectarios et al. propose algorithms combining Gabor and Log-Gabor filter convolutions to obtain feature vectors, achieving recognition rates of 86.1% and 85.72%. The work of Lv Yadan, Feng Zhiyong et al. uses FP with a deep auto-encoder network and obtains a recognition rate of 90.47%. Using only a simple six-layer deep convolutional neural network to learn the expression features, without the additional wavelet pooling layer and auxiliary-task learning, and training one softmax classifier, the present invention obtains a recognition rate of 90.56%; using the complete auxiliary-task-based deep convolutional wavelet neural network expression recognition method provided by the present invention, the recognition accuracy reaches 92.91%.
Table 1. Comparison of the recognition effect of the invention with existing facial expression recognition methods
It can also be seen from Table 1 that the method of the present invention balances the local and global expression features of the facial expression image well, and strengthens the influence of the expression-sensitive regions on expression recognition through the auxiliary task, thereby improving the facial expression recognition rate.
In brief, the auxiliary-task-based deep convolutional wavelet neural network expression recognition method disclosed by the invention solves the problems of existing expression recognition techniques, in which feature-selection operators cannot learn expression features efficiently and cannot extract classification features containing more image expression information. The implementation steps of the present invention are: build the deep convolutional wavelet neural network; build the facial expression image set and the expression-sensitive-region image set; input the facial expression images into the network; train the deep convolutional wavelet neural network; back-propagate the network error; update the parameter set of the deep convolutional wavelet neural network, i.e. each convolution kernel and bias vector of the network; input the expression-sensitive-region images into the trained network; learn the weighting proportion of the auxiliary task; obtain the global classification label of the network from the weighting proportion; and count the recognition accuracy from the global labels. The present invention balances the abstract and detailed information of expression images and strengthens the influence of the expression-sensitive regions in expression-feature learning, markedly improving the accuracy of expression recognition; it can be applied to the expression recognition of facial expression images.
Claims (5)
1. An auxiliary-task-based deep convolutional wavelet neural network expression recognition method, characterised by comprising the following steps:
(1) Build a deep convolutional wavelet network consisting of three convolutional layers, two pooling layers, one multi-scale transform layer, one fully connected layer and one softmax output layer; the bias weight matrices of the network's convolutional layers are initialized to zero matrices, and the activation function of the network is the Sigmoid function;
(2) Build the facial expression image set and the expression-sensitive-region image set, the latter obtained by cropping the eye and mouth regions from the facial expression image set; part of the facial expression image data set serves as the network training image set, and the remaining images serve as the test image set;
(3) Input a training image, of size 96×96, into the deep convolutional wavelet network;
(4) The first layer of the deep convolutional wavelet network is a convolutional layer that convolves each input facial expression training image, with Q1 convolution kernels of size 7×7:
(4a) The kernel weights are randomly initialized to near-zero values in [-0.5, 0.5];
(4b) Each kernel convolves the facial expression image, yielding Q1 feature maps of size 90×90;
(5) The second layer of the network is a pooling layer that takes the Q1 feature maps of the preceding convolutional layer as input and pools them: maximum selection over non-overlapping 2×2 regions yields Q1 feature maps of size 45×45;
(6) The third layer of the network is a convolutional layer that takes the Q1 feature maps of the preceding pooling layer as input, with Q2 convolution kernels of size 6×6:
(6a) The kernel weights are randomly initialized to near-zero values in [-0.5, 0.5];
(6b) Each kernel convolves the Q1 feature maps; the results of the Q1 convolutions are averaged with the bias matrix after activation-function filtering to give that kernel's feature map, of size 40×40;
(7) The fourth layer of the network is a pooling layer that takes the Q2 feature maps of the preceding convolutional layer as input and pools them: maximum selection over non-overlapping 2×2 regions yields Q2 feature maps of size 20×20;
(8) The fifth layer of the network is a convolutional layer that takes the Q2 feature maps of the preceding pooling layer as input, with Q3 convolution kernels of size 5×5:
(8a) The kernel weights are randomly initialized to near-zero values in [-0.5, 0.5];
(8b) Each kernel convolves the Q2 feature maps; the results are averaged with the bias matrix after activation-function filtering to give that kernel's feature map, of size 16×16;
(9) The sixth layer of the network is the wavelet pooling layer, which takes the Q3 feature maps of the preceding convolutional layer as input and performs a one-level wavelet decomposition: with the Haar wavelet basis, each feature map yields one 8×8 low-frequency subband and three 8×8 high-frequency subbands; the three high-frequency subbands are fused into one new high-frequency subband by taking the maximum at each corresponding position;
(10) The seventh layer of the network is the fully connected layer, which takes the Q3 8×8 low-frequency subbands and Q3 8×8 high-frequency subbands obtained by the sixth-layer wavelet pooling layer as input and forms a 128-dimensional fully connected feature vector;
(11) Taking n randomly selected facial expression images at a time, repeat steps (3) to (10) to obtain the 128-dimensional feature vector of each of the n images;
(12) The eighth layer of the network is the Softmax output layer: with the n obtained 128-dimensional feature vectors as input, train a Softmax classifier whose output is a probability distribution over 7 classes, obtaining the classification labels;
(13) Compute the error between the classification labels of the Softmax output layer and the true labels and, according to the BP back-propagation algorithm, update the weight matrices;
(14) Repeat training steps (3) to (13) until the weight matrices have been updated m times, obtaining the trained deep convolutional wavelet neural network;
(15) Feed the facial expression test set into the trained deep convolutional wavelet neural network to obtain the classification labels z1 at the output layer; then feed the corresponding expression-sensitive-region image set of the test data set into the trained deep convolutional wavelet neural network to obtain the classification labels z2 at the output layer; combine the two classification labels into the final classification label according to z3 = z1 + λ·z2, where λ is the weighting proportion of the auxiliary task;
(16) Output the facial expression recognition accuracy from the classification labels z3 of the test set, completing the auxiliary-task-based deep convolutional wavelet neural network expression recognition.
2. The auxiliary-task-based deep convolutional wavelet neural network expression recognition method according to claim 1, characterised in that the facial expression image set and the expression-sensitive-region image set of step (2) are built as follows:
2.1 The facial expression image set is obtained as follows:
Randomly select a suitable number of labelled original images from a facial expression database; extend the original images by flipping, rotation and sliding-frame selection of image blocks; detect face regions in the extended images with Haar features combined with the Adaboost algorithm and scale them to facial expression images of size 96×96, finally obtaining a facial expression image set on the order of ten thousand samples;
2.2 The expression-sensitive-region image set is obtained as follows:
The expression-sensitive regions are the regions of the face most sensitive to expression, namely the eye-eyebrow regions and the mouth region; crop the obtained facial expression image set with crop boxes to obtain the left and right eyebrow-eye image blocks and the mouth image block, and splice the three image blocks into one expression-sensitive-region image, finally obtaining an expression-sensitive-region image set with the same number of samples on the order of ten thousand;
2.3 Make the label file of the facial expression image set from the original labels of the expression database: the label of a single image is a 1×k binary vector, where k indicates that the image expressions are divided into k classes; the dimension of the label vector with value 1 indicates the expression class the image belongs to, and the values of the other dimensions are 0; the facial expression image data set and the sensitive-region image data set can share the label file.
3. The auxiliary-task-based deep convolutional wavelet neural network expression recognition method according to claim 1, characterised in that the wavelet pooling layer of step (9) obtains the low-frequency subband and high-frequency subband as follows:
9.1 Apply a one-level down-sampling wavelet decomposition to the feature maps obtained from the preceding convolutional layer, with the Haar function as the selected wavelet basis; each feature map yields one low-frequency subband and three high-frequency subbands through the one-level down-sampling wavelet decomposition;
9.2 Fuse the three high-frequency subbands into a new high-frequency subband according to:
x_WH = Maxf(0, x_HH, x_HL, x_LH)
where x_HH, x_HL and x_LH are the three high-frequency subbands obtained by the one-level wavelet decomposition, x_WH is the fused high-frequency subband, and the defined function Maxf(A, B) takes the larger value at each corresponding position of matrices A and B;
9.3 Take the obtained low-frequency subband and the fused high-frequency subband together as the input of the next fully connected layer.
4. The auxiliary-task-based deep convolutional wavelet neural network expression recognition method according to claim 1, characterised in that the fully connected layer feature vector of step (10) is built as follows:
10.1 Compute the low-frequency subband matrix according to:
x_L = Maxf(0, W_1·x_LL1 + W_2·x_LL2 + W_3·x_LL3 + ... + W_n·x_LLn)
where x_L is the global low-frequency subband matrix, x_LLn is the low-frequency subband of the one-level wavelet decomposition of each feature map, and W_n is the superposition weight of each feature map's low-frequency subband;
10.2 Compute the high-frequency subband matrix according to:
x_H = Maxf(0, x_WH1, x_WH2, ..., x_WHn)
where x_H is the global high-frequency subband matrix and x_WHn is the new high-frequency subband obtained by fusing the three high-frequency subbands of each feature map's one-level wavelet decomposition;
10.3 Stretch the global low-frequency subband x_L and the global high-frequency subband x_H row by row into 1×v vectors and concatenate them head to tail to obtain the feature vector of the fully connected layer, of size 1×2v.
5. The auxiliary-task-based deep convolutional wavelet neural network expression recognition method according to claim 1, characterised in that the auxiliary-task weighting proportion λ of step (15) is learned as follows:
15.1 Initialize λ = 0 and randomly select M facial expression images and the corresponding sensitive-region images as the learning samples for the weight λ;
15.2 For the trained deep convolutional wavelet neural network, feed the learning samples into the network and obtain classification labels according to:
z3 = z1 + λ·z2
where z1 is the output label of the facial expression image through the network, z2 is the output label of the corresponding sensitive-region image through the network, and z3 is the global label of the network;
15.3 According to the magnitude of the error between the global label z3 and the true label, update λ according to λ = λ + Δλ, with Δλ = 0.05, and record for each learning sample the λ value giving the minimum label error;
15.4 Take the expected value of the λ values corresponding to the minimum label errors of the M learning samples; this expected value serves as the subsequent auxiliary-task weighting proportion λ.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710446076.0A CN107292256B (en) | 2017-06-14 | 2017-06-14 | Auxiliary task-based deep convolution wavelet neural network expression recognition method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107292256A true CN107292256A (en) | 2017-10-24 |
CN107292256B CN107292256B (en) | 2019-12-24 |
Family
ID=60096459
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710446076.0A Active CN107292256B (en) | 2017-06-14 | 2017-06-14 | Auxiliary task-based deep convolution wavelet neural network expression recognition method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107292256B (en) |
Cited By (53)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107729872A (en) * | 2017-11-02 | 2018-02-23 | 北方工业大学 | Facial expression recognition method and device based on deep learning |
CN107977677A (en) * | 2017-11-27 | 2018-05-01 | 深圳市唯特视科技有限公司 | A kind of multi-tag pixel classifications method in the reconstruction applied to extensive city |
CN108021910A (en) * | 2018-01-04 | 2018-05-11 | 青岛农业大学 | The analysis method of Pseudocarps based on spectrum recognition and deep learning |
CN108038466A (en) * | 2017-12-26 | 2018-05-15 | 河海大学 | Multichannel human eye closure recognition methods based on convolutional neural networks |
CN108062416A (en) * | 2018-01-04 | 2018-05-22 | 百度在线网络技术(北京)有限公司 | For generating the method and apparatus of label on map |
CN108090513A (en) * | 2017-12-19 | 2018-05-29 | 天津科技大学 | Multi-biological characteristic blending algorithm based on particle cluster algorithm and typical correlation fractal dimension |
CN108122001A (en) * | 2017-12-13 | 2018-06-05 | 北京小米移动软件有限公司 | Image-recognizing method and device |
CN108171176A (en) * | 2017-12-29 | 2018-06-15 | 中车工业研究院有限公司 | A kind of subway driver's emotion identification method and device based on deep learning |
CN108229341A (en) * | 2017-12-15 | 2018-06-29 | 北京市商汤科技开发有限公司 | Sorting technique and device, electronic equipment, computer storage media, program |
CN108304788A (en) * | 2018-01-18 | 2018-07-20 | 陕西炬云信息科技有限公司 | Face identification method based on deep neural network |
CN108363969A (en) * | 2018-02-02 | 2018-08-03 | 南京邮电大学 | A kind of evaluation neonatal pain method based on mobile terminal |
CN108520213A (en) * | 2018-03-28 | 2018-09-11 | 五邑大学 | A kind of face beauty prediction technique based on multiple dimensioned depth |
CN108805866A (en) * | 2018-05-23 | 2018-11-13 | 兰州理工大学 | The image method for viewing points detecting known based on quaternion wavelet transformed depth visual sense |
CN109543526A (en) * | 2018-10-19 | 2019-03-29 | 谢飞 | True and false facial paralysis identifying system based on depth difference opposite sex feature |
CN109580629A (en) * | 2018-08-24 | 2019-04-05 | 绍兴文理学院 | Crankshaft thrust collar intelligent detecting method and system |
CN109615574A (en) * | 2018-12-13 | 2019-04-12 | 济南大学 | Chinese medicine recognition methods and system based on GPU and double scale image feature comparisons |
CN109635709A (en) * | 2018-12-06 | 2019-04-16 | 中山大学 | A kind of facial expression recognizing method based on the study of significant expression shape change region aids |
CN109657554A (en) * | 2018-11-21 | 2019-04-19 | 腾讯科技(深圳)有限公司 | A kind of image-recognizing method based on micro- expression, device and relevant device |
CN109815924A (en) * | 2019-01-29 | 2019-05-28 | 成都旷视金智科技有限公司 | Expression recognition method, apparatus and system |
CN109840459A (en) * | 2017-11-29 | 2019-06-04 | 深圳Tcl新技术有限公司 | A kind of facial expression classification method, apparatus and storage medium |
CN109919171A (en) * | 2018-12-21 | 2019-06-21 | 广东电网有限责任公司 | A kind of Infrared image recognition based on wavelet neural network |
CN109934173A (en) * | 2019-03-14 | 2019-06-25 | 腾讯科技(深圳)有限公司 | Expression recognition method, device and electronic equipment |
CN109949264A (en) * | 2017-12-20 | 2019-06-28 | 深圳先进技术研究院 | A kind of image quality evaluating method, equipment and storage equipment |
CN110119702A (en) * | 2019-04-30 | 2019-08-13 | 西安理工大学 | Facial expression recognizing method based on deep learning priori |
CN110174948A (en) * | 2019-05-27 | 2019-08-27 | 湖南师范大学 | A kind of language intelligence assistant learning system and method based on wavelet neural network |
CN110210380A (en) * | 2019-05-30 | 2019-09-06 | 盐城工学院 | The analysis method of personality is generated based on Expression Recognition and psychology test |
CN110298212A (en) * | 2018-03-21 | 2019-10-01 | 腾讯科技(深圳)有限公司 | Model training method, Emotion identification method, expression display methods and relevant device |
CN110333088A (en) * | 2019-04-19 | 2019-10-15 | 北京化工大学 | Agglomerate detection method, system, device and medium |
CN110399821A (en) * | 2019-07-17 | 2019-11-01 | 上海师范大学 | Customer satisfaction acquisition methods based on facial expression recognition |
CN110414394A (en) * | 2019-07-16 | 2019-11-05 | 公安部第一研究所 | A kind of face blocks face image method and the model for face occlusion detection |
CN110427892A (en) * | 2019-08-06 | 2019-11-08 | 河海大学常州校区 | CNN human face expression characteristic point positioning method based on the fusion of depth layer auto-correlation |
CN110717423A (en) * | 2019-09-26 | 2020-01-21 | 安徽建筑大学 | Training method and device for emotion recognition model of facial expression of old people |
CN110889332A (en) * | 2019-10-30 | 2020-03-17 | 中国科学院自动化研究所南京人工智能芯片创新研究院 | Lie detection method based on micro expression in interview |
CN111126364A (en) * | 2020-03-30 | 2020-05-08 | 北京建筑大学 | Expression recognition method based on packet convolutional neural network |
CN111144348A (en) * | 2019-12-30 | 2020-05-12 | 腾讯科技(深圳)有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN111178312A (en) * | 2020-01-02 | 2020-05-19 | 西北工业大学 | Face expression recognition method based on multi-task feature learning network |
CN111191704A (en) * | 2019-12-24 | 2020-05-22 | 天津师范大学 | Foundation cloud classification method based on task graph convolutional network |
CN111222624A (en) * | 2018-11-26 | 2020-06-02 | 深圳云天励飞技术有限公司 | Parallel computing method and device |
CN111291670A (en) * | 2020-01-23 | 2020-06-16 | 天津大学 | Small target facial expression recognition method based on attention mechanism and network integration |
CN111382795A (en) * | 2020-03-09 | 2020-07-07 | 交叉信息核心技术研究院(西安)有限公司 | Image classification processing method of neural network based on frequency domain wavelet base processing |
CN111401116A (en) * | 2019-08-13 | 2020-07-10 | 南京邮电大学 | Bimodal emotion recognition method based on enhanced convolution and space-time L STM network |
CN111401147A (en) * | 2020-02-26 | 2020-07-10 | 中国平安人寿保险股份有限公司 | Intelligent analysis method and device based on video behavior data and storage medium |
CN111465941A (en) * | 2017-11-21 | 2020-07-28 | 国立研究开发法人理化学研究所 | Sorting device, sorting method, program, and information recording medium |
CN111488764A (en) * | 2019-01-26 | 2020-08-04 | 天津大学青岛海洋技术研究院 | Face recognition algorithm for ToF image sensor |
CN111652171A (en) * | 2020-06-09 | 2020-09-11 | 电子科技大学 | Construction method of facial expression recognition model based on double branch network |
CN112132058A (en) * | 2020-09-25 | 2020-12-25 | 山东大学 | Head posture estimation method based on multi-level image feature refining learning, implementation system and storage medium thereof |
CN112380995A (en) * | 2020-11-16 | 2021-02-19 | 华南理工大学 | Face recognition method and system based on deep feature learning in sparse representation domain |
CN112699938A (en) * | 2020-12-30 | 2021-04-23 | 北京邮电大学 | Classification method and device based on graph convolution network model |
CN113095356A (en) * | 2021-03-03 | 2021-07-09 | 北京邮电大学 | Light weight type neural network and image processing method and device |
CN114445899A (en) * | 2022-01-30 | 2022-05-06 | 中国农业银行股份有限公司 | Expression recognition method, device, equipment and storage medium |
WO2022115996A1 (en) * | 2020-12-01 | 2022-06-09 | 华为技术有限公司 | Image processing method and device |
CN114743251A (en) * | 2022-05-23 | 2022-07-12 | 西北大学 | Game character facial expression recognition method based on shared integrated convolutional neural network |
WO2024039332A1 (en) * | 2022-08-15 | 2024-02-22 | Aselsan Elektroni̇k Sanayi̇ Ve Ti̇caret Anoni̇m Şi̇rketi̇ | Partial reconstruction method based on sub-band components of jpeg2000 compressed images |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101872424A (en) * | 2010-07-01 | 2010-10-27 | 重庆大学 | Facial expression recognizing method based on Gabor transform optimal channel blur fusion |
CN105139395A (en) * | 2015-08-19 | 2015-12-09 | 西安电子科技大学 | SAR image segmentation method based on wavelet pooling convolutional neural networks |
CN106056088A (en) * | 2016-06-03 | 2016-10-26 | 西安电子科技大学 | Single-sample face recognition method based on self-adaptive virtual sample generation criterion |
- 2017-06-14: CN application CN201710446076.0A filed; granted as CN107292256B, status Active
Non-Patent Citations (1)
Title |
---|
SHISHIR BASHYAL: "Recognition of facial expressions using Gabor wavelets and learning vector quantization", Engineering Applications of Artificial Intelligence * |
Cited By (83)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107729872A (en) * | 2017-11-02 | 2018-02-23 | 北方工业大学 | Facial expression recognition method and device based on deep learning |
CN111465941A (en) * | 2017-11-21 | 2020-07-28 | 国立研究开发法人理化学研究所 | Sorting device, sorting method, program, and information recording medium |
CN107977677A (en) * | 2017-11-27 | 2018-05-01 | 深圳市唯特视科技有限公司 | A kind of multi-tag pixel classifications method in the reconstruction applied to extensive city |
CN109840459A (en) * | 2017-11-29 | 2019-06-04 | 深圳Tcl新技术有限公司 | A kind of facial expression classification method, apparatus and storage medium |
CN108122001B (en) * | 2017-12-13 | 2022-03-11 | 北京小米移动软件有限公司 | Image recognition method and device |
CN108122001A (en) * | 2017-12-13 | 2018-06-05 | 北京小米移动软件有限公司 | Image-recognizing method and device |
CN108229341A (en) * | 2017-12-15 | 2018-06-29 | 北京市商汤科技开发有限公司 | Sorting technique and device, electronic equipment, computer storage media, program |
CN108090513A (en) * | 2017-12-19 | 2018-05-29 | 天津科技大学 | Multi-biological characteristic blending algorithm based on particle cluster algorithm and typical correlation fractal dimension |
CN109949264A (en) * | 2017-12-20 | 2019-06-28 | 深圳先进技术研究院 | A kind of image quality evaluating method, equipment and storage equipment |
CN108038466A (en) * | 2017-12-26 | 2018-05-15 | 河海大学 | Multichannel human eye closure recognition methods based on convolutional neural networks |
CN108038466B (en) * | 2017-12-26 | 2021-11-16 | 河海大学 | Multi-channel human eye closure recognition method based on convolutional neural network |
CN108171176A (en) * | 2017-12-29 | 2018-06-15 | 中车工业研究院有限公司 | A kind of subway driver's emotion identification method and device based on deep learning |
CN108171176B (en) * | 2017-12-29 | 2020-04-24 | 中车工业研究院有限公司 | Subway driver emotion identification method and device based on deep learning |
CN108062416B (en) * | 2018-01-04 | 2019-10-29 | 百度在线网络技术(北京)有限公司 | Method and apparatus for generating label on map |
CN108021910A (en) * | 2018-01-04 | 2018-05-11 | 青岛农业大学 | The analysis method of Pseudocarps based on spectrum recognition and deep learning |
CN108062416A (en) * | 2018-01-04 | 2018-05-22 | 百度在线网络技术(北京)有限公司 | For generating the method and apparatus of label on map |
CN108304788A (en) * | 2018-01-18 | 2018-07-20 | 陕西炬云信息科技有限公司 | Face identification method based on deep neural network |
CN108304788B (en) * | 2018-01-18 | 2022-06-14 | 陕西炬云信息科技有限公司 | Face recognition method based on deep neural network |
CN108363969A (en) * | 2018-02-02 | 2018-08-03 | 南京邮电大学 | A kind of evaluation neonatal pain method based on mobile terminal |
CN110298212A (en) * | 2018-03-21 | 2019-10-01 | 腾讯科技(深圳)有限公司 | Model training method, Emotion identification method, expression display methods and relevant device |
CN108520213B (en) * | 2018-03-28 | 2021-10-19 | 五邑大学 | Face beauty prediction method based on multi-scale depth |
CN108520213A (en) * | 2018-03-28 | 2018-09-11 | 五邑大学 | A kind of face beauty prediction technique based on multiple dimensioned depth |
CN108805866A (en) * | 2018-05-23 | 2018-11-13 | 兰州理工大学 | The image method for viewing points detecting known based on quaternion wavelet transformed depth visual sense |
CN108805866B (en) * | 2018-05-23 | 2022-03-25 | 兰州理工大学 | Image fixation point detection method based on quaternion wavelet transform depth vision perception |
CN109580629A (en) * | 2018-08-24 | 2019-04-05 | 绍兴文理学院 | Crankshaft thrust collar intelligent detecting method and system |
CN109543526A (en) * | 2018-10-19 | 2019-03-29 | 谢飞 | True and false facial paralysis identifying system based on depth difference opposite sex feature |
CN109657554A (en) * | 2018-11-21 | 2019-04-19 | 腾讯科技(深圳)有限公司 | A kind of image-recognizing method based on micro- expression, device and relevant device |
WO2020103700A1 (en) * | 2018-11-21 | 2020-05-28 | 腾讯科技(深圳)有限公司 | Image recognition method based on micro facial expressions, apparatus and related device |
CN111222624B (en) * | 2018-11-26 | 2022-04-29 | 深圳云天励飞技术股份有限公司 | Parallel computing method and device |
CN111222624A (en) * | 2018-11-26 | 2020-06-02 | 深圳云天励飞技术有限公司 | Parallel computing method and device |
CN109635709B (en) * | 2018-12-06 | 2022-09-23 | 中山大学 | Facial expression recognition method based on significant expression change area assisted learning |
CN109635709A (en) * | 2018-12-06 | 2019-04-16 | 中山大学 | A kind of facial expression recognizing method based on the study of significant expression shape change region aids |
CN109615574B (en) * | 2018-12-13 | 2022-09-23 | 济南大学 | Traditional Chinese medicine identification method and system based on GPU and dual-scale image feature comparison |
CN109615574A (en) * | 2018-12-13 | 2019-04-12 | 济南大学 | Chinese medicine recognition methods and system based on GPU and double scale image feature comparisons |
CN109919171A (en) * | 2018-12-21 | 2019-06-21 | 广东电网有限责任公司 | A kind of Infrared image recognition based on wavelet neural network |
CN111488764B (en) * | 2019-01-26 | 2024-04-30 | 天津大学青岛海洋技术研究院 | Face recognition method for ToF image sensor |
CN111488764A (en) * | 2019-01-26 | 2020-08-04 | 天津大学青岛海洋技术研究院 | Face recognition algorithm for ToF image sensor |
CN109815924A (en) * | 2019-01-29 | 2019-05-28 | 成都旷视金智科技有限公司 | Expression recognition method, apparatus and system |
WO2020182121A1 (en) * | 2019-03-14 | 2020-09-17 | 腾讯科技(深圳)有限公司 | Expression recognition method and related device |
CN109934173A (en) * | 2019-03-14 | 2019-06-25 | 腾讯科技(深圳)有限公司 | Expression recognition method, device and electronic equipment |
CN109934173B (en) * | 2019-03-14 | 2023-11-21 | 腾讯科技(深圳)有限公司 | Expression recognition method and device and electronic equipment |
CN110333088A (en) * | 2019-04-19 | 2019-10-15 | 北京化工大学 | Agglomerate detection method, system, device and medium |
CN110119702A (en) * | 2019-04-30 | 2019-08-13 | 西安理工大学 | Facial expression recognizing method based on deep learning priori |
CN110119702B (en) * | 2019-04-30 | 2022-12-06 | 西安理工大学 | Facial expression recognition method based on deep learning prior |
CN110174948A (en) * | 2019-05-27 | 2019-08-27 | 湖南师范大学 | A kind of language intelligence assistant learning system and method based on wavelet neural network |
CN110210380A (en) * | 2019-05-30 | 2019-09-06 | 盐城工学院 | The analysis method of personality is generated based on Expression Recognition and psychology test |
CN110210380B (en) * | 2019-05-30 | 2023-07-25 | 盐城工学院 | Analysis method for generating character based on expression recognition and psychological test |
CN110414394B (en) * | 2019-07-16 | 2022-12-13 | 公安部第一研究所 | Facial occlusion face image reconstruction method and model for face occlusion detection |
CN110414394A (en) * | 2019-07-16 | 2019-11-05 | 公安部第一研究所 | A kind of face blocks face image method and the model for face occlusion detection |
CN110399821A (en) * | 2019-07-17 | 2019-11-01 | 上海师范大学 | Customer satisfaction acquisition methods based on facial expression recognition |
CN110427892A (en) * | 2019-08-06 | 2019-11-08 | 河海大学常州校区 | CNN human face expression characteristic point positioning method based on the fusion of depth layer auto-correlation |
CN110427892B (en) * | 2019-08-06 | 2022-09-09 | 河海大学常州校区 | CNN face expression feature point positioning method based on depth-layer autocorrelation fusion |
CN111401116A (en) * | 2019-08-13 | 2020-07-10 | 南京邮电大学 | Bimodal emotion recognition method based on enhanced convolution and space-time L STM network |
CN111401116B (en) * | 2019-08-13 | 2022-08-26 | 南京邮电大学 | Bimodal emotion recognition method based on enhanced convolution and space-time LSTM network |
CN110717423A (en) * | 2019-09-26 | 2020-01-21 | 安徽建筑大学 | Training method and device for emotion recognition model of facial expression of old people |
CN110889332A (en) * | 2019-10-30 | 2020-03-17 | 中国科学院自动化研究所南京人工智能芯片创新研究院 | Lie detection method based on micro expression in interview |
CN111191704B (en) * | 2019-12-24 | 2023-05-02 | 天津师范大学 | Foundation cloud classification method based on task graph convolutional network |
CN111191704A (en) * | 2019-12-24 | 2020-05-22 | 天津师范大学 | Foundation cloud classification method based on task graph convolutional network |
CN111144348A (en) * | 2019-12-30 | 2020-05-12 | 腾讯科技(深圳)有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN111178312B (en) * | 2020-01-02 | 2023-03-24 | 西北工业大学 | Face expression recognition method based on multi-task feature learning network |
CN111178312A (en) * | 2020-01-02 | 2020-05-19 | 西北工业大学 | Face expression recognition method based on multi-task feature learning network |
CN111291670B (en) * | 2020-01-23 | 2023-04-07 | 天津大学 | Small target facial expression recognition method based on attention mechanism and network integration |
CN111291670A (en) * | 2020-01-23 | 2020-06-16 | 天津大学 | Small target facial expression recognition method based on attention mechanism and network integration |
CN111401147B (en) * | 2020-02-26 | 2024-06-04 | 中国平安人寿保险股份有限公司 | Intelligent analysis method, device and storage medium based on video behavior data |
CN111401147A (en) * | 2020-02-26 | 2020-07-10 | 中国平安人寿保险股份有限公司 | Intelligent analysis method and device based on video behavior data and storage medium |
CN111382795B (en) * | 2020-03-09 | 2023-05-05 | 交叉信息核心技术研究院(西安)有限公司 | Image classification processing method of neural network based on frequency domain wavelet base processing |
CN111382795A (en) * | 2020-03-09 | 2020-07-07 | 交叉信息核心技术研究院(西安)有限公司 | Image classification processing method of neural network based on frequency domain wavelet base processing |
CN111126364A (en) * | 2020-03-30 | 2020-05-08 | 北京建筑大学 | Expression recognition method based on packet convolutional neural network |
CN111652171A (en) * | 2020-06-09 | 2020-09-11 | 电子科技大学 | Construction method of facial expression recognition model based on double branch network |
CN111652171B (en) * | 2020-06-09 | 2022-08-05 | 电子科技大学 | Construction method of facial expression recognition model based on double branch network |
CN112132058A (en) * | 2020-09-25 | 2020-12-25 | 山东大学 | Head posture estimation method based on multi-level image feature refining learning, implementation system and storage medium thereof |
CN112132058B (en) * | 2020-09-25 | 2022-12-27 | 山东大学 | Head posture estimation method, implementation system thereof and storage medium |
CN112380995B (en) * | 2020-11-16 | 2023-09-12 | 华南理工大学 | Face recognition method and system based on deep feature learning in sparse representation domain |
CN112380995A (en) * | 2020-11-16 | 2021-02-19 | 华南理工大学 | Face recognition method and system based on deep feature learning in sparse representation domain |
WO2022115996A1 (en) * | 2020-12-01 | 2022-06-09 | 华为技术有限公司 | Image processing method and device |
CN112699938A (en) * | 2020-12-30 | 2021-04-23 | 北京邮电大学 | Classification method and device based on graph convolution network model |
CN112699938B (en) * | 2020-12-30 | 2024-01-05 | 北京邮电大学 | Classification method and device based on graph convolution network model |
CN113095356A (en) * | 2021-03-03 | 2021-07-09 | 北京邮电大学 | Light weight type neural network and image processing method and device |
CN113095356B (en) * | 2021-03-03 | 2023-10-31 | 北京邮电大学 | Light-weight neural network system and image processing method and device |
CN114445899A (en) * | 2022-01-30 | 2022-05-06 | 中国农业银行股份有限公司 | Expression recognition method, device, equipment and storage medium |
CN114743251B (en) * | 2022-05-23 | 2024-02-27 | 西北大学 | Drama character facial expression recognition method based on shared integrated convolutional neural network |
CN114743251A (en) * | 2022-05-23 | 2022-07-12 | 西北大学 | Game character facial expression recognition method based on shared integrated convolutional neural network |
WO2024039332A1 (en) * | 2022-08-15 | 2024-02-22 | Aselsan Elektroni̇k Sanayi̇ Ve Ti̇caret Anoni̇m Şi̇rketi̇ | Partial reconstruction method based on sub-band components of jpeg2000 compressed images |
Also Published As
Publication number | Publication date |
---|---|
CN107292256B (en) | 2019-12-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107292256A (en) | Auxiliary task-based deep convolutional wavelet neural network expression recognition method | |
CN106778821B (en) | Polarimetric SAR image classification method based on SLIC and improved CNN | |
CN104182772B (en) | Gesture recognition method based on deep learning | |
CN105184309B (en) | Polarimetric SAR image classification based on CNN and SVM | |
CN104217214B (en) | RGB-D human activity recognition method based on configurable convolutional neural networks | |
CN106023065A (en) | Tensor-based hyperspectral image spectral-spatial dimensionality reduction method using a deep convolutional neural network | |
CN104537393B (en) | Traffic sign recognition method based on multi-resolution convolutional neural networks | |
Sinha et al. | Optimization of convolutional neural network parameters for image classification
Yan et al. | Multi-attributes gait identification by convolutional neural networks
CN106326899A (en) | Tobacco leaf grading method based on hyperspectral images and a deep learning algorithm | |
CN107229904A (en) | Object detection and recognition method based on deep learning | |
CN109902806A (en) | Method for determining object bounding boxes in noisy images based on convolutional neural networks | |
CN108062543A (en) | Face recognition method and device | |
CN108764471A (en) | Neural network cross-layer pruning method based on feature redundancy analysis | |
CN107506740A (en) | Human behavior recognition method based on a 3D convolutional neural network and a transfer learning model | |
CN107871136A (en) | Image recognition method using convolutional neural networks with sparse random pooling | |
CN106570477A (en) | Vehicle model recognition model construction and recognition method based on deep learning | |
CN107808132A (en) | Scene image classification method fusing a topic model | |
CN108734719A (en) | Automatic foreground-background segmentation method for lepidopteran insect images based on fully convolutional neural networks | |
CN107145889A (en) | Target recognition method based on dual CNN networks with RoI pooling | |
CN107292250A (en) | Gait recognition method based on a deep neural network | |
CN109785344A (en) | Remote sensing image segmentation method using a dual-channel residual network based on feature recalibration | |
CN105335716A (en) | Pedestrian detection method based on improved UDN joint feature extraction | |
CN107506786A (en) | Attribute classification recognition method based on deep learning | |
CN109344699A (en) | Winter jujube disease recognition method based on depthwise separable convolutional neural networks | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||