CN107451537A - Face recognition method based on deep-learning multilayer non-negative matrix factorization - Google Patents


Info

Publication number: CN107451537A (application CN201710568578.0A; granted as CN107451537B)
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 同鸣, 李明阳, 陈逸然, 席圣男
Applicant and current assignee: Xidian University
Legal status: Granted; active

Classifications

    • G06V40/172 Human faces: classification, e.g. identification
    • G06V40/168 Human faces: feature extraction; face representation
    • G06F18/214 Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/24147 Classification techniques: distances to closest patterns, e.g. nearest neighbor classification
    • G06N3/045 Neural networks: combinations of networks
    • G06N3/08 Neural networks: learning methods

Abstract

The invention discloses a face recognition method based on deep-learning multilayer non-negative matrix factorization, which mainly addresses the low recognition rate of existing face recognition techniques under complex appearance variation. The technical scheme is: 1. extract the feature data of each channel of the training samples and test samples with VGG-Face; 2. for the feature data of each channel of the training samples, repeat L times the feature-extraction procedure of normalization, nonlinear transformation and matrix factorization, obtaining low-rank robust features; 3. construct K nearest neighbor classifiers; 4. project the feature data of each channel of the test samples, obtaining projection coefficient vectors; 5. input the projection coefficient vectors into the K nearest neighbor classifiers for classification; 6. combine the classification results of the K nearest neighbor classifiers to obtain the recognition result of each test sample. The invention improves the face recognition rate under complex appearance variation and can be applied to identity authentication and information security.

Description

Face recognition method based on deep-learning multilayer non-negative matrix factorization
Technical field
The invention belongs to the technical field of image processing, and in particular to face image recognition methods. It can be applied to identity authentication and information security.
Background technology
With the continuous development of human society, face recognition has found wide application in fields such as security, finance and e-government, and improving face recognition performance helps to broaden that application. Current research on face recognition focuses on extracting features that are effective, robust and discriminative, and on designing classifiers with better classification ability. Selecting more robust and discriminative features and designing classifiers with good classification ability are the keys to improving the robustness of face recognition.
Non-negative matrix factorization (NMF) is a feature-extraction method that factorizes a matrix under non-negativity constraints. It has good data-representation ability, can greatly reduce the dimensionality of the feature data, and its decomposition accords with the intuition of human visual perception, so the result is interpretable and has a clear physical meaning. Basic NMF directly factorizes the original data matrix into a basis matrix and a coefficient matrix, both required to be non-negative, so only additive combinations occur. NMF can therefore be regarded as a parts-based representation model that reveals the local structure of the data; in some cases, however, NMF also yields global features, which limits classification performance.
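For illustration, the basic NMF described above can be sketched with the classical Lee-Seung multiplicative updates. This is a minimal sketch, not the patent's method; all names and parameter values are ours:

```python
import numpy as np

def nmf(X, r, iters=200, eps=1e-10):
    """Basic NMF: X (m x n) is approximated as W @ H with W, H non-negative."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(iters):
        # Multiplicative updates keep all entries non-negative by construction
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

X = np.abs(np.random.default_rng(1).random((20, 30)))  # non-negative data matrix
W, H = nmf(X, r=5)
print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))   # relative reconstruction error
```

The additive-only combination of parts noted in the text follows from the non-negativity of `W` and `H`: no term in `W @ H` can cancel another.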
Deep learning is a recent research direction for feature representation in machine learning, and in recent years it has made breakthrough progress in applications such as speech recognition and computer vision. Deep learning forms more abstract high-level representations, or features, by combining low-level features. A deep model contains many nonlinear transformation layers and has strong generalization ability. In practice, however, appearance variation caused by factors such as head pose, illumination and occlusion degrades the performance of deep learning, and so far there is no good solution.
Summary of the invention
In view of the above shortcomings of the prior art, the object of the invention is to provide a face recognition method based on deep-learning multilayer non-negative matrix factorization, so as to obtain deep low-rank robust features with more discriminative power and improve the face recognition rate under complex appearance variation.
The key to realizing the invention is to introduce, on top of deep learning, a new multilayer non-negative matrix factorization that improves the existing deep-learning method. Specifically, the invention applies non-negative matrix factorization repeatedly to the sample features obtained by deep learning, thereby obtaining a more discriminative low-rank feature representation and improving the face recognition rate. The steps include the following:
(1) Input each channel of the training samples into the VGG-Face deep convolutional neural network to obtain the feature data X(k) of each channel, where k = 1, 2, ..., K and K is the number of channels per sample;
(2) Apply to the feature data X(k) obtained in step (1) the feature-extraction procedure of normalization, nonlinear transformation and matrix factorization, obtaining the coefficient matrix H(k);
(3) Repeat the feature-extraction procedure of step (2) L times, obtaining the low-rank robust features h_j(k), where j = 1, 2, ..., n and n is the total number of training samples;
(4) Construct K nearest neighbor classifiers from the low-rank robust features h_j(k) obtained in step (3);
(5) Input each channel of the test samples into the VGG-Face deep convolutional neural network to obtain the feature data Y(k) of each channel;
(6) Project the feature data Y(k) obtained in step (5), obtaining the projection coefficient vectors ĥ_i(k);
(7) Input the projection coefficient vectors ĥ_i(k) obtained in step (6) into the K nearest neighbor classifiers, obtaining the classification result of each channel of each test sample, where i = 1, 2, ..., e and e is the total number of test samples;
(8) Combine the per-channel classification results of step (7) to obtain the classification result of each test sample.
Compared with the prior art, the invention has the following advantages:
1) By combining multilayer non-negative matrix factorization with deep learning, the invention obtains a more discriminative feature representation;
2) By combining the classification results of the different channels, the invention further improves the face recognition rate under complex appearance variation.
Brief description of the drawings
Fig. 1 is the flow chart of the invention.
Embodiments
With reference to Fig. 1, the face recognition steps of the invention based on deep-learning multilayer non-negative matrix factorization are as follows:
Step 1: obtain the feature data X(k) of each channel of the training samples.
(1a) Take a face data set V_train as the training set; the total number of training samples in the set is n and the number of classes is c. Divide each training sample into K regions, each region serving as one channel, so that every training sample has K channels;
(1b) Using the training set, fine-tune the parameters of the VGG-Face deep convolutional neural network under a Linux operating system with the Caffe deep-learning framework;
(1c) Input each channel of each training sample into the VGG-Face deep convolutional neural network, obtaining the feature data X(k) of each channel, where k = 1, 2, ..., K and K is the number of channels per sample.
Step 2: obtain the coefficient matrix H(k) from the feature data X(k).
Apply to the feature data X(k) the feature-extraction procedure of normalization, nonlinear transformation and matrix factorization, obtaining the coefficient matrix H(k):
(2a) Normalize the feature data X(k) with the L2 norm;
(2b) Apply the sigmoid function to the normalized result of step (2a) to perform a nonlinear transformation, obtaining the transformed result B(k);
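Steps (2a) and (2b) can be sketched as follows. This is a minimal sketch; that the L2 normalization is applied per sample column is our assumption:

```python
import numpy as np

def preprocess(X, eps=1e-12):
    """L2-normalize each column, then apply an element-wise sigmoid."""
    Xn = X / (np.linalg.norm(X, axis=0, keepdims=True) + eps)  # unit L2 norm per column
    return 1.0 / (1.0 + np.exp(-Xn))  # sigmoid maps into (0, 1), so B(k) is non-negative

X = np.random.default_rng(0).standard_normal((128, 10))  # toy deep features, 10 samples
B = preprocess(X)
print(B.min() > 0 and B.max() < 1)
```

The sigmoid guarantees a strictly positive result, which is what makes the subsequent non-negative factorization applicable.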
(2c) Factorize the transformed result B(k) of step (2b) with soft-constraint non-negative matrix factorization, obtaining B(k) ≈ Z(k)A(k)F(k), where B(k) is an m × n matrix, Z(k) is the m × φ basis matrix, A(k) is the φ × c auxiliary matrix, F(k) is the c × n predicted label matrix, m is the original feature dimension, φ is the factorization dimension, c is the number of classes and n is the total number of training samples;
(2c1) Randomly initialize the basis matrix Z^(1)(k), the auxiliary matrix A^(1)(k) and the predicted label matrix F^(1)(k) as the result of iteration 1, where every element Z^(1)_{p,q}(k) of the basis matrix, every element A^(1)_{α,β}(k) of the auxiliary matrix and every element F^(1)_{γ,η}(k) of the predicted label matrix is non-negative, with p = 1, 2, ..., m, q = 1, 2, ..., φ, α = 1, 2, ..., φ, β = 1, 2, ..., c, γ = 1, 2, ..., c and η = 1, 2, ..., n;
(2c2) Update the elements Z_{p,q}(k) of the basis matrix according to the following formula:

$$Z_{p,q}^{(t)\prime}(k)=Z_{p,q}^{(t-1)}(k)\,\frac{\left(B(k)F^{T}(k)A^{T}(k)\right)_{p,q}^{(t-1)}}{\left(Z(k)A(k)F(k)F^{T}(k)A^{T}(k)\right)_{p,q}^{(t-1)}},$$

where t is the iteration number, t = 2, ..., iter, iter is the maximum number of iterations, T denotes matrix transposition, and Z^{(t)′}_{p,q}(k) is the element in row p, column q of the un-normalized basis matrix Z^{(t)′}(k) obtained after iteration t;
(2c3) Normalize the basis matrix Z^{(t)′}(k) obtained in step (2c2), obtaining the basis matrix Z^{(t)}(k) of iteration t;
(2c4) Update the elements A_{α,β}(k) of the auxiliary matrix according to the following formula:

$$A_{\alpha,\beta}^{(t)}(k)=A_{\alpha,\beta}^{(t-1)}(k)\,\frac{\left(Z^{T}(k)B(k)F^{T}(k)\right)_{\alpha,\beta}^{(t-1)}}{\left(Z^{T}(k)Z(k)A(k)F(k)F^{T}(k)\right)_{\alpha,\beta}^{(t-1)}},$$

where A^{(t)}_{α,β}(k) is the element in row α, column β of the auxiliary matrix A^{(t)}(k) obtained after iteration t;
(2c5) Update the elements F_{γ,η}(k) of the predicted label matrix according to the following formula:

$$F_{\gamma,\eta}^{(t)}(k)=F_{\gamma,\eta}^{(t-1)}(k)\,\frac{\left(A^{T}(k)Z^{T}(k)B(k)+\lambda C(k)\right)_{\gamma,\eta}^{(t-1)}}{\left(A^{T}(k)Z^{T}(k)Z(k)A(k)F(k)+\lambda F(k)\right)_{\gamma,\eta}^{(t-1)}},$$

where F^{(t)}_{γ,η}(k) is the element in row γ, column η of the predicted label matrix F^{(t)}(k) obtained after iteration t, λ is the regularization coefficient, and C_{γ,η}(k) is the element in row γ, column η of the pre-defined local label matrix C(k);
(2c6) Check whether the iteration number t has reached the maximum iter: if so, stop iterating and take the basis matrix Z^(iter)(k), the auxiliary matrix A^(iter)(k) and the predicted label matrix F^(iter)(k) obtained at the iter-th iteration as the final basis matrix Z(k), auxiliary matrix A(k) and predicted label matrix F(k); otherwise, return to step (2c2);
(2d) From the auxiliary matrix A(k) and predicted label matrix F(k) obtained by the soft-constraint non-negative matrix factorization of step (2c), compute the coefficient matrix H(k) = A(k)F(k).
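The update loop of steps (2c1)-(2c6) and the coefficient matrix of step (2d) can be sketched as below. The Z and A updates follow the formulas above; the exact F update and the choice of column-wise normalization for Z are assumptions, since the original formula images are not fully reproduced in the text:

```python
import numpy as np

def soft_constrained_nmf(B, C, phi, lam=0.1, iters=100, eps=1e-10):
    """Soft-constraint NMF: B (m x n) is approximated as Z @ A @ F,
    with F pulled toward the local label matrix C (c x n) by weight lam."""
    m, n = B.shape
    c = C.shape[0]
    rng = np.random.default_rng(0)
    Z = rng.random((m, phi))
    A = rng.random((phi, c))
    F = rng.random((c, n))
    for _ in range(iters):
        # Z update (2c2), then column-wise normalization (2c3) - an assumption
        Z *= (B @ F.T @ A.T) / (Z @ A @ F @ F.T @ A.T + eps)
        Z /= np.linalg.norm(Z, axis=0, keepdims=True) + eps
        # A update (2c4)
        A *= (Z.T @ B @ F.T) / (Z.T @ Z @ A @ F @ F.T + eps)
        # F update (2c5): standard label-regularized form, our reconstruction
        F *= (A.T @ Z.T @ B + lam * C) / (A.T @ Z.T @ Z @ A @ F + lam * F + eps)
    return Z, A, F

rng = np.random.default_rng(1)
B = rng.random((64, 40))                       # toy transformed features
C = np.repeat(np.eye(4), 10, axis=1)           # local label matrix: 4 classes x 40 samples
Z, A, F = soft_constrained_nmf(B, C, phi=8)
H = A @ F                                      # coefficient matrix H = A F, step (2d)
print(H.shape)
```

Since all three factors stay non-negative under the multiplicative updates, H = AF is non-negative as well.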
Step 3: obtain the low-rank robust features h_j(k) of the training samples.
Repeat the feature-extraction procedure of step 2 to obtain the low-rank robust features of the feature data X(k) of each channel of the training samples:
(3a) Process the feature data X(k) of each channel of the training samples according to step 2, obtaining the layer-1 basis matrix Z_1(k) and layer-1 coefficient matrix H_1(k);
(3b) Process the layer-1 coefficient matrix H_1(k) of step (3a) according to step 2, obtaining the layer-2 basis matrix Z_2(k) and layer-2 coefficient matrix H_2(k);
(3c) Continue repeating the same procedure as steps (3a) and (3b): from the layer-(l-1) coefficient matrix H_{l-1}(k), obtain the layer-l basis matrix Z_l(k) and layer-l coefficient matrix H_l(k), until l = L, obtaining the layer-L basis matrix Z_L(k) and layer-L coefficient matrix H_L(k), where l = 2, ..., L and L is the number of layers of the multilayer non-negative matrix factorization;
(3d) From the layer-L coefficient matrix H_L(k) of step (3c), obtain the low-rank robust features h_j(k) of each channel of the training samples, where j = 1, 2, ..., n.
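The layer-by-layer recursion of step 3 can be sketched as follows, substituting a plain NMF for the soft-constraint variant (a simplification we make for brevity; the structure of the loop is the point):

```python
import numpy as np

def nmf(X, r, iters=150, eps=1e-10):
    """Plain multiplicative-update NMF used here as a stand-in factorizer."""
    rng = np.random.default_rng(0)
    W = rng.random((X.shape[0], r))
    H = rng.random((r, X.shape[1]))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

def multilayer_nmf(B, ranks):
    """Layer l factorizes the coefficient matrix of layer l-1:
    B ~ Z_1 H_1, H_1 ~ Z_2 H_2, ..., H_{L-1} ~ Z_L H_L."""
    H = B
    bases = []
    for r in ranks:            # ranks = [phi_1, ..., phi_L], one per layer
        Z, H = nmf(H, r)
        bases.append(Z)
    return bases, H            # columns of H are the low-rank robust features h_j

B = np.random.default_rng(1).random((100, 30))    # toy channel features, 30 samples
bases, H_L = multilayer_nmf(B, ranks=[32, 16, 8])
print([Z.shape for Z in bases], H_L.shape)
```

The stored basis matrices `bases` are exactly what step 6 later needs to project test features through the same layers.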
Step 4: construct K nearest neighbor classifiers from the low-rank robust features h_j(k) of step 3.
(4a) From the result of step 3, collect the low-rank robust features h_j(k) of the k-th channel of every training sample into one feature set;
(4b) Build one nearest neighbor classifier from the feature set of step (4a);
(4c) Repeat steps (4a) and (4b) for each channel, obtaining K nearest neighbor classifiers.
Step 5: obtain the feature data Y(k) of each channel of the test samples.
(5a) Take a face data set V_test with the same attributes as the training set as the test set; the total number of test samples in the set is e and the number of classes is c. Divide each test sample into K channels according to step (1a);
(5b) Configure the parameters of the VGG-Face deep convolutional neural network according to step (1b);
(5c) Input each channel of the test samples into the VGG-Face deep convolutional neural network, obtaining the feature data Y(k) of each channel.
Step 6: project the feature data Y(k) of each channel of the test samples obtained in step 5, outputting the projection coefficient vectors ĥ_i(k).
(6a) Apply to the feature data Y(k) of the test samples the projection procedure of normalization, nonlinear transformation and projective transformation, obtaining the layer-1 projection matrix Ĥ_1(k):
(6a1) Normalize the feature data Y(k) of the test samples with the L2 norm;
(6a2) Apply the sigmoid function to the normalized result of step (6a1) to perform a nonlinear transformation, obtaining the transformed result f(Y(k)), where f(·) denotes the nonlinear transformation by the sigmoid function;
(6a3) Project the transformed result f(Y(k)) of step (6a2) onto the layer-1 basis matrix Z_1(k) obtained in step (3a), obtaining the layer-1 projection matrix Ĥ_1(k) = Z_1^†(k) f(Y(k)), where † denotes the generalized inverse;
(6b) Process the layer-1 projection matrix Ĥ_1(k) of step (6a) with the layer-2 basis matrix Z_2(k) in the same way, obtaining the layer-2 projection matrix Ĥ_2(k);
(6c) Continue repeating the same procedure as steps (6a) and (6b): from the layer-(l-1) projection matrix Ĥ_{l-1}(k) and the layer-l basis matrix Z_l(k), obtain the layer-l projection matrix Ĥ_l(k) = Z_l^†(k) Ĥ_{l-1}(k), until l = L, obtaining the layer-L projection matrix Ĥ_L(k), where l = 2, ..., L;
(6d) From the layer-L projection matrix Ĥ_L(k) of step (6c), obtain the projection coefficient vector ĥ_i(k) of each test sample, where i = 1, 2, ..., e.
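Step 6 can be sketched as repeated pseudoinverse projections through the trained bases. The preprocessing mirrors steps (6a1)-(6a2); all function names are ours, and the per-column normalization is an assumption:

```python
import numpy as np

def project_test_features(Y, bases, eps=1e-12):
    """Apply H_hat_1 = pinv(Z_1) f(Y), then H_hat_l = pinv(Z_l) H_hat_{l-1}."""
    # L2-normalize columns and apply sigmoid, as in steps (6a1)-(6a2)
    H_hat = 1.0 / (1.0 + np.exp(-Y / (np.linalg.norm(Y, axis=0, keepdims=True) + eps)))
    for Z in bases:                       # generalized inverse at every layer
        H_hat = np.linalg.pinv(Z) @ H_hat
    return H_hat                          # column i is the projection coefficient vector

rng = np.random.default_rng(0)
bases = [rng.random((100, 32)), rng.random((32, 16)), rng.random((16, 8))]  # toy Z_1..Z_3
Y = rng.standard_normal((100, 5))        # features of 5 test samples
H_hat = project_test_features(Y, bases)
print(H_hat.shape)
```

Using the pseudoinverse rather than re-running the factorization means a test sample is expressed in the training bases without any iterative optimization at test time.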
Step 7: input the projection coefficient vectors ĥ_i(k) of step 6 into the K nearest neighbor classifiers, obtaining the classification result of each channel of each test sample.
(7a) Compute the low-dimensional Euclidean distance d(h_j(k), ĥ_i(k)) = ||ĥ_i(k) − h_j(k)||_2 between the low-rank robust features h_j(k) of the training samples and the projection coefficient vector ĥ_i(k) of a test sample, obtaining the distance set {d(h_1(k), ĥ_i(k)), ..., d(h_n(k), ĥ_i(k))}, where j = 1, 2, ..., n, i ∈ {1, 2, ..., e} and ||·||_2 denotes the 2-norm;
(7b) From the distance set of step (7a), take the class of the ξ-th training sample corresponding to the minimum distance as the classification result of the i-th test sample on the k-th nearest neighbor classifier, where ξ ∈ {1, 2, ..., n};
(7c) Classify the K channels of each test sample according to steps (7a) and (7b), obtaining the classification results of each test sample on the K nearest neighbor classifiers.
Step 8: combine the per-channel classification results of step 7 to obtain the final classification result of each test sample.
(8a) From the classification results of each test sample on the K nearest neighbor classifiers of step 7, count the number of correctly classified test samples on each nearest neighbor classifier and compute the recognition rate of each classifier:

$$o_k=\frac{CN_k}{e},$$

where CN_k is the number of correctly classified test samples on the k-th nearest neighbor classifier and o_k is the recognition rate of the k-th nearest neighbor classifier;
(8b) From the recognition rates of the K nearest neighbor classifiers of step (8a), compute the linear weight coefficient of each classifier:

$$\alpha_k=\frac{o_k}{\sum_{k=1}^{K}o_k};$$
(8c) With the linear weight coefficients α_k of step (8b), compute the weighted distance between the K channel projection coefficient vectors ĥ_i(k) of a test sample and the K channel low-rank robust features h_j(k) of the training samples,

$$d_{ji}=\sum_{k=1}^{K}\alpha_k\,d\!\left(h_j(k),\hat h_i(k)\right),$$

obtaining the weighted-distance set {d_{1i}, d_{2i}, ..., d_{ji}, ..., d_{ni}};
(8d) From the weighted-distance set {d_{1i}, d_{2i}, ..., d_{ji}, ..., d_{ni}} of step (8c), take the class of the ω-th training sample corresponding to the minimum d_{ωi} as the classification result of the test sample, where ω ∈ {1, 2, ..., n}.
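Steps (8b)-(8d) can be sketched as follows. The additive combination of the per-channel distances into d_ji is our reading of the weighted distance, and all names are ours:

```python
import numpy as np

def fuse_channels(train_feats, test_feats, accuracies):
    """Weight each channel's Euclidean distances by its classifier accuracy
    (alpha_k = o_k / sum(o_k)) and return, for each test sample, the index
    of the nearest training sample under the fused distance."""
    alpha = np.asarray(accuracies) / np.sum(accuracies)   # linear weight coefficients
    n = train_feats[0].shape[1]
    e = test_feats[0].shape[1]
    d = np.zeros((n, e))
    for k in range(len(train_feats)):
        # pairwise distances between all training and test feature columns
        diff = train_feats[k][:, :, None] - test_feats[k][:, None, :]
        d += alpha[k] * np.linalg.norm(diff, axis=0)
    return d.argmin(axis=0)

rng = np.random.default_rng(0)
train = [rng.random((8, 20)) for _ in range(3)]   # 3 channels, 20 training samples
# two test samples: noisy copies of training samples 2 and 7 in every channel
test = [t[:, [2, 7]] + 0.01 * rng.standard_normal((8, 2)) for t in train]
print(fuse_channels(train, test, accuracies=[0.9, 0.8, 0.85]))
```

Channels whose individual classifiers perform better thus pull the fused decision harder, which is the stated reason the combination improves the recognition rate.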
The above description is only an example of the invention and does not constitute any limitation of it. It is evident that, after understanding the content and principles of the invention, professionals in the field may make various modifications and variations in form and detail without departing from its principles and structure, but such modifications and variations based on the inventive concept still fall within the scope of the claims of the invention.

Claims (7)

1. A face recognition method based on deep-learning multilayer non-negative matrix factorization, comprising:
(1) inputting each channel of the training samples into the VGG-Face deep convolutional neural network to obtain the feature data X(k) of each channel, where k = 1, 2, ..., K and K is the number of channels per sample;
(2) applying to the feature data X(k) obtained in step (1) the feature-extraction procedure of normalization, nonlinear transformation and matrix factorization, obtaining the coefficient matrix H(k);
(3) repeating the feature-extraction procedure of step (2) L times, obtaining the low-rank robust features h_j(k), where j = 1, 2, ..., n and n is the total number of training samples;
(4) constructing K nearest neighbor classifiers from the low-rank robust features h_j(k) obtained in step (3);
(5) inputting each channel of the test samples into the VGG-Face deep convolutional neural network to obtain the feature data Y(k) of each channel;
(6) projecting the feature data Y(k) obtained in step (5), obtaining the projection coefficient vectors ĥ_i(k);
(7) inputting the projection coefficient vectors ĥ_i(k) obtained in step (6) into the K nearest neighbor classifiers, obtaining the classification result of each channel of each test sample, where i = 1, 2, ..., e and e is the total number of test samples;
(8) combining the per-channel classification results of step (7) to obtain the classification result of each test sample.
2. The method according to claim 1, wherein step (2) is realized as follows:
(2a) normalizing the feature data X(k) with the L2 norm;
(2b) applying the sigmoid function to the normalized result of step (2a) to perform a nonlinear transformation, obtaining the transformed result B(k);
(2c) factorizing the transformed result B(k) of step (2b) with soft-constraint non-negative matrix factorization, obtaining B(k) ≈ Z(k)A(k)F(k), where B(k) is an m × n matrix, Z(k) is the m × φ basis matrix, A(k) is the φ × c auxiliary matrix, F(k) is the c × n predicted label matrix, m is the original feature dimension, φ is the factorization dimension, c is the number of classes and n is the total number of training samples;
(2d) computing, from the auxiliary matrix A(k) and predicted label matrix F(k) obtained by the soft-constraint non-negative matrix factorization of step (2c), the coefficient matrix H(k) = A(k)F(k).
3. The method according to claim 2, wherein the matrix factorization of the transformed result B(k) of step (2b) with soft-constraint non-negative matrix factorization in step (2c) is carried out as follows:
(2c1) randomly initializing the basis matrix Z^(1)(k), the auxiliary matrix A^(1)(k) and the predicted label matrix F^(1)(k) as the result of iteration 1, where every element Z^(1)_{p,q}(k) of the basis matrix, every element A^(1)_{α,β}(k) of the auxiliary matrix and every element F^(1)_{γ,η}(k) of the predicted label matrix is non-negative, with p = 1, 2, ..., m, q = 1, 2, ..., φ, α = 1, 2, ..., φ, β = 1, 2, ..., c, γ = 1, 2, ..., c and η = 1, 2, ..., n;
(2c2) updating the elements Z_{p,q}(k) of the basis matrix according to the following formula:

$$Z_{p,q}^{(t)\prime}(k)=Z_{p,q}^{(t-1)}(k)\,\frac{\left(B(k)F^{T}(k)A^{T}(k)\right)_{p,q}^{(t-1)}}{\left(Z(k)A(k)F(k)F^{T}(k)A^{T}(k)\right)_{p,q}^{(t-1)}},$$

where t is the iteration number, t = 2, ..., iter, iter is the maximum number of iterations, T denotes matrix transposition, and Z^{(t)′}_{p,q}(k) is the element in row p, column q of the un-normalized basis matrix Z^{(t)′}(k) obtained after iteration t;
(2c3) normalizing the basis matrix Z^{(t)′}(k) obtained in step (2c2), obtaining the basis matrix Z^{(t)}(k) of iteration t;
(2c4) updating the elements A_{α,β}(k) of the auxiliary matrix according to the following formula:

$$A_{\alpha,\beta}^{(t)}(k)=A_{\alpha,\beta}^{(t-1)}(k)\,\frac{\left(Z^{T}(k)B(k)F^{T}(k)\right)_{\alpha,\beta}^{(t-1)}}{\left(Z^{T}(k)Z(k)A(k)F(k)F^{T}(k)\right)_{\alpha,\beta}^{(t-1)}},$$

where A^{(t)}_{α,β}(k) is the element in row α, column β of the auxiliary matrix A^{(t)}(k) obtained after iteration t;
(2c5) updating the elements F_{γ,η}(k) of the predicted label matrix according to the following formula:

$$F_{\gamma,\eta}^{(t)}(k)=F_{\gamma,\eta}^{(t-1)}(k)\,\frac{\left(A^{T}(k)Z^{T}(k)B(k)+\lambda C(k)\right)_{\gamma,\eta}^{(t-1)}}{\left(A^{T}(k)Z^{T}(k)Z(k)A(k)F(k)+\lambda F(k)\right)_{\gamma,\eta}^{(t-1)}},$$

where F^{(t)}_{γ,η}(k) is the element in row γ, column η of the predicted label matrix F^{(t)}(k) obtained after iteration t, λ is the regularization coefficient, and C_{γ,η}(k) is the element in row γ, column η of the pre-defined local label matrix C(k);
(2c6) checking whether the iteration number t has reached the maximum iter: if so, stopping the iteration and taking the basis matrix Z^(iter)(k), the auxiliary matrix A^(iter)(k) and the predicted label matrix F^(iter)(k) obtained at the iter-th iteration as the final basis matrix Z(k), auxiliary matrix A(k) and predicted label matrix F(k); otherwise, returning to step (2c2).
4. The method according to claim 1, wherein step (3) is realized as follows:
(3a) processing the feature data X(k) of each channel of the training samples according to step (2), obtaining the layer-1 basis matrix Z_1(k) and layer-1 coefficient matrix H_1(k), where k = 1, 2, ..., K;
(3b) processing the layer-1 coefficient matrix H_1(k) of step (3a) according to step (2), obtaining the layer-2 basis matrix Z_2(k) and layer-2 coefficient matrix H_2(k);
(3c) continuing to repeat the same procedure as steps (3a) and (3b): obtaining from the layer-(l-1) coefficient matrix H_{l-1}(k) the layer-l basis matrix Z_l(k) and layer-l coefficient matrix H_l(k), until l = L, obtaining the layer-L basis matrix Z_L(k) and layer-L coefficient matrix H_L(k), where l = 2, ..., L and L is the number of layers of the multilayer non-negative matrix factorization;
(3d) obtaining from the layer-L coefficient matrix H_L(k) of step (3c) the low-rank robust features h_j(k) of each channel of the training samples, where j = 1, 2, ..., n.
5. The method according to claim 1, wherein step (6) is realized as follows:
(6a) applying to the feature data Y(k) of the test samples the projection procedure of normalization, nonlinear transformation and projective transformation, obtaining the layer-1 projection matrix Ĥ_1(k), where k ∈ {1, 2, ..., K} and K is the number of channels per sample:
(6a1) normalizing the feature data Y(k) of the test samples with the L2 norm;
(6a2) applying the sigmoid function to the normalized result of step (6a1) to perform a nonlinear transformation, obtaining the transformed result f(Y(k)), where f(·) denotes the nonlinear transformation by the sigmoid function;
(6a3) projecting the transformed result f(Y(k)) of step (6a2) onto the layer-1 basis matrix Z_1(k) obtained in step (3a), obtaining the layer-1 projection matrix Ĥ_1(k) = Z_1^†(k) f(Y(k)), where † denotes the generalized inverse;
(6b) processing the layer-1 projection matrix Ĥ_1(k) of step (6a) with the layer-2 basis matrix Z_2(k) in the same way, obtaining the layer-2 projection matrix Ĥ_2(k);
(6c) continuing to repeat the same procedure as steps (6a) and (6b): obtaining from the layer-(l-1) projection matrix Ĥ_{l-1}(k) and the layer-l basis matrix Z_l(k) the layer-l projection matrix Ĥ_l(k) = Z_l^†(k) Ĥ_{l-1}(k), until l = L, obtaining the layer-L projection matrix Ĥ_L(k), where l = 2, ..., L and L is the number of layers of the multilayer non-negative matrix factorization;
(6d) obtaining from the layer-L projection matrix Ĥ_L(k) of step (6c) the projection coefficient vector ĥ_i(k) of each test sample, where i = 1, 2, ..., e.
6. The method according to claim 1, wherein step (7) is carried out as follows:
(7a) Compute the low-dimensional Euclidean distance d_ji(k) = ||ĥ_i(k) - h_j(k)||_2 between the low-rank robust feature h_j(k) of each training sample and the projection coefficient vector ĥ_i(k) of each test sample, obtaining the distance set {d_1i(k), d_2i(k), ..., d_ni(k)}, where j = 1, 2, ..., n, k ∈ {1, 2, ..., K}, i ∈ {1, 2, ..., e}, and ||·||_2 denotes the 2-norm;
(7b) From the distance set obtained in step (7a), take the class of the ξ-th training sample corresponding to the minimum value d_ξi(k) of the set as the classification result of the i-th test sample on the k-th nearest-neighbor classifier, where ξ ∈ {1, 2, ..., n};
(7c) Classify the K channels of each test sample according to steps (7a) and (7b), obtaining the classification results of each test sample on the K nearest-neighbor classifiers.
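For one channel, steps (7a)-(7b) amount to a 1-nearest-neighbor rule under the Euclidean distance. A minimal sketch with features stored as columns (shapes and names are our own assumptions):

```python
import numpy as np

def nn_classify(train_feats, train_labels, test_feats):
    """1-NN for one channel: d[j, i] = ||h_i_hat - h_j||_2, pick the minimum.

    train_feats: (dim, n) low-rank features h_j as columns.
    test_feats:  (dim, e) projection coefficient vectors as columns.
    """
    # Broadcast to an (n, e) matrix of pairwise Euclidean distances (7a).
    d = np.linalg.norm(train_feats[:, :, None] - test_feats[:, None, :], axis=0)
    return train_labels[np.argmin(d, axis=0)]  # label of nearest sample (7b)
```

Running this once per channel k gives each test sample K (possibly disagreeing) labels, which step (8) then fuses.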
7. The method according to claim 1, wherein step (8) is carried out as follows:
(8a) From the classification results of each test sample on the K nearest-neighbor classifiers obtained in step (7), count the number CN_k of correctly classified test samples on each nearest-neighbor classifier, and compute the recognition rate of each nearest-neighbor classifier:
o_k = CN_k / e,
where CN_k is the number of correctly classified test samples on the k-th nearest-neighbor classifier and o_k is the recognition rate of the k-th nearest-neighbor classifier;
(8b) From the recognition rates of the K nearest-neighbor classifiers obtained in step (8a), compute the linear weight coefficient α_k of each of the K nearest-neighbor classifiers:
α_k = o_k / (o_1 + o_2 + ... + o_K);
(8c) Using the linear weight coefficients α_k obtained in step (8b), compute the weighted distance between the K channel projection coefficient vectors ĥ_i(k) of each test sample and the K channel low-rank robust features h_j(k) of each training sample:
d_ji = α_1||ĥ_i(1) - h_j(1)||_2 + α_2||ĥ_i(2) - h_j(2)||_2 + ... + α_k||ĥ_i(k) - h_j(k)||_2 + ... + α_K||ĥ_i(K) - h_j(K)||_2,
obtaining the weighted distance set {d_1i, d_2i, ..., d_ji, ..., d_ni}, where j = 1, 2, ..., n and i ∈ {1, 2, ..., e};
(8d) From the weighted distance set {d_1i, d_2i, ..., d_ji, ..., d_ni} obtained in step (8c), take the class of the ω-th training sample corresponding to the minimum value d_ωi of the set as the classification result of the test sample, where ω ∈ {1, 2, ..., n}.
CN201710568578.0A 2017-07-13 2017-07-13 Face recognition method based on deep learning multi-layer non-negative matrix decomposition Active CN107451537B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710568578.0A CN107451537B (en) 2017-07-13 2017-07-13 Face recognition method based on deep learning multi-layer non-negative matrix decomposition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710568578.0A CN107451537B (en) 2017-07-13 2017-07-13 Face recognition method based on deep learning multi-layer non-negative matrix decomposition

Publications (2)

Publication Number Publication Date
CN107451537A true CN107451537A (en) 2017-12-08
CN107451537B CN107451537B (en) 2020-07-10

Family

ID=60488656

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710568578.0A Active CN107451537B (en) 2017-07-13 2017-07-13 Face recognition method based on deep learning multi-layer non-negative matrix decomposition

Country Status (1)

Country Link
CN (1) CN107451537B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108256569A (en) * 2018-01-12 2018-07-06 电子科技大学 Object identification method under complex background and computer technology used

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102254328A (en) * 2011-05-17 2011-11-23 西安电子科技大学 Video motion characteristic extracting method based on local sparse constraint non-negative matrix factorization
CN103345624A (en) * 2013-07-15 2013-10-09 武汉大学 Weighing characteristic face recognition method for multichannel pulse coupling neural network
US20150242180A1 (en) * 2014-02-21 2015-08-27 Adobe Systems Incorporated Non-negative Matrix Factorization Regularized by Recurrent Neural Networks for Audio Processing
CN105469034A (en) * 2015-11-17 2016-04-06 西安电子科技大学 Face recognition method based on weighted diagnostic sparseness constraint nonnegative matrix decomposition
CN106355138A (en) * 2016-08-18 2017-01-25 电子科技大学 Face recognition method based on deep learning and key features extraction


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
余化鹏 et al.: "Research on face recognition method based on deep transfer learning", Journal of Chengdu University (Natural Science Edition) *
同鸣: "Smooth non-negative matrix factorization method with orthogonal exponent constraint and its application", Systems Engineering and Electronics *
曲省卫: "Research on deep non-negative matrix factorization algorithms", China Master's Theses Full-text Database, Information Science and Technology *
熊培: "Research on face recognition method based on NMF and BP neural network", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108256569A (en) * 2018-01-12 2018-07-06 电子科技大学 Object identification method under complex background and computer technology used
CN108256569B (en) * 2018-01-12 2022-03-18 电子科技大学 Object identification method under complex background and used computer technology

Also Published As

Publication number Publication date
CN107451537B (en) 2020-07-10

Similar Documents

Publication Publication Date Title
Bashivan et al. Learning representations from EEG with deep recurrent-convolutional neural networks
Thai et al. Image classification using support vector machine and artificial neural network
CN103955707B (en) A kind of large nuber of images categorizing system based on depth level feature learning
CN104268593B (en) The face identification method of many rarefaction representations under a kind of Small Sample Size
CN110348399B (en) Hyperspectral intelligent classification method based on prototype learning mechanism and multidimensional residual error network
CN107506740A (en) A kind of Human bodys&#39; response method based on Three dimensional convolution neutral net and transfer learning model
CN109766858A (en) Three-dimensional convolution neural network hyperspectral image classification method combined with bilateral filtering
CN106529447A (en) Small-sample face recognition method
CN106682569A (en) Fast traffic signboard recognition method based on convolution neural network
CN104850837B (en) The recognition methods of handwriting
CN108121975A (en) A kind of face identification method combined initial data and generate data
CN104517274B (en) Human face portrait synthetic method based on greedy search
Alfarra et al. On the decision boundaries of neural networks: A tropical geometry perspective
CN104298974A (en) Human body behavior recognition method based on depth video sequence
CN110728324A (en) Depth complex value full convolution neural network-based polarimetric SAR image classification method
CN107301382A (en) The Activity recognition method of lower depth Non-negative Matrix Factorization is constrained based on Time Dependent
CN103164689A (en) Face recognition method and face recognition system
CN107480636A (en) Face identification method, system and storage medium based on core Non-negative Matrix Factorization
CN104715266B (en) The image characteristic extracting method being combined based on SRC DP with LDA
CN106529586A (en) Image classification method based on supplemented text characteristic
Dong et al. Feature extraction through contourlet subband clustering for texture classification
CN108491863A (en) Color image processing method based on Non-negative Matrix Factorization and convolutional neural networks
CN106570183A (en) Color picture retrieval and classification method
Gao et al. A novel face feature descriptor using adaptively weighted extended LBP pyramid
Thakur et al. Machine learning based saliency algorithm for image forgery classification and localization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant