CN107085700A - Face recognition method combining sparse representation with single-hidden-layer neural network technology - Google Patents


Info

Publication number: CN107085700A
Application number: CN201710028035.XA
Authority: CN (China)
Prior art keywords: face
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 梁栋, 屈磊, 谭守标, 唐俊
Current and Original Assignee: Anhui University (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by Anhui University
Priority to CN201710028035.XA
Publication of CN107085700A


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2136 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on sparsity criteria, e.g. with an overcomplete basis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a face recognition method combining sparse representation with single-hidden-layer neural network technology, which addresses the prior-art defects of long recognition time or unstable classification performance. The method comprises the following steps: face image acquisition and detection, determining the facial region in a test frame image extracted from a video; image preprocessing, removing illumination or noise interference from the test frame image; feature extraction, extracting facial features from the test frame image; classification and recognition, matching the extracted facial feature information against the feature templates stored in a database to obtain the final classification result. By combining sparse representation with a single-hidden-layer feedforward neural network, the invention achieves fast recognition speed while maintaining good classification performance.

Description

Face recognition method combining sparse representation with single-hidden-layer neural network technology
Technical field
The present invention relates to the technical field of image recognition, and specifically to a face recognition method combining sparse representation with single-hidden-layer neural network technology.
Background art
As one of the most successful applications in fields such as image processing and pattern recognition, face recognition has attracted wide attention because it requires no cooperation from the identified subject, supports covert operation at a distance, and offers a user-friendly identification process. Beyond its purely scientific significance, it also has many applications in commerce and law enforcement, such as surveillance, security, communication, and human-computer interaction. Over thirty years of research, a variety of face recognition methods have been proposed by researchers.
With the rise of compressive sensing theory, sparse representation, its core technique, can not only reduce the cost of data analysis and processing but also improve data compression efficiency. Methods based on sparse representation have therefore received wide attention from researchers thanks to their excellent classification performance and robustness to noise and occlusion, and many researchers have devoted themselves to face recognition based on sparse representation, improving the accuracy of face recognition technology; however, such methods are often time-consuming.
Compared with traditional neural network training methods, the single-hidden-layer feedforward neural network eliminates the redundant and complicated process of iteratively tuning parameters and pursues the best generalization performance at the fastest learning speed; however, its classification performance is rather unstable.
Therefore, how to combine sparse representation with a single-hidden-layer feedforward neural network and exploit the advantages of each to design a face recognition method has become a technical problem in urgent need of a solution.
Summary of the invention
The purpose of the present invention is to solve the prior-art defects of time-consuming recognition or unstable classification performance by providing a face recognition method combining sparse representation with single-hidden-layer neural network technology.
To achieve this goal, the technical scheme of the present invention is as follows:
A face recognition method combining sparse representation with single-hidden-layer neural network technology, comprising the following steps:
Face image acquisition and detection: determine the facial region in a test frame image extracted from a video;
Image preprocessing: remove illumination or noise interference from the test frame image;
Feature extraction: extract facial features from the test frame image;
Classification and recognition: match the extracted facial feature information against the feature templates stored in a database to obtain the final classification result.
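As an illustrative sketch only (not part of the claimed method), the four steps above can be outlined as a minimal pipeline. Everything below is a placeholder assumption: a fixed central crop stands in for a real face detector, mean/variance normalization for the preprocessing, plain flattening for the feature, and nearest-template matching for the classifier described later.

```python
import numpy as np

def detect_face(frame):
    """Step 1 (placeholder): locate the facial region in a frame.
    A real system would use a face detector; here we crop a fixed
    central window purely for illustration."""
    h, w = frame.shape
    return frame[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]

def preprocess(face):
    """Step 2: suppress illumination variation by normalizing the
    pixel intensities to zero mean and unit variance."""
    face = face.astype(float)
    return (face - face.mean()) / (face.std() + 1e-8)

def extract_features(face):
    """Step 3: flatten the normalized face into a feature vector."""
    return face.ravel()

def classify(features, templates, labels):
    """Step 4: nearest-template matching as a stand-in for the
    sparse-representation + network classifier described later."""
    dists = [np.linalg.norm(features - t) for t in templates]
    return labels[int(np.argmin(dists))]
```

Each stage can be swapped out independently; the patent's contribution replaces the classifier stage.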
The classification and recognition step comprises the following steps:
performing sample training on the training set;
performing test-sample classification on the facial feature information of the test frame image.
The sample training on the training set comprises the following steps:
Calculate the sparse representation coefficients x_i and use them as input samples of the single-hidden-layer neural network;
Randomly generate the input weights w_i and biases b_i;
Calculate the hidden-layer output matrix H;
Calculate the optimal output weights:
\hat{\beta} = H^{+}T,
where H^{+} = (H^{T}H)^{-1}H^{T},
\hat{\beta} denotes the optimal output weights, and H^{+} is the Moore-Penrose generalized inverse of the hidden-layer output matrix H;
Output the optimal output weights \hat{\beta} of the single-hidden-layer neural network.
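The training steps above amount to a single closed-form least-squares solve. A minimal numpy sketch, assuming a sigmoid activation and uniform random weights in [0, 1] (function and variable names are illustrative, not from the patent):

```python
import numpy as np

def elm_train(X, T, L, rng=None):
    """Train a single-hidden-layer network in closed form:
    random input weights w_i and biases b_i, sigmoid activation,
    then optimal output weights beta = pinv(H) @ T."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    W = rng.uniform(0.0, 1.0, size=(d, L))   # input weights, one column per hidden node
    b = rng.uniform(0.0, 1.0, size=L)        # hidden-node biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # hidden-layer output matrix (N x L)
    beta = np.linalg.pinv(H) @ T             # Moore-Penrose solve of H beta = T
    return W, b, beta
```

Because no iterative weight tuning is needed, training cost is dominated by one pseudoinverse, which is the speed advantage the description attributes to this network.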
The calculation of the sparse representation coefficients x_i comprises the following steps:
Obtain the training set A from the database; A contains c face images of n different objects in total,
where the c images are divided into n groups, each object has n_i face sample images, n is the number of objects, and n_i is the number of sample images of each object;
Define the i-th group of face images in A as A_i:
A_i = [a_{i,1}, a_{i,2}, \ldots, a_{i,n_i}] \in R^{D \times n_i},
where a_{i,j} \in R^{D \times 1} is the D-dimensional column vector formed from the j-th face sample image of the i-th group (j = 1, 2, \ldots, n_i);
Concatenate the c training sample images of the n groups in order, c = n_1 + n_2 + \ldots + n_n,
to form the basis or overcomplete dictionary A:
A = [A_1, A_2, \ldots, A_n];
Assume the image to be recognized, y, belongs to the i-th group of objects, so y can be linearly represented by the face sample images in A_i:
y = \sum_{j=1}^{n_i} x_{i,j} a_{i,j} = A_i x_i,
where x_i = [x_{i,1}, x_{i,2}, \ldots, x_{i,n_i}]^T is the representation coefficient of y on A_i;
The linear representation of y over all training samples is:
y = Ax \in R^D,
where the sparse coefficient vector is x = [0, \ldots, 0, x_{i,1}, x_{i,2}, \ldots, x_{i,n_i}, 0, \ldots, 0]^T;
Solve the sparse coefficient vector x under the grouping and locality-sensitive constraints, expressed as the minimization problem:
\min_{x \in R^D} \lambda \|p \otimes x\|_2 + \sum_{i=1}^{n} \|A_i x_i\|_2 \quad \text{s.t.} \quad \|y - Ax\|_2 < \varepsilon,
where \lambda is the regularization parameter balancing local sensitivity and group sparsity, \otimes denotes element-wise multiplication,
and p \in R^{n \times 1} is the locality constraint vector, which constrains the similarity between the test sample and the training samples around it:
p_i = \sqrt{\exp(d_k(y, a_i)/\eta)},
where \eta is a positive constant parameter and d_k(y, a_i) is the kernel Euclidean distance:
d_k(y, a_i) = \sqrt{k(y, y) - 2k(y, a_i) + k(a_i, a_i)},
with the Gaussian kernel:
k(y, a_i) = \exp\!\left(\frac{-|y - a_i|^2}{2\sigma^2}\right),
where \sigma is the standard deviation parameter of the Gaussian kernel.
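The locality constraint vector p follows directly from the formulas above; a sketch under those definitions (the parameter values in the usage below are illustrative):

```python
import numpy as np

def gaussian_kernel(u, v, sigma):
    """k(u, v) = exp(-|u - v|^2 / (2 sigma^2))."""
    return np.exp(-np.sum((u - v) ** 2) / (2.0 * sigma ** 2))

def kernel_distance(y, a, sigma):
    """Kernel Euclidean distance d_k(y, a) = sqrt(k(y,y) - 2k(y,a) + k(a,a))."""
    val = (gaussian_kernel(y, y, sigma) - 2.0 * gaussian_kernel(y, a, sigma)
           + gaussian_kernel(a, a, sigma))
    return np.sqrt(max(val, 0.0))

def locality_vector(y, atoms, eta, sigma):
    """p_i = sqrt(exp(d_k(y, a_i) / eta)): far-away atoms receive large
    penalties, so the sparse code favors training samples near y."""
    return np.array([np.sqrt(np.exp(kernel_distance(y, a, sigma) / eta))
                     for a in atoms])
```

An atom identical to y gets p_i = 1 (no penalty), and p_i grows monotonically with kernel distance, which is how the locality term steers the minimization toward nearby training samples.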
The calculation of the hidden-layer output matrix H comprises the following steps:
Let \{(x_i, t_i) \mid x_i \in R^d, t_i \in R^m, i = 1, \ldots, N\} be a training set of N distinct samples,
where x_i = (x_{i1}, x_{i2}, \ldots, x_{id})^T is an input sample and t_i = (t_{i1}, t_{i2}, \ldots, t_{im})^T is its expected target output label vector;
The unified model of a single-hidden-layer neural network with L hidden nodes (generally L << N) and activation function g(w_i, b_i, x) is expressed as:
\sum_{i=1}^{L} \beta_i g_i(x_j) = \sum_{i=1}^{L} \beta_i g_i(w_i \cdot x_j + b_i) = t_j, \quad j = 1, 2, \ldots, N,
where w_i = (w_{i1}, w_{i2}, \ldots, w_{id})^T is the input weight vector connecting the i-th hidden node with the input layer, b_i is the bias of the i-th hidden node, \beta_i = (\beta_{i1}, \beta_{i2}, \ldots, \beta_{im})^T is the output weight vector connecting the i-th hidden node with the output layer, w_i \cdot x_j denotes the inner product of w_i and x_j, and g(w_i, b_i, x) is a sigmoid or RBF function;
The unified model of the single-hidden-layer neural network can be written in matrix form:
H\beta = T,
where H \in R^{N \times L} with entries H_{ji} = g(w_i \cdot x_j + b_i) is the hidden-layer output matrix of the neural network, \beta = [\beta_1^T; \ldots; \beta_L^T] \in R^{L \times m}, and T = [t_1^T; \ldots; t_N^T] \in R^{N \times m}; the i-th column of H is the output vector of the i-th hidden node for the inputs x_1, x_2, \ldots, x_N.
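The structure of H can be made concrete by building it entry by entry, exactly as the definition above states: one row per training sample, one column per hidden node. A sketch assuming a sigmoid activation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hidden_layer_matrix(X, W, b):
    """Build H entry by entry: H[j, i] = g(w_i . x_j + b_i), so the
    i-th column of H is the i-th hidden node's output over all inputs."""
    N, L = X.shape[0], W.shape[1]
    H = np.empty((N, L))
    for j in range(N):        # one row per training sample x_j
        for i in range(L):    # one column per hidden node
            H[j, i] = sigmoid(W[:, i] @ X[j] + b[i])
    return H
```

The explicit double loop is for clarity; it is equivalent to the vectorized form sigmoid(X @ W + b), which a practical implementation would use.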
The test-sample classification of the facial feature information of the test frame image comprises the following steps:
Input the test sample y and the optimal output weights \hat{\beta} of the trained single-hidden-layer neural network;
Perform sparse coding on y using the minimization problem for the sparse coefficient vector x, obtaining the sparse representation coefficients \hat{x} corresponding to y;
Use \hat{x} as the input sample of the single-hidden-layer neural network and classify with the network, obtaining the final classification result by the minimal-residual criterion;
Output the class label t of the classification result.
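The residual formula itself is not reproduced in this text; a common reading, and an assumption here, is to compare the network output for \hat{x} against each class's one-hot target label vector and return the class with the smallest residual:

```python
import numpy as np

def elm_classify(x_hat, W, b, beta, targets):
    """Feed the sparse code x_hat through the trained network and pick
    the class whose target label vector t_i has the smallest residual
    ||output - t_i||_2 (minimal-residual criterion, assumed reading)."""
    h = 1.0 / (1.0 + np.exp(-(x_hat @ W + b)))   # hidden-layer response
    out = h @ beta                               # network output, length m
    residuals = np.linalg.norm(targets - out, axis=1)
    return int(np.argmin(residuals))
```

With one-hot targets this is equivalent to picking the class whose target vector lies nearest the network's output in Euclidean distance.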
Beneficial effects
Compared with the prior art, the face recognition method of the present invention combines sparse representation with a single-hidden-layer feedforward neural network, achieving fast recognition speed while maintaining good classification performance.
Under a face recognition framework, the present invention uses sparse representation to obtain discriminative sparse representation coefficients of face images, applies the double constraints of local sensitivity and group sparsity during sparse coding, and uses a single-hidden-layer neural network to classify and recognize the face image samples effectively. The invention remedies the shortcoming that a single-hidden-layer neural network, despite its fast learning speed, is sensitive to noise and therefore less accurate when applied to classification, yielding a clear improvement over the recognition performance of the single-hidden-layer neural network alone. Conversely, although sparse representation methods classify well and are robust to illumination, noise, and the like, they are time-consuming for image classification; the invention also effectively mitigates this problem, substantially reducing classification time.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the present invention;
Fig. 2 is the algorithm logic diagram of a prior-art single-hidden-layer neural network.
Embodiment
For a better understanding of the structural features of the present invention and the effects achieved, preferred embodiments are described in detail below in conjunction with the accompanying drawings:
Face recognition compares the facial features to be identified with stored face feature templates and judges the identity of the face according to the degree of similarity between the two. As shown in Fig. 1, the face recognition method of the present invention, combining sparse representation with single-hidden-layer neural network technology, comprises the following steps:
Step 1: face image acquisition and detection. Using conventional methods, determine the facial region in a test frame image extracted from a video. Obtain the video, split it into frames, and locate and delimit the approximate facial region in each frame image; that is, perform face image detection, determining the approximate position of the face in the test image or in the frame images extracted from the video. In video, multiple frames are generally needed to track and detect the face region.
Step 2: image preprocessing. Using conventional methods, remove illumination or noise interference from the test frame image. This step eliminates illumination or noise interference and yields an ideal, standardized face image, providing a strong guarantee for extracting robust facial features later.
Step 3: feature extraction. Using conventional methods, extract facial features from the test frame image. Also called face characterization, this step targets certain features of the face and is the process of modeling the face with features.
Step 4: classification and recognition. Match the extracted facial feature information against the feature templates stored in the database to obtain the final classification result. The extracted facial feature information is searched and matched against the stored feature templates; a threshold can be set so that when the similarity exceeds it, the matched result is output. Here, the final classification result is obtained by the minimal-residual criterion, i.e., the method of least residual. The specific steps are as follows:
(1) Training process: perform sample training on the training set. Perform sparse coding on the training sample set under the double constraints of local sensitivity and group sparsity, use the resulting sparse representation coefficients of the training samples as input samples of the single-hidden-layer neural network, and substitute them into the network for training. During network training, the input weights and the biases in the activation function are all chosen at random, and the optimal learning performance of the network is achieved by continually adjusting the number of hidden nodes, i.e., by obtaining the optimal output weights. It comprises the following steps:
A. Calculate the sparse representation coefficients x_i and use them as input samples of the single-hidden-layer neural network.
Here, a group-sparse representation with internal structure is considered first. Compared with the data sparsity of plain sparse coding, data locality better constrains similarity among data; the present invention therefore imposes the double constraints of local sensitivity and group sparsity on the sparse coding, so that the internal structural information of the test sample and the training dictionary is fully exploited and more effective sparse representation coefficients are obtained.
First, obtain the training set A from the database; A contains c face images of n different objects in total, where the c images are divided into n groups and each object has n_i face sample images. Because one person may have face images from multiple different angles, i.e., one person corresponds to multiple face samples, n is the number of objects (the number of people or faces), and n_i is the number of sample images of each object (the sample images corresponding to each person).
Define the i-th group of face images in A as A_i:
A_i = [a_{i,1}, a_{i,2}, \ldots, a_{i,n_i}] \in R^{D \times n_i},
where a_{i,j} \in R^{D \times 1} is the D-dimensional column vector formed from the j-th face sample image of the i-th group (j = 1, 2, \ldots, n_i); this D-dimensional vector is obtained by concatenating the columns of pixel brightness values of the face sample image in order.
Second, concatenate the c training sample images of the n groups in order, c = n_1 + n_2 + \ldots + n_n, to form the basis or overcomplete dictionary A; with an overcomplete dictionary, any vector of the same dimension can be linearly represented by the atoms in the dictionary. A_1 contains n_1 samples, A_2 contains n_2 samples, and likewise A_n contains n_n samples:
A = [A_1, A_2, \ldots, A_n].
Third, assume the image to be recognized, y, belongs to the i-th group of objects, so y can be linearly represented by the face sample images in A_i:
y = \sum_{j=1}^{n_i} x_{i,j} a_{i,j} = A_i x_i,
where x_i = [x_{i,1}, x_{i,2}, \ldots, x_{i,n_i}]^T is the representation coefficient of y on A_i. This x_i only illustrates that y can be fully linearly represented by the i-th group dictionary A_i; it need not be solved here, and what is actually solved is the following x.
The linear representation of y over all training samples is then:
y = Ax \in R^D,
where the sparse coefficient vector is x = [0, \ldots, 0, x_{i,1}, x_{i,2}, \ldots, x_{i,n_i}, 0, \ldots, 0]^T.
Here it was assumed above that y belongs to the i-th group, so y can be linearly represented by the i-th group dictionary alone; because A is composed of the groups A = [A_1, A_2, \ldots, A_n], the nonzero part of x is the part corresponding to A_i in A, and the other parts are 0.
Finally, solve the sparse coefficient vector x under the grouping and locality-sensitive constraints, which can be expressed as the minimization problem:
\min_{x \in R^D} \lambda \|p \otimes x\|_2 + \sum_{i=1}^{n} \|A_i x_i\|_2 \quad \text{s.t.} \quad \|y - Ax\|_2 < \varepsilon,
where \lambda is the regularization parameter balancing local sensitivity and group sparsity, \otimes denotes element-wise multiplication, and p \in R^{n \times 1} is the locality constraint vector, which constrains the similarity between the test sample and the training samples around it:
p_i = \sqrt{\exp(d_k(y, a_i)/\eta)},
where \eta is a positive constant parameter and d_k(y, a_i) is the kernel Euclidean distance:
d_k(y, a_i) = \sqrt{k(y, y) - 2k(y, a_i) + k(a_i, a_i)},
with the Gaussian kernel:
k(y, a_i) = \exp\!\left(\frac{-|y - a_i|^2}{2\sigma^2}\right),
where \sigma is the standard deviation parameter of the Gaussian kernel.
B. Randomly generate the input weights w_i and biases b_i; w_i and b_i are initialized with random values, usually white-noise random values between 0 and 1.
C. Calculate the hidden-layer output matrix H. The calculation steps are as follows:
a) Let \{(x_i, t_i) \mid x_i \in R^d, t_i \in R^m, i = 1, \ldots, N\} be a training set of N distinct samples,
where x_i = (x_{i1}, x_{i2}, \ldots, x_{id})^T is an input sample and t_i = (t_{i1}, t_{i2}, \ldots, t_{im})^T is its expected target output label vector.
b) As shown in Fig. 2, the unified model of a single-hidden-layer neural network with L hidden nodes (generally L << N) and activation function g(w_i, b_i, x) is expressed as:
\sum_{i=1}^{L} \beta_i g_i(x_j) = \sum_{i=1}^{L} \beta_i g_i(w_i \cdot x_j + b_i) = t_j, \quad j = 1, 2, \ldots, N,
where w_i = (w_{i1}, w_{i2}, \ldots, w_{id})^T is the input weight vector connecting the i-th hidden node with the input layer, b_i is the bias of the i-th hidden node, \beta_i = (\beta_{i1}, \beta_{i2}, \ldots, \beta_{im})^T is the output weight vector connecting the i-th hidden node with the output layer, w_i \cdot x_j denotes the inner product of w_i and x_j, and g(w_i, b_i, x) is a sigmoid or RBF function.
c) The unified model of the single-hidden-layer neural network can be written in matrix form:
H\beta = T,
where H \in R^{N \times L} with entries H_{ji} = g(w_i \cdot x_j + b_i) is the hidden-layer output matrix of the neural network; the i-th column of H is the output vector of the i-th hidden node for the inputs x_1, x_2, \ldots, x_N.
D. Calculate the optimal output weights:
\hat{\beta} = H^{+}T, \quad H^{+} = (H^{T}H)^{-1}H^{T},
where \hat{\beta} denotes the optimal output weights and H^{+} is the Moore-Penrose generalized inverse of the hidden-layer output matrix H.
E. Output the optimal output weights \hat{\beta} of the single-hidden-layer neural network.
(2) Test process: perform test-sample classification on the facial feature information of the test frame image. Perform sparse coding on the test sample under the double constraints of local sensitivity and group sparsity to obtain the sparse representation coefficients corresponding to the test sample, and use them as the input sample of the single-hidden-layer neural network; meanwhile, substitute into the network the optimal output weights \hat{\beta} obtained by training the training set in the sample training step, and obtain the class label corresponding to the test sample by minimal-error discrimination, i.e., the final classification result, completing the recognition of the face. The specific steps are as follows:
A. Input the test sample y and the optimal output weights \hat{\beta} of the trained single-hidden-layer neural network.
B. Perform sparse coding on y using the minimization problem for the sparse coefficient vector x, obtaining the sparse representation coefficients \hat{x} corresponding to y; that is, perform sparse coding on the test sample y under the double constraints of local sensitivity and group sparsity to obtain the sparse representation coefficients \hat{x} corresponding to the test sample y.
C. Use \hat{x} as the input sample of the single-hidden-layer neural network and classify with the network, obtaining the final classification result by the minimal-residual criterion.
D. Output the class label t of the classification result, completing the classification and identifying the different face from the test sample y.
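Putting the training and test processes together, the flow can be sketched end to end. For brevity this sketch (an illustration, not the patent's exact method) feeds raw feature vectors into the network where the patent would feed locality-and-group-constrained sparse representation coefficients; the closed-form training and minimal-residual classification follow the steps above.

```python
import numpy as np

def train_and_recognize(X_train, labels, X_test, L=32, seed=0):
    """End-to-end sketch of the train/test flow: closed-form training
    (random w_i, b_i; beta = pinv(H) T), then minimal-residual
    classification of each test sample.  The inputs here are plain
    feature vectors; the patent would feed sparse representation
    coefficients instead."""
    rng = np.random.default_rng(seed)
    classes = sorted(set(labels))
    # one-hot target label vectors t_i, one row per training sample
    T = np.array([[1.0 if c == lab else 0.0 for c in classes] for lab in labels])
    W = rng.uniform(size=(X_train.shape[1], L))
    b = rng.uniform(size=L)
    g = lambda Z: 1.0 / (1.0 + np.exp(-Z))            # sigmoid activation
    beta = np.linalg.pinv(g(X_train @ W + b)) @ T     # optimal output weights
    out = g(X_test @ W + b) @ beta                    # network outputs
    # minimal residual against each one-hot class target
    targets = np.eye(len(classes))
    return [classes[int(np.argmin(np.linalg.norm(targets - o, axis=1)))]
            for o in out]
```

The sparse-coding stage would slot in before both calls, replacing X_train and X_test with their sparse codes over the dictionary A.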
The present invention remedies the deficiency that the single-hidden-layer neural network, despite its fast learning speed, is noise-sensitive and thus less accurate when applied to classification, and the deficiency that sparse representation methods, although they classify well and are robust to illumination, noise, and the like, are time-consuming. The method provided by the invention achieves fast recognition speed while maintaining good classification performance.
The general principles, principal features, and advantages of the present invention have been shown and described above. Those skilled in the art should appreciate that the present invention is not limited to the above embodiments; the above embodiments and the description merely illustrate the principles of the invention, and various changes and improvements are possible without departing from the spirit and scope of the invention. Such changes and improvements fall within the scope of the claimed invention, whose protection scope is defined by the appended claims and their equivalents.

Claims (6)

1. A face recognition method combining sparse representation with single-hidden-layer neural network technology, characterized by comprising the following steps:
11) face image acquisition and detection: determining the facial region in a test frame image extracted from a video;
12) image preprocessing: removing illumination or noise interference from the test frame image;
13) feature extraction: extracting facial features from the test frame image;
14) classification and recognition: matching the extracted facial feature information against the feature templates stored in a database to obtain the final classification result.
2. The face recognition method combining sparse representation with single-hidden-layer neural network technology according to claim 1, characterized in that the classification and recognition comprises the following steps:
21) performing sample training on the training set;
22) performing test-sample classification on the facial feature information of the test frame image.
3. The face recognition method combining sparse representation with single-hidden-layer neural network technology according to claim 2, characterized in that the sample training on the training set comprises the following steps:
31) calculating the sparse representation coefficients x_i and using them as input samples of the single-hidden-layer neural network;
32) randomly generating the input weights w_i and biases b_i;
33) calculating the hidden-layer output matrix H;
34) calculating the optimal output weights:
\hat{\beta} = H^{+}T, \quad H^{+} = (H^{T}H)^{-1}H^{T},
where \hat{\beta} denotes the optimal output weights and H^{+} is the Moore-Penrose generalized inverse of the hidden-layer output matrix H;
35) outputting the optimal output weights \hat{\beta} of the single-hidden-layer neural network.
4. a kind of recognition of face being combined based on rarefaction representation with neural networks with single hidden layer technology according to claim 3 Method, it is characterised in that described calculates rarefaction representation coefficient xiComprise the following steps:
41) training set A is obtained from database, training set A includes common c facial images of n different objects,
Wherein:C images are divided into n groups, and each object includes niIndividual face sample image, n is object number, niFor each object Sample image quantity;
I-th group of facial image in training set A is defined as Ai,
<mrow> <msub> <mi>A</mi> <mi>i</mi> </msub> <mo>=</mo> <mo>&amp;lsqb;</mo> <msub> <mi>a</mi> <mrow> <mi>i</mi> <mo>,</mo> <mn>1</mn> </mrow> </msub> <mo>,</mo> <msub> <mi>a</mi> <mrow> <mi>i</mi> <mn>2</mn> </mrow> </msub> <mo>,</mo> <mo>...</mo> <mo>,</mo> <msub> <mi>a</mi> <mrow> <mi>i</mi> <mo>,</mo> <msub> <mi>n</mi> <mi>i</mi> </msub> </mrow> </msub> <mo>&amp;rsqb;</mo> <mo>&amp;Element;</mo> <msup> <mi>R</mi> <mrow> <mi>D</mi> <mo>&amp;times;</mo> <msub> <mi>n</mi> <mi>i</mi> </msub> </mrow> </msup> <mo>,</mo> </mrow>
Wherein, ai,j∈RD×1Represent in i-th group j-th of face sample image (j=1,2 ..., ni) constituted D dimension row to Amount;
42) c training sample image of n group is linked successively, c=n1+n2+...+nn,
Base or excessively complete dictionary A are constituted,
A=[A1,A2,…An];
43) set images to be recognized y and belong to the i-th group objects, use AiIn face sample image linear expression y,
<mrow> <mi>y</mi> <mo>=</mo> <munderover> <mo>&amp;Sigma;</mo> <mrow> <mi>j</mi> <mo>=</mo> <mn>1</mn> </mrow> <msub> <mi>n</mi> <mi>i</mi> </msub> </munderover> <msub> <mi>x</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>j</mi> </mrow> </msub> <msub> <mi>a</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>j</mi> </mrow> </msub> <mo>=</mo> <msub> <mi>A</mi> <mi>i</mi> </msub> <msub> <mi>x</mi> <mi>i</mi> </msub> <mo>,</mo> </mrow>
Wherein, xiIt is y in AiOn expression coefficient,
Images to be recognized y is as follows in the linear expression of all training samples:
Y=Ax ∈ RD
Wherein, sparse coefficient vector
44) sparse coefficient vector x is solved under packet and local sensitive constraint, it is expressed as minimization problem:
<mfenced open = "" close = ""> <mtable> <mtr> <mtd> <mrow> <munder> <mrow> <mi>m</mi> <mi>i</mi> <mi>n</mi> </mrow> <mrow> <mi>x</mi> <mo>&amp;Element;</mo> <msup> <mi>R</mi> <mi>D</mi> </msup> </mrow> </munder> <mi>&amp;lambda;</mi> <mo>|</mo> <mo>|</mo> <mi>p</mi> <mo>&amp;CircleTimes;</mo> <mi>x</mi> <mo>|</mo> <msub> <mo>|</mo> <mn>2</mn> </msub> <mo>+</mo> <munderover> <mo>&amp;Sigma;</mo> <mrow> <mi>i</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>n</mi> </munderover> <mo>|</mo> <mo>|</mo> <msub> <mi>A</mi> <mi>i</mi> </msub> <msub> <mi>x</mi> <mi>i</mi> </msub> <mo>|</mo> <msub> <mo>|</mo> <mn>2</mn> </msub> </mrow> </mtd> <mtd> <mrow> <mi>s</mi> <mo>.</mo> <mi>t</mi> <mo>.</mo> </mrow> </mtd> <mtd> <mrow> <mo>|</mo> <mo>|</mo> <mi>y</mi> <mo>-</mo> <mi>A</mi> <mi>x</mi> <mo>|</mo> <msub> <mo>|</mo> <mn>2</mn> </msub> <mo>&lt;</mo> <mi>&amp;epsiv;</mi> </mrow> </mtd> </mtr> </mtable> </mfenced>
Wherein, λ is the regulation parameter for weighing local susceptibility and grouping sparsity,For point multiplication operation,
p∈Rn×1For local restriction vector, its similitude to test sample and around it between training sample enters row constraint, represents It is as follows:
<mrow> <msub> <mi>p</mi> <mi>i</mi> </msub> <mo>=</mo> <msqrt> <mrow> <mi>exp</mi> <mrow> <mo>(</mo> <msub> <mi>d</mi> <mi>k</mi> </msub> <mo>(</mo> <mrow> <mi>y</mi> <mo>,</mo> <msub> <mi>a</mi> <mi>i</mi> </msub> </mrow> <mo>)</mo> <mo>/</mo> <mi>&amp;eta;</mi> <mo>)</mo> </mrow> </mrow> </msqrt> <mo>,</mo> </mrow>
Wherein, η is normal number parameter, dk(y,ai) it is core Euclidean distance, it is expressed as follows:
<mrow> <msub> <mi>d</mi> <mi>k</mi> </msub> <mrow> <mo>(</mo> <mi>y</mi> <mo>,</mo> <msub> <mi>a</mi> <mi>i</mi> </msub> <mo>)</mo> </mrow> <mo>=</mo> <msqrt> <mrow> <mi>k</mi> <mrow> <mo>(</mo> <mi>y</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> <mo>-</mo> <mn>2</mn> <mi>k</mi> <mrow> <mo>(</mo> <mi>y</mi> <mo>,</mo> <msub> <mi>a</mi> <mi>i</mi> </msub> <mo>)</mo> </mrow> <mo>+</mo> <mi>k</mi> <mrow> <mo>(</mo> <msub> <mi>a</mi> <mi>i</mi> </msub> <mo>,</mo> <msub> <mi>a</mi> <mi>i</mi> </msub> <mo>)</mo> </mrow> </mrow> </msqrt> <mo>,</mo> </mrow>
where the Gaussian kernel function is as follows:
$$k(y, a_i) = \exp\left( \frac{ -\| y - a_i \|^2 }{ 2 \sigma^2 } \right)$$
and σ is the standard-deviation (bandwidth) parameter of the Gaussian kernel.
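As a concrete illustration, the locality-constraint vector p can be computed directly from the kernel definitions above. The sketch below is a minimal NumPy rendering; the function names, the column-per-sample convention for A, and the default parameter values are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def gaussian_kernel(u, v, sigma=1.0):
    """Gaussian kernel k(u, v) = exp(-||u - v||^2 / (2*sigma^2))."""
    return np.exp(-np.linalg.norm(u - v) ** 2 / (2.0 * sigma ** 2))

def kernel_distance(y, a_i, sigma=1.0):
    """Kernel Euclidean distance d_k(y, a_i) = sqrt(k(y,y) - 2k(y,a_i) + k(a_i,a_i))."""
    return np.sqrt(gaussian_kernel(y, y, sigma)
                   - 2.0 * gaussian_kernel(y, a_i, sigma)
                   + gaussian_kernel(a_i, a_i, sigma))

def locality_vector(y, A, eta=1.0, sigma=1.0):
    """Locality-constraint vector p with p_i = sqrt(exp(d_k(y, a_i) / eta)).

    A holds one training sample per column; y is the test sample.
    Entries grow with the kernel distance, so far-away training samples
    are penalized more heavily in the weighted sparse-coding objective.
    """
    n = A.shape[1]
    p = np.empty(n)
    for i in range(n):
        p[i] = np.sqrt(np.exp(kernel_distance(y, A[:, i], sigma) / eta))
    return p
```

Since k(y, y) = 1 for the Gaussian kernel, a training sample identical to y gives d_k = 0 and p_i = 1, the smallest possible penalty.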
5. The face recognition method combining sparse representation with the single-hidden-layer neural network technique according to claim 3, characterized in that computing the hidden-layer output matrix H comprises the following steps:
51) Let {(x_i, t_i) | x_i ∈ R^d, t_i ∈ R^m, i = 1, ..., N} be a training set containing N distinct samples, where x_i = (x_{i1}, x_{i2}, ..., x_{id})^T is an input sample and t_i = (t_{i1}, t_{i2}, ..., t_{im})^T is its expected target output label vector;
52) A single-hidden-layer neural network with L hidden nodes and activation function g(w_i, b_i, x) is expressed by the unified model:
$$\sum_{i=1}^{L} \beta_i g_i(x_j) = \sum_{i=1}^{L} \beta_i g_i(w_i \cdot x_j + b_i) = t_j, \qquad j = 1, 2, \ldots, N,$$
where w_i = (w_{i1}, w_{i2}, ..., w_{id})^T is the input weight vector connecting the input layer to the i-th hidden node, b_i is the bias of the i-th hidden node, β_i = (β_{i1}, β_{i2}, ..., β_{im})^T is the output weight vector connecting the i-th hidden node to the output layer, w_i · x_j denotes the inner product of w_i and x_j, and g(w_i, b_i, x) is a Sigmoid or RBF function;
53) The unified model of the single-hidden-layer neural network is written in matrix form:
H β=T
where
$$\beta = \begin{bmatrix} \beta_1^T \\ \beta_2^T \\ \vdots \\ \beta_L^T \end{bmatrix}_{L \times m}, \qquad T = \begin{bmatrix} t_1^T \\ t_2^T \\ \vdots \\ t_N^T \end{bmatrix}_{N \times m},$$
where H is the hidden-layer output matrix of the neural network; the i-th column of H is the output vector of the i-th hidden node for the inputs x_1, x_2, ..., x_N.
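The construction of H in steps 51)-53) amounts to one matrix multiplication followed by the activation function, and the output weights β can then be obtained in closed form. A minimal NumPy sketch; the Sigmoid choice and the pseudoinverse solve for β are assumptions in the spirit of extreme-learning-machine training, since the claim itself does not fix how β is computed:

```python
import numpy as np

def sigmoid(z):
    """Sigmoid activation applied element-wise."""
    return 1.0 / (1.0 + np.exp(-z))

def hidden_output_matrix(X, W, b):
    """Hidden-layer output matrix H of a single-hidden-layer network.

    X: (N, d) inputs, one sample per row.
    W: (L, d) input weights; row i connects the input layer to hidden node i.
    b: (L,)  hidden-node biases.
    Returns H with shape (N, L), where H[j, i] = g(w_i . x_j + b_i).
    """
    return sigmoid(X @ W.T + b)

def solve_output_weights(H, T):
    """Least-squares output weights: beta minimizing ||H beta - T||_F."""
    return np.linalg.pinv(H) @ T
```

With random input weights and L ≥ N hidden nodes, H generally has full row rank, so the pseudoinverse solution fits the targets exactly.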
6. The face recognition method combining sparse representation with the single-hidden-layer neural network technique according to claim 2, characterized in that classifying the test samples from the facial feature information of the test frame image comprises the following steps:
61) Input the test sample y and the optimal output weights $\hat{\beta}$ of the single-hidden-layer neural network obtained in the training stage;
62) Sparse-code y by minimizing over the sparse coefficient vector x, obtaining the sparse representation coefficients $\bar{x}$ of y;
63) Take $\bar{x}$ as the input sample of the single-hidden-layer neural network and classify it with that network, obtaining the final classification result by the minimum-residual criterion, whose formula is as follows:
$$\min_{t} \left\| H(\bar{x}) \hat{\beta} - t \right\|;$$
64) Output the classification result, namely the class label t.
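Steps 61)-64) reduce to one forward pass followed by a nearest-target search. A minimal sketch, assuming one-hot target vectors and a caller-supplied hidden-layer map H(·); both are illustrative assumptions, since the claim does not specify the target encoding:

```python
import numpy as np

def classify_min_residual(x_bar, H_of, beta_hat, class_labels):
    """Return the index of the label t minimizing ||H(x_bar) beta_hat - t||.

    x_bar:        sparse-representation coefficients of the test sample.
    H_of:         callable mapping an input vector to its hidden-layer row (1, L).
    beta_hat:     (L, m) trained output weights.
    class_labels: (C, m) candidate target vectors, one per row (e.g. one-hot).
    """
    out = H_of(x_bar) @ beta_hat                     # network output, shape (1, m)
    residuals = np.linalg.norm(class_labels - out, axis=1)
    return int(np.argmin(residuals))
```

With one-hot targets this is equivalent to taking the arg-max of the network output, but the residual form matches the criterion stated in step 63).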
CN201710028035.XA 2017-01-16 2017-01-16 A kind of face identification method being combined based on rarefaction representation with neural networks with single hidden layer technology Pending CN107085700A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710028035.XA CN107085700A (en) 2017-01-16 2017-01-16 A kind of face identification method being combined based on rarefaction representation with neural networks with single hidden layer technology

Publications (1)

Publication Number Publication Date
CN107085700A true CN107085700A (en) 2017-08-22

Family

ID=59614723

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710028035.XA Pending CN107085700A (en) 2017-01-16 2017-01-16 A kind of face identification method being combined based on rarefaction representation with neural networks with single hidden layer technology

Country Status (1)

Country Link
CN (1) CN107085700A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105760821A (en) * 2016-01-31 2016-07-13 中国石油大学(华东) Classification and aggregation sparse representation face identification method based on nuclear space
CN106066994A (en) * 2016-05-24 2016-11-02 北京工业大学 A kind of face identification method of the rarefaction representation differentiated based on Fisher
CN106250811A (en) * 2016-06-15 2016-12-21 南京工程学院 Unconfinement face identification method based on HOG feature rarefaction representation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SUN QIAN: "Research on Face Recognition Based on Local Sensitivity and Hybrid Sparse Representation", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110147818A (en) * 2019-04-11 2019-08-20 江苏大学 A method of the laser welding forming defect based on rarefaction representation predicts classification
CN110310663A (en) * 2019-05-16 2019-10-08 平安科技(深圳)有限公司 Words art detection method, device, equipment and computer readable storage medium in violation of rules and regulations
CN110866143A (en) * 2019-11-08 2020-03-06 山东师范大学 Audio scene classification method and system
CN110866143B (en) * 2019-11-08 2022-11-22 山东师范大学 Audio scene classification method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170822