CN103984922B - Face identification method based on sparse representation and shape restriction
- Publication number
- CN103984922B CN201410179522.2A
- Authority
- CN
- China
- Prior art keywords
- shape
- images
- recognized
- image
- training image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Image Analysis (AREA)
- Collating Specific Patterns (AREA)
Abstract
The invention discloses a face recognition method based on sparse representation and shape constraints, and belongs to the fields of image processing, computer vision and pattern recognition. The method comprises the following steps: firstly, marking the face position and shape of each image to be recognized; secondly, carrying out feature extraction on the image to be recognized on the basis of the initialized shape; thirdly, carrying out image matching on the basis of a shape model; fourthly, sparsely representing the texture features of the image to be recognized on the basis of the texture features of the training images to obtain the set of coefficients corresponding to the training images; fifthly, analyzing the obtained coefficients and taking the identity of the training image corresponding to the largest coefficient as the final recognition result. Compared with the prior art, the method is relatively robust to the selection of initial positions, can be applied to face recognition under facial shape changes, greatly improves the recognition accuracy and the range of application, and has good value for popularization and application.
Description
Technical field
The present invention relates to image processing, computer vision and pattern recognition, and specifically to a face recognition method based on sparse representation and shape constraints.
Background technology
Compared with fingerprint, retina and iris recognition, face recognition is a more natural and direct means of identification. It has become a research hotspot of next-generation biometric recognition and involves multiple disciplines such as image processing, computer vision, pattern recognition and neural networks. However, when the shape of the face changes, traditional face recognition methods struggle to remain effective.
A search of the prior art shows that current face recognition methods aimed at facial shape changes mainly work by transforming the shape. H. Mohammadzade and D. Hatzinakos published "Projection into Expression Subspaces for Face Recognition from Single Sample per Person" in IEEE Transactions on Affective Computing (vol. 4, no. 1, pp. 69-82, 2013). That paper uses expression subspaces to extract facial features that are insensitive to expression changes, thereby reducing the influence of expression on face recognition. However, the method places very high demands on face localization and requires accurate eye positions; furthermore, because it relies on a training set to obtain the expression subspaces, over-fitting can occur, i.e., the method fails when new samples differ from the training set. These two shortcomings limit the performance of the method.
In addition, A. Wagner et al. published "Toward a Practical Face Recognition System: Robust Alignment and Illumination by Sparse Representation" in IEEE Transactions on Pattern Analysis and Machine Intelligence (vol. 34, no. 2, pp. 372-386, 2012). That paper was the first to propose using sparse representation to solve the problem of inaccurate initial alignment. However, the method cannot handle changes in facial shape, which limits its range of application.
The patent document with publication number CN101667246A discloses a "face recognition method based on sparse representation", and the patent document with publication number CN101833654A discloses a "sparse representation face recognition method based on constrained sampling". These methods all focus on feature representation and achieve good recognition results only when the faces are aligned (for example, aligned according to the eyes).
So far, no face recognition method has been proposed that works when the localization is inaccurate and the facial shape changes.
Content of the invention
The technical task of the present invention is to address the above deficiencies of the prior art by providing a face recognition method based on sparse representation and shape constraints. The method can be applied to face recognition under facial shape changes and can greatly improve the accuracy and the range of application of face recognition.
The present invention first initializes the shape and position of the face and coarsely locates the face, obtaining the positions of organs such as the eyes. Secondly, the two images are accurately matched using a matching algorithm to which a facial shape constraint is added, eliminating the influence of shape changes on the matching algorithm. Then, features are extracted from the matched result images using triangulation, yielding texture features that are independent of shape and eliminating the influence of shape changes on recognition. Finally, the obtained texture features are classified using sparse representation theory, further eliminating the influence of shape changes on the texture features.
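The five steps can be summarized as the following minimal sketch (Python with NumPy); the callables initialize_shape, extract_texture, match_shape and solve_l1 are illustrative placeholders for the operations detailed in the steps below, not part of the original disclosure.

```python
import numpy as np

def recognize(test_image, train_images, train_identities,
              initialize_shape, extract_texture, match_shape, solve_l1):
    """Outline of the five steps; the four callables stand in for the
    shape-initialization, texture-extraction, matching and L1 solvers
    described in the following sections."""
    # Step 1: mark the face position and shape of the image to be recognized.
    test_shape = initialize_shape(test_image)
    # Step 2: shape-free texture feature vector of the image to be recognized.
    t = extract_texture(test_image, test_shape)
    # Step 3: match each training image and extract its shape-free texture.
    A = np.column_stack([
        extract_texture(img, match_shape(img, t)) for img in train_images
    ])
    # Step 4: sparse representation t ~ A @ alpha.
    alpha = solve_l1(A, t)
    # Step 5: identity of the training image with the largest coefficient.
    return train_identities[int(np.argmax(alpha))]
```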
Specifically, the technical task of the invention is achieved in the following manner: a face recognition method based on sparse representation and shape constraints, characterized in that the method comprises the following steps:
Step 1. Shape initialization
Mark the face position and shape in each image to be recognized;
Step 2. Feature extraction from the image to be recognized based on the initialized shape
Extract texture based on the shape of the image to be recognized to form the texture feature vector of the image to be recognized; this texture feature vector corresponds uniquely to the identity of the image to be recognized;
Step 3. Image matching based on the shape
Match the training images against the texture features to be recognized, obtaining the texture features of each training image; the texture feature vector of each training image corresponds uniquely to the identity of that training image;
Step 4. Based on the training image texture features obtained in step 3, sparsely represent the texture features of the image to be recognized obtained in step 2, obtaining the set of coefficients corresponding to the training images;
Step 5. Analyze the coefficients obtained in step 4, and take the identity of the training image corresponding to the largest coefficient as the final recognition result.
Furthermore, step 1 includes:
a. For an image to be recognized, extract facial feature points using a labeling method and perform the initialization;
b. Determine the shape model: let the shape of each image in the shape model be s; each image is expressed as a linear combination of an average shape s0 and n shape vectors.
The labeling method is preferably an active shape model or an active appearance model.
The concrete method of step 2 is: divide the face region of the image to be recognized using Delaunay triangulation, then map the texture features of the image to be recognized onto the average shape according to the point correspondences, obtaining texture features of the image to be recognized that are independent of shape.
Step 3 includes:
a. Based on the texture features of the image to be recognized obtained in step 2 and the shape model trained in step 1, perform image matching on each training image to obtain its shape;
b. Divide the face region of each training image using Delaunay triangulation, then map the texture features of the training image onto the average shape according to the point correspondences, obtaining texture features that are independent of shape and thereby realizing the feature representation of the training image.
The concrete method of step 4 is: express the image to be recognized as a linear combination of the training images using sparse representation theory, i.e.
T = Aα
where T is the image to be recognized, A is the matrix of training images, and α is the vector of coefficients corresponding to all the training images;
solving for the optimal α can be expressed as an L1-norm optimization problem, i.e.
min ||α||_1 subject to T = Aα
and the optimal solution is obtained by the Augmented Lagrange Multiplier algorithm.
The analysis method of step 5 is the maximum-coefficient method or the minimum-residual method.
Compared with the prior art, the face recognition method based on sparse representation and shape constraints of the present invention is relatively robust to the selection of the initial positions and does not need to recognize or transform expressions, which reduces errors and achieves a higher recognition rate.
Description of the drawings
Figure 1 is a schematic diagram of shape initialization in the embodiment;
Figure 2 is a schematic diagram of feature extraction in the embodiment;
Figure 3 is a diagram of shape-based image matching in the embodiment;
Figure 4 is a schematic diagram of the recognition method based on sparse representation in the embodiment.
Specific embodiment
The face recognition method based on sparse representation and shape constraints of the present invention is described in detail below with reference to the accompanying drawings and a specific embodiment.
Embodiment:
The face recognition method based on sparse representation and shape constraints of the present invention comprises the following steps:
The first step: shape initialization
1. For an image to be recognized, facial feature points such as the eyes, nose, mouth and face contour are extracted using the ASM method, as shown in Figure 1. This patent does not limit the number of facial feature points or the extraction method; the present embodiment uses an active shape model to perform the initialization.
2. Determination of the shape. Let the shape of each image in the shape model be s. Each image can be expressed as a linear combination of an average shape s0 and n shape vectors s_i with coefficients p_i:
s = s0 + Σ_{i=1..n} p_i s_i   (1)
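A minimal sketch of the shape model in equation (1), assuming the aligned training shapes are stored as flattened NumPy arrays of landmark coordinates; building the shape vectors by PCA is a standard choice for active shape models and is an assumption here, not a detail stated in the original text.

```python
import numpy as np

def build_shape_model(training_shapes, n_components):
    """training_shapes: (num_images, 2 * num_landmarks) array of aligned landmarks."""
    s0 = training_shapes.mean(axis=0)                      # average shape s0
    U, S, Vt = np.linalg.svd(training_shapes - s0, full_matrices=False)
    shape_vectors = Vt[:n_components]                      # s_1 ... s_n
    return s0, shape_vectors

def shape_from_parameters(s0, shape_vectors, p):
    """Equation (1): s = s0 + sum_i p_i * s_i."""
    return s0 + shape_vectors.T @ p
```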
The second step: feature extraction from the image to be recognized based on the initialized shape
The face region of the image to be recognized is divided using Delaunay triangulation, and the texture of the image to be recognized is then mapped onto the average shape according to the point correspondences, obtaining features that are independent of shape, as shown in Figure 2. The result of feature extraction for the image to be recognized is denoted T.
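A minimal sketch of this shape-normalization step, assuming scikit-image is available; PiecewiseAffineTransform triangulates the landmark correspondences internally, which matches the Delaunay-based description above, although the exact implementation used in the embodiment is not specified.

```python
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def shape_free_texture(image, landmarks, mean_shape, output_shape):
    """Warp the face texture from its own landmarks onto the mean shape.

    image:       grayscale face image as a 2D array
    landmarks:   (num_landmarks, 2) array of (x, y) points in `image`
                 (scikit-image column/row order)
    mean_shape:  (num_landmarks, 2) array of points in the mean-shape frame
    """
    tform = PiecewiseAffineTransform()
    # Map mean-shape coordinates (output) to image coordinates (input).
    tform.estimate(mean_shape, landmarks)
    warped = warp(image, tform, output_shape=output_shape)
    return warped.ravel()   # flatten into a texture feature vector
```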
The third step: image matching based on the shape
1. Based on the texture features obtained in the second step and the shape model trained in the first step, image matching is performed on each training image to obtain its shape, as shown in Figure 3.
Let the image to be recognized be T, the training image be A, the shape warp function be W, and the warp parameters be p. The objective function of the matching is
min_p Σ_x [T(x) − A(W(x; p))]²   (2)
The minimization of this objective function is an iterative process. First the objective is linearized with respect to an increment Δp of the parameters, i.e.
Σ_x [T(x) − A(W(x; p)) − SD(x) Δp]²   (3)
This is a least-squares problem with a closed-form solution:
Δp = H⁻¹ Σ_x SD(x)ᵀ [T(x) − A(W(x; p))]   (4)
where SD(x) is the Jacobian term, i.e.
SD(x) = ∇A (∂W/∂p)   (5)
Here ∇A is the gradient of the training image, ∂W/∂p is the derivative of the warp function with respect to the warp parameters, and H is the Gauss-Newton approximation of the Hessian matrix:
H = Σ_x SD(x)ᵀ SD(x)   (6)
Finally, the parameters p in (2) are updated as follows:
p ← p + Δp   (7)
The shape of each image can then be obtained through (1).
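A minimal sketch of the Gauss-Newton iteration in equations (2)-(7), assuming the warped training image and the steepest-descent images SD(x) = ∇A·∂W/∂p are provided by the caller; warp_training_image and steepest_descent_images are illustrative placeholders, not functions defined in the original text.

```python
import numpy as np

def gauss_newton_step(t, a_warped, SD):
    """One update of equations (4)-(6).

    t:        texture of the image to be recognized, flattened, shape (N,)
    a_warped: training image warped with the current parameters p, shape (N,)
    SD:       steepest-descent images SD(x), shape (N, n_params)
    """
    residual = t - a_warped                        # T(x) - A(W(x; p))
    H = SD.T @ SD                                  # equation (6)
    delta_p = np.linalg.solve(H, SD.T @ residual)  # equation (4)
    return delta_p

def fit_shape(t, warp_training_image, steepest_descent_images, p0,
              max_iters=50, tol=1e-6):
    """Iterate equation (7), p <- p + delta_p, until convergence."""
    p = p0.astype(float).copy()
    for _ in range(max_iters):
        a_warped = warp_training_image(p)          # placeholder helper
        SD = steepest_descent_images(p)            # placeholder helper
        delta_p = gauss_newton_step(t, a_warped, SD)
        p += delta_p                               # equation (7)
        if np.linalg.norm(delta_p) < tol:
            break
    return p
```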
2. Feature representation of the training images. As with the image to be recognized, the face region of each training image is divided using Delaunay triangulation, and the texture features of the training image are then mapped onto the average shape according to the point correspondences, obtaining features that are independent of shape. The features obtained from the training image set can be expressed as
A = [a₁, a₂, …, a_m]   (8)
where a_i is the shape-free texture feature vector of the i-th training image.
The fourth step: the image to be recognized is expressed as a linear combination of the training images using sparse representation theory, i.e.
T = Aα   (9)
where α is the vector of coefficients corresponding to all the training images. Solving for the optimal α can be expressed as an L1-norm optimization problem, i.e.
min ||α||_1 subject to T = Aα   (10)
and the optimal solution can be obtained by the Augmented Lagrange Multiplier algorithm.
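A minimal sketch of the L1 minimization in equation (10), written here as a standard augmented-Lagrangian (ADMM) basis-pursuit solver; the specific ALM variant used in the embodiment is not stated, so the update rules below are an assumption. The sketch also assumes the underdetermined setting typical of sparse representation, i.e. A has more columns (training images) than rows (feature dimensions after reduction).

```python
import numpy as np

def soft_threshold(v, kappa):
    """Elementwise shrinkage operator used in the z-update."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def basis_pursuit_admm(A, t, rho=1.0, max_iters=1000, tol=1e-6):
    """Solve min ||alpha||_1 subject to A @ alpha = t (equation (10)) with ADMM."""
    n = A.shape[1]
    pinv_A = np.linalg.pinv(A)      # used to project onto {alpha : A @ alpha = t}
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(max_iters):
        # Projection of (z - u) onto the affine constraint set.
        v = z - u
        alpha = v - pinv_A @ (A @ v - t)
        # Shrinkage step enforces sparsity.
        z_new = soft_threshold(alpha + u, 1.0 / rho)
        u = u + alpha - z_new
        if np.linalg.norm(z_new - z) < tol:
            z = z_new
            break
        z = z_new
    return z
```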
The fifth step: the identity of the training image corresponding to the largest coefficient in the coefficient vector is taken as the identity of the image to be recognized, i.e.
identity(T) = identity(argmax_i α_i)   (11)
The process is shown in Figure 4.
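A minimal sketch of the decision rule in equation (11); the grouped variant that sums coefficients per identity is an optional refinement assumed here, not part of the original text.

```python
import numpy as np

def classify(alpha, train_identities):
    """Equation (11): identity of the training image with the largest coefficient."""
    return train_identities[int(np.argmax(alpha))]

def classify_grouped(alpha, train_identities):
    """Optional variant: sum coefficients per identity, then take the maximum."""
    scores = {}
    for coeff, ident in zip(alpha, train_identities):
        scores[ident] = scores.get(ident, 0.0) + coeff
    return max(scores, key=scores.get)
```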
Claims (5)
1. A face recognition method based on sparse representation and shape constraints, characterized in that the method comprises the following steps:
Step 1. Shape initialization
Mark the face position and shape in each image to be recognized, including:
a. For an image to be recognized, extract facial feature points using a labeling method and perform the initialization;
b. Determine the shape model: let the shape of each image in the shape model be s, and express each image as a linear combination of an average shape s0 and n shape vectors;
Step 2. Feature extraction from the image to be recognized based on the initialized shape
Extract texture based on the shape of the image to be recognized to form the texture feature vector of the image to be recognized; this texture feature vector corresponds uniquely to the identity of the image to be recognized;
Step 3. Image matching based on the shape
Match the training images against the texture features to be recognized, obtaining the texture features of each training image; the texture feature vector of each training image corresponds uniquely to the identity of that training image;
Step 4. Based on the training image texture features obtained in step 3, sparsely represent the texture features of the image to be recognized obtained in step 2, obtaining the set of coefficients corresponding to the training images;
Step 5. Analyze the coefficients obtained in step 4, and take the identity of the training image corresponding to the largest coefficient as the final recognition result.
2. The face recognition method based on sparse representation and shape constraints according to claim 1, characterized in that the labeling method is an active shape model or an active appearance model.
3. The face recognition method based on sparse representation and shape constraints according to claim 1, characterized in that the concrete method of step 2 is: divide the face region of the image to be recognized using Delaunay triangulation, then map the texture features of the image to be recognized onto the average shape according to the point correspondences, obtaining texture features of the image to be recognized that are independent of shape.
4. The face recognition method based on sparse representation and shape constraints according to claim 1, characterized in that step 3 includes:
a. Based on the texture features of the image to be recognized obtained in step 2 and the shape model trained in step 1, perform image matching on each training image to obtain its shape;
b. Divide the face region of each training image using Delaunay triangulation, then map the texture features of the training image onto the average shape according to the point correspondences, obtaining texture features that are independent of shape and thereby realizing the feature representation of the training image.
5. The face recognition method based on sparse representation and shape constraints according to claim 1, characterized in that the concrete method of step 4 is: express the image to be recognized as a linear combination of the training images using sparse representation theory, i.e.
T = Aα (1)
wherein T is the image to be recognized, A is the matrix of training images, and α is the vector of coefficients corresponding to all the training images;
solving for the optimal α can be expressed as an L1-norm optimization problem, i.e.
min ||α||_1 subject to T = Aα
and the optimal solution is obtained by the Augmented Lagrange Multiplier algorithm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410179522.2A CN103984922B (en) | 2014-04-30 | 2014-04-30 | Face identification method based on sparse representation and shape restriction |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103984922A CN103984922A (en) | 2014-08-13 |
CN103984922B true CN103984922B (en) | 2017-04-26 |
Family
ID=51276884
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410179522.2A Active CN103984922B (en) | 2014-04-30 | 2014-04-30 | Face identification method based on sparse representation and shape restriction |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103984922B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105956606B (en) * | 2016-04-22 | 2019-09-10 | 中山大学 | A kind of pedestrian's identification method again based on asymmetry transformation |
CN106295694B (en) * | 2016-08-05 | 2019-04-09 | 浙江工业大学 | Face recognition method for iterative re-constrained group sparse representation classification |
US10121094B2 (en) | 2016-12-09 | 2018-11-06 | International Business Machines Corporation | Signal classification using sparse representation |
CN107273840A (en) * | 2017-06-08 | 2017-10-20 | 天津大学 | A kind of face recognition method based on real world image |
CN107563328A (en) * | 2017-09-01 | 2018-01-09 | 广州智慧城市发展研究院 | A kind of face identification method and system based under complex environment |
CN108764049B (en) * | 2018-04-27 | 2021-11-16 | 南京邮电大学 | Face recognition method based on sparse representation |
US10713544B2 (en) | 2018-09-14 | 2020-07-14 | International Business Machines Corporation | Identification and/or verification by a consensus network using sparse parametric representations of biometric images |
CN110008997B (en) * | 2019-03-06 | 2023-11-24 | 平安科技(深圳)有限公司 | Image texture similarity recognition method, device and computer readable storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101169830A (en) * | 2007-11-30 | 2008-04-30 | 西安电子科技大学 | Human face portrait automatic generation method based on embedded type hidden markov model and selective integration |
CN101667246A (en) * | 2009-09-25 | 2010-03-10 | 西安电子科技大学 | Human face recognition method based on nuclear sparse expression |
CN101777131A (en) * | 2010-02-05 | 2010-07-14 | 西安电子科技大学 | Method and device for identifying human face through double models |
CN101819628A (en) * | 2010-04-02 | 2010-09-01 | 清华大学 | Method for performing face recognition by combining rarefaction of shape characteristic |
CN101833654A (en) * | 2010-04-02 | 2010-09-15 | 清华大学 | Sparse representation face identification method based on constrained sampling |
CN101833672A (en) * | 2010-04-02 | 2010-09-15 | 清华大学 | Sparse representation face identification method based on constrained sampling and shape feature |
US8452107B2 (en) * | 2009-10-02 | 2013-05-28 | Qualcomm Incorporated | Methods and systems for occlusion tolerant face recognition |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7856125B2 (en) * | 2006-01-31 | 2010-12-21 | University Of Southern California | 3D face reconstruction from 2D images |
US8374442B2 (en) * | 2008-11-19 | 2013-02-12 | Nec Laboratories America, Inc. | Linear spatial pyramid matching using sparse coding |
Also Published As
Publication number | Publication date |
---|---|
CN103984922A (en) | 2014-08-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103984922B (en) | Face identification method based on sparse representation and shape restriction | |
CN107145842B (en) | Face recognition method combining LBP characteristic graph and convolutional neural network | |
Rekha et al. | Shape, texture and local movement hand gesture features for indian sign language recognition | |
Zhang et al. | GAN-based image augmentation for finger-vein biometric recognition | |
CN101561710B (en) | Man-machine interaction method based on estimation of human face posture | |
WO2017219391A1 (en) | Face recognition system based on three-dimensional data | |
CN108549873A (en) | Three-dimensional face identification method and three-dimensional face recognition system | |
CN101833654B (en) | Sparse representation face identification method based on constrained sampling | |
WO2017059591A1 (en) | Finger vein identification method and device | |
CN104392223B (en) | Human posture recognition method in two-dimensional video image | |
CN106778785B (en) | Construct the method for image Feature Selection Model and the method, apparatus of image recognition | |
Cheng et al. | Image-to-class dynamic time warping for 3D hand gesture recognition | |
CN105787442B (en) | A kind of wearable auxiliary system and its application method of the view-based access control model interaction towards disturbance people | |
CN104008375B (en) | The integrated face identification method of feature based fusion | |
CN104408405B (en) | Face representation and similarity calculating method | |
CN105868716A (en) | Method for human face recognition based on face geometrical features | |
CN106529504B (en) | A kind of bimodal video feeling recognition methods of compound space-time characteristic | |
CN101908149A (en) | Method for identifying facial expressions from human face image sequence | |
CN104299003A (en) | Gait recognition method based on similar rule Gaussian kernel function classifier | |
CN108268814A (en) | A kind of face identification method and device based on the fusion of global and local feature Fuzzy | |
CN104794449A (en) | Gait energy image acquisition method based on human body HOG (histogram of oriented gradient) features and identity identification method | |
CN105069745A (en) | face-changing system based on common image sensor and enhanced augmented reality technology and method | |
CN110544310A (en) | feature analysis method of three-dimensional point cloud under hyperbolic conformal mapping | |
CN115830652B (en) | Deep palm print recognition device and method | |
CN106980845B (en) | Face key point positioning method based on structured modeling |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||