CN103413119A - Single sample face recognition method based on face sparse descriptors - Google Patents


Info

Publication number
CN103413119A
CN103413119A CN2013103145737A CN201310314573A
Authority
CN
China
Prior art keywords
key point
image
face
people
single sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2013103145737A
Other languages
Chinese (zh)
Inventor
赖剑煌
刘娜
郑伟诗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
National Sun Yat Sen University
Original Assignee
National Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National Sun Yat Sen University filed Critical National Sun Yat Sen University
Priority to CN2013103145737A priority Critical patent/CN103413119A/en
Publication of CN103413119A publication Critical patent/CN103413119A/en
Pending legal-status Critical Current

Links

Images

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a single sample face recognition method based on face sparse descriptors. The method includes the first step of carrying out alignment and normalized preprocessing on all available face images in a reference image set, the second step of calculating a group of scale spaces and differential scale spaces on each face image to carry out key point detection, the third step of selecting the local area using the key point as the center, carrying out statistics on the histogram of the area in the gradient direction, and using the histogram as the local feature description corresponding to the key point, the fourth step of calculating non-similarity measures of the images to be tested and each image in the reference image set on the basis of local match, and the fifth step of using the nearest neighbor classifier to carry out classification and recognition on the input images to be tested according to the non-similarity measures. The single sample face recognition method can quite effectively solve the problem of single sample face recognition on the conditions of sheltering, changing of expression and postures and the like.

Description

Single sample face recognition method based on face sparse descriptors
Technical field
The present invention relates to the field of image processing and pattern recognition, and in particular to a single sample face recognition method based on face sparse descriptors.
Background technology
Face recognition is one of the most important tasks in biometrics; its goal is to automatically identify or verify the identity of a person appearing in a still image or a video by matching against a registered face database. Face recognition technology is now widely used in public safety, access control, border security, entertainment, social networks, and other fields.
Most current research, however, improves recognition rates using statistical features, which requires a training and learning mechanism, and such a mechanism in turn requires many training samples. In many practical applications, such as driver's license or passport identification, only one reference sample image is available per person (one sample per person), so existing methods based on statistical features cannot be applied in practice. Moreover, even where a multi-sample database could be built, the cost of collecting additional training samples is high, which likewise hinders adoption.
To address this problem, the prior art has proposed the concept of single sample face recognition, defined as follows: in the face database of a face recognition system, each person has only one face image as a training sample for subsequent feature extraction, classification, and identity authentication. A further challenge in face recognition is the variation between face images caused by occlusion, expression, pose, and other changes. Current single sample face recognition methods fall broadly into two categories. The first uses the single face image to construct, by various techniques, additional face images of the same person, thereby expanding the training database for learning, classification, and recognition. The second extracts as much feature information as possible from the single face sample through different feature extraction methods in order to improve the recognition rate. Multi-keypoint descriptors belong to this second category.
Multi-keypoint descriptors generate image features by first detecting key points and then describing the local feature of each key point independently. Together with their robustness to occlusion, viewpoint, and other changes, this makes them suitable for single sample recognition under varying conditions. The SIFT operator is the classic multi-keypoint descriptor; it was proposed mainly to solve the object matching problem across different scales and viewpoints. Face recognition, however, differs from object matching, and applying the SIFT operator to face recognition directly raises two problems: 1) in feature extraction, face images differ from generic object images, so a multi-keypoint descriptor for face images should be designed around the characteristics of face images themselves; 2) in similarity measurement, the original SIFT feature matching strategy is time-consuming and of low accuracy. These two problems mean that face recognition methods based on the SIFT operator meet the practical requirements of face recognition neither in recognition performance nor in time efficiency, which limits their range of application.
Summary of the invention
The main purpose of the present invention is to overcome the shortcomings and deficiencies of the prior art by providing a single sample face recognition method based on face sparse descriptors. The method identifies the class in the reference image set to which the current image belongs; it makes better use of the characteristics of face images, can effectively handle the single sample face recognition problem under occlusion, expression, pose, and other varying conditions, and therefore has strong practicality.
The purpose of the present invention is achieved by the following technical scheme: a single sample face recognition method based on face sparse descriptors, comprising the following steps:
(1) aligning and normalizing all the face images used in the reference image set as preprocessing;
(2) computing a face sparse descriptor (FSD operator) on each face image via the following steps:
(2-1) key point localization:
(2-1-1) computing a group of scale spaces;
(2-1-2) computing difference scale spaces from the above scale spaces;
(2-1-3) finding the extreme points in the difference scale spaces and their corresponding positions in the scale spaces, and taking these as key points;
(2-2) selecting the local region centered at each key point and accumulating the gradient orientation histogram of that region; using the histogram as the local feature descriptor of the key point;
(2-3) gathering the local feature descriptors of all key points into the set of key point local feature descriptors of the face image, i.e. the face sparse descriptor of the face image;
(3) inputting the image to be tested and computing its face sparse descriptor according to step (2); on the basis of local matching, computing the dissimilarity measure between the image to be tested and each image in the reference image set;
(4) according to the dissimilarity measures obtained in step (3), classifying and recognizing the input image to be tested with a nearest neighbor classifier.
Specifically, the scale space in step (2-1-1) is computed as:
S(x, y, σ) = G(x, y, σ) * I(x, y);
where I(x, y) is the input image, G(x, y, σ) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²)) is the Gaussian function, and σ is the scale parameter of the Gaussian function.
Specifically, the difference scale space in step (2-1-2) is computed as:
D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y)
= S(x, y, kσ) − S(x, y, σ).
In its computation, the original SIFT algorithm takes into account that the difference-of-Gaussian function responds strongly at edges; since object matching scenes may contain many edges, SIFT removes edge response points to prevent edge responses from affecting matching. Face recognition differs from object matching: edge points in a face image, such as the key points around the eyebrows, eyes, and mouth, are vital to recognition, so all extracted key points should be retained for face images. The present invention therefore selects key points using the scale spaces and difference scale spaces without the edge-response removal step, thereby retaining all edge response points.
Specifically, the key points in step (2-1-3) are obtained as follows:
in the difference scale space D(x, y, σ), each sample point is compared with its 8 neighboring pixel values at the same scale and the 9 × 2 neighboring pixel values at the scales above and below; an extreme point of D(x, y, σ) detected in this way, together with its corresponding position in the scale space S(x, y, σ), is a key point.
Specifically, the local feature description of step (2-2) proceeds as follows:
(2-2-1) taking an image region of size A × A centered at the detected key point;
(2-2-2) computing the gradient direction and gradient magnitude of each pixel in this region;
(2-2-3) filtering the gradient magnitudes of the key point neighborhood with a Gaussian function;
(2-2-4) dividing the region into several a × a sub-blocks and, using trilinear interpolation, accumulating an 8-direction gradient orientation histogram on each sub-block;
(2-2-5) concatenating the histograms computed on all sub-blocks to obtain the a × a × 8-dimensional local feature descriptor of the key point.
Specifically, the dissimilarity measure in step (3) is computed as follows:
(3-1) letting the current test image be P with m key points, the local feature descriptor of the i-th key point being f_i^P, i = 1, 2, ..., m, so that the set of key point local feature descriptors of the test image is
V_P = {f_1^P, f_2^P, ..., f_i^P, ..., f_m^P};
letting G be a template image in the reference image set with n key points, the local feature descriptor of the j-th key point being f_j^G, j = 1, 2, ..., n, so that the set of key point local feature descriptors of the template image is
V_G = {f_1^G, f_2^G, ..., f_j^G, ..., f_n^G};
(3-2) for the position of the i-th key point in the test image P, searching on the template image G, within a certain neighborhood centered at that position, for a key point feature matching f_i^P, with a preset distance threshold T_dist between matching features; the matching criterion is:
δ(f_i^P, G) = 1, if d_1^i / d_2^i < T_ratio and d_1^i < T_dist; δ(f_i^P, G) = 0, otherwise;
where d_1^i and d_2^i are, respectively, the smallest and second smallest distances between f_i^P and all the key point descriptors extracted within the corresponding neighborhood in the template image G, and T_ratio is the parameter used by SIFT to control the matching criterion;
(3-3) computing the mean distance of the matched features:
AVE_dist(P, G) = (Σ_{i=1}^m (δ(f_i^P, G) × d_1^i)) / Σ_{i=1}^m δ(f_i^P, G);
(3-4) computing the dissimilarity measure, i.e. the weighted average distance, by combining the number of matches with the mean distance of the matched features:
d(P, G) = AVE_dist(P, G) / Σ_{i=1}^m δ(f_i^P, G).
Specifically, in step (4) a nearest neighbor classifier is used to classify the input test image. Supposing the reference image set contains Q template images, denoted G_q, q = 1, 2, ..., Q, and each template image has a dissimilarity measure computed against the image to be tested, the class of the image P to be tested is:
label(P) = argmin_q(d(P, G_q));
that is, the class of the test image is that of the reference template image with the smallest weighted average distance to P.
Compared with the prior art, the present invention has the following advantages and beneficial effects:
1. By avoiding the Gaussian pyramid, the difference-of-Gaussian pyramid, precise key point localization, and the computation of principal orientations, the face sparse descriptor FSD proposed by the present invention is more efficient to compute than the SIFT operator. More importantly, the FSD operator computes only a minimal group of scale spaces and difference scale spaces for key point selection, retains all edge response points, which are vital to face recognition, and computes no principal orientation for any key point. The feature description in the FSD operator is therefore more discriminative than that of the SIFT operator.
2. The dissimilarity measure proposed by the present invention, which adopts a local matching strategy and combines the average matching distance with the number of matches to compute the dissimilarity between images, is faster and more effective than the original dissimilarity measures computed on the basis of a global matching strategy.
3. All key point local feature descriptors in the present invention are computed in the same coordinate system. The present invention adopts a local matching strategy: for each key point feature in the test image, a matching key point feature is sought in the template image within a certain neighborhood centered at the same key point position. To avoid matching two key point features that are far apart, a threshold is introduced into the matching criterion, guaranteeing that the distance between matched features does not exceed this threshold.
Description of the drawings
Fig. 1 is a flowchart of the method of the present invention;
Fig. 2 is a schematic diagram of the key point local feature descriptor array;
Fig. 3 shows example face images from the AR face database;
Fig. 4 shows example face images from the CMU-PIE face database.
Embodiment
The present invention is described in further detail below with reference to the embodiments and the accompanying drawings, but the embodiments of the present invention are not limited thereto.
Embodiment 1
As shown in Fig. 1, the single sample face recognition method based on face sparse descriptors mainly comprises the following steps:
S1: align and normalize all the face images used in the reference image set as preprocessing.
Preprocessing is performed first so that, at test time, a key point position on the test image can be mapped accurately to the corresponding position in a reference image.
S2: compute the FSD operator.
The FSD operator is computed for each image separately; the concrete steps are:
S21: key point localization.
This comprises the following steps:
S211: build a group of scale spaces S(x, y, σ):
S(x, y, σ) = G(x, y, σ) * I(x, y),
where I(x, y) is the input image and G(x, y, σ) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²));
S212: build the difference scale spaces D(x, y, σ):
D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y)
= S(x, y, kσ) − S(x, y, σ).
S213: feature point localization: in the difference scale space D(x, y, σ), compare each sample point with its 8 neighboring pixel values at the same scale and the 9 × 2 neighboring pixel values at the scales above and below, and detect the extreme points of D(x, y, σ) together with their corresponding positions in the scale space S(x, y, σ).
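As an illustration only (this is not the patent's reference implementation), steps S211–S213 can be sketched as follows. The base scale sigma0 = 1.6, the factor k = √2, and the number of scales are assumed example values, not values fixed by the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detect_keypoints(img, sigma0=1.6, k=2 ** 0.5, n_scales=4):
    """S211-S213 sketch: one group of scale spaces S, their differences D,
    and 26-neighbour extrema of D as key points (edge responses kept)."""
    img = img.astype(np.float64)
    sigmas = [sigma0 * k ** i for i in range(n_scales)]
    S = np.stack([gaussian_filter(img, s) for s in sigmas])  # S(x, y, sigma)
    D = S[1:] - S[:-1]                                       # S(x, y, k*sigma) - S(x, y, sigma)
    keypoints = []
    for l in range(1, D.shape[0] - 1):          # levels with a scale above and below
        for y in range(1, img.shape[0] - 1):
            for x in range(1, img.shape[1] - 1):
                v = D[l, y, x]
                cube = D[l - 1:l + 2, y - 1:y + 2, x - 1:x + 2]
                # extremum over the 8 same-scale and 9 x 2 adjacent-scale neighbours
                if v == cube.max() or v == cube.min():
                    keypoints.append((y, x, sigmas[l]))
    return keypoints, S
```

On a synthetic Gaussian blob, this yields a key point at the blob centre at the scale whose difference response is strongest; no edge-response suppression is applied, matching the retention argument above.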
S22: local feature description. As shown in Fig. 2, this comprises the following steps:
S221: take an image region of size 16 × 16 centered at the detected key point.
S222: compute the gradient direction and gradient magnitude of each pixel in this region.
S223: filter the gradient magnitudes of the key point neighborhood with a Gaussian function.
S224: divide the region into several 4 × 4 sub-blocks and, using trilinear interpolation, accumulate an 8-direction gradient orientation histogram on each sub-block.
S225: concatenate the histograms computed on all sub-blocks to obtain the 4 × 4 × 8 = 128-dimensional local feature descriptor of the key point.
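A simplified sketch of S221–S225 (illustrative only): it uses hard histogram binning instead of the trilinear interpolation of S224, and the Gaussian weighting width is an assumed value.

```python
import numpy as np

def describe_keypoint(img, y, x, patch=16, block=4, n_bins=8):
    """16x16 region around (y, x) -> 4x4 grid of 4x4-pixel sub-blocks,
    8-bin orientation histogram per sub-block -> 4*4*8 = 128-d descriptor."""
    half = patch // 2
    region = img[y - half:y + half, x - half:x + half].astype(np.float64)
    gy, gx = np.gradient(region)                 # S222: per-pixel gradient
    mag = np.hypot(gy, gx)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)  # direction in [0, 2*pi)
    # S223: Gaussian-weight the magnitudes (width patch/2 is an assumption)
    ys, xs = np.mgrid[0:patch, 0:patch] - (patch - 1) / 2.0
    mag = mag * np.exp(-(ys ** 2 + xs ** 2) / (2 * (patch / 2.0) ** 2))
    desc = []
    for by in range(0, patch, block):            # S224: per-sub-block histograms
        for bx in range(0, patch, block):
            hist = np.zeros(n_bins)
            a = ang[by:by + block, bx:bx + block].ravel()
            m = mag[by:by + block, bx:bx + block].ravel()
            bins = (a / (2 * np.pi) * n_bins).astype(int) % n_bins
            np.add.at(hist, bins, m)             # hard binning, not trilinear
            desc.append(hist)
    desc = np.concatenate(desc)                  # S225: concatenate -> 128-d
    return desc / (np.linalg.norm(desc) + 1e-12)
```

The unit normalisation at the end is a common convention for distance-based matching, not a step stated in the patent.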
S23: gather the local feature descriptors of all key points into the set of key point local feature descriptors of the face image, i.e. the face sparse descriptor of the face image.
All face images in the reference image set are represented by the above face sparse descriptor, and the vectors are stored for subsequent recognition.
S3: start testing; input the image to be tested.
S31: let the image to be tested be P. First align and normalize the test image in the same way, then compute its FSD operator according to step S2 above. Let the number of key points on the test image be m, with the local feature descriptor of the i-th key point being f_i^P, i = 1, 2, ..., m, so that the set of key point local feature descriptors of the test image is
V_P = {f_1^P, f_2^P, ..., f_i^P, ..., f_m^P}.
The test image P must be compared with every image in the reference image set to determine its class. Taking the decision against a single image as an example, let G be a template image in the reference image set with n key points, the local feature descriptor of the j-th key point being f_j^G, j = 1, 2, ..., n, so that the set of key point local feature descriptors of the template image is
V_G = {f_1^G, f_2^G, ..., f_j^G, ..., f_n^G}.
S32: for each key point feature in the test image P, search on the template image G, within a certain neighborhood centered at the same key point position, for its matching key point feature. To avoid matching two key point features that are far apart, a threshold T_dist is introduced into the matching criterion, guaranteeing that the distance between matched features does not exceed T_dist. The matching criterion under this new constraint becomes:
δ(f_i^P, G) = 1, if d_1^i / d_2^i < T_ratio and d_1^i < T_dist; δ(f_i^P, G) = 0, otherwise;
where d_1^i and d_2^i are, respectively, the smallest and second smallest distances between f_i^P and all the key point descriptors extracted within the corresponding neighborhood in the template image G, and T_ratio is the parameter used by SIFT to control the matching criterion. This formula decides whether a matching key point can be found on the template image G.
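The criterion of S32 might be sketched as follows. This is illustrative; the neighborhood radius and the thresholds t_ratio and t_dist are assumed example values, not ones fixed by the patent.

```python
import numpy as np

def match_delta(f_p, pos_p, feats_g, pos_g, radius=8.0, t_ratio=0.8, t_dist=0.6):
    """Return (delta, d1) for one test key point: delta = 1 iff some template
    key point within `radius` of the same position passes both the ratio
    test d1/d2 < t_ratio and the distance threshold d1 < t_dist."""
    feats_g = np.asarray(feats_g, dtype=np.float64)
    pos_g = np.asarray(pos_g, dtype=np.float64)
    near = np.linalg.norm(pos_g - np.asarray(pos_p, dtype=np.float64), axis=1) <= radius
    cand = feats_g[near]
    if len(cand) == 0:
        return 0, np.inf          # nothing to match in the neighbourhood
    d = np.linalg.norm(cand - np.asarray(f_p, dtype=np.float64), axis=1)
    d1 = float(np.min(d))         # smallest distance d1
    d2 = float(np.partition(d, 1)[1]) if len(d) > 1 else np.inf  # second smallest d2
    if d1 < t_dist and d1 / d2 < t_ratio:
        return 1, d1
    return 0, d1
```

Restricting the candidate set to the spatial neighborhood is what makes the strategy local: a test key point is never compared against template key points from a distant face region.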
S33: mean distance of the matched features:
AVE_dist(P, G) = (Σ_{i=1}^m (δ(f_i^P, G) × d_1^i)) / Σ_{i=1}^m δ(f_i^P, G).
S34: dissimilarity measure:
combining the number of matches with the mean distance of the matched features gives the dissimilarity measure, i.e. a weighted average distance:
d(P, G) = AVE_dist(P, G) / Σ_{i=1}^m δ(f_i^P, G);
S4: classify the input test image with a nearest neighbor classifier. Supposing the reference image set contains Q template images, denoted G_q, q = 1, 2, ..., Q, and each template image has a dissimilarity measure computed against the image to be tested, the class of the image P to be tested is:
label(P) = argmin_q(d(P, G_q));
that is, the class of the test image is that of the reference template image with the smallest weighted average distance to P.
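Steps S33, S34, and S4 combine as in the following sketch. The convention of returning infinity when no feature matches at all is an assumption for the case the formulas leave undefined.

```python
import numpy as np

def dissimilarity(deltas, d1s):
    """S33-S34: AVE_dist is the mean d1 over matched features; dividing by
    the match count once more yields the weighted average distance d(P, G),
    so many good matches give a small dissimilarity."""
    deltas = np.asarray(deltas, dtype=np.float64)
    d1s = np.asarray(d1s, dtype=np.float64)
    n_match = deltas.sum()
    if n_match == 0:
        return np.inf                            # no match: maximally dissimilar
    ave = d1s[deltas > 0].sum() / n_match        # AVE_dist(P, G)
    return ave / n_match                         # d(P, G)

def classify(dissims):
    """S4: nearest neighbour -> index of the template with minimal d(P, G)."""
    return int(np.argmin(dissims))
```

For example, two matches at distances 0.2 and 0.4 among three test key points give AVE_dist = 0.3 and d(P, G) = 0.15; a template with only one such match at distance 0.3 would score AVE_dist = 0.3 but d(P, G) = 0.3, so the weighting favours templates with more matches.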
The effect of the present invention is illustrated by the following comparative experiments, carried out on the AR and CMU-PIE face databases (see Fig. 3 and Fig. 4). According to the type of variation, the experiments are divided into three parts: comparisons under occlusion, expression, and pose variation, respectively. The algorithms compared are: a method based on the multi-keypoint descriptor SIFT; two methods based on improved SIFT, SURF and KPSIFT; two methods based on dense local feature descriptors, HoG and LBP; and two classic face recognition methods based on PCA and Gabor features, respectively, where the Gabor features are extracted with Gabor wavelets of 5 scales and 8 orientations.
Table 1: recognition rates (%) under different occlusion conditions on the AR database
The comparative experiment under occlusion is carried out on the AR face database. The frontal neutral face images of the first session are selected as the reference image set, and the occluded images of both sessions (as shown by e, f, e' and f' in Fig. 3) are used as the test set. The experimental results are given in Table 1, from which it can be seen that the performance of the proposed FSD operator under occlusion is far superior to the other compared methods. First, compared with the methods based on global features (HoG, LBP, PCA, Gabor), methods based on multi-keypoint descriptors handle occlusion better, because sparse key point feature matching can remove the influence of the occluded region. Second, compared with the original SIFT, SURF, and KPSIFT operators, the FSD operator makes full use of the characteristics of face images in its computation and therefore achieves a higher recognition rate. It is also worth noting that when the test images come from the second session, our method still reaches nearly 100% recognition, while the recognition rates of the other methods drop significantly, which shows that our method remains robust to variation over time.
Table 2: recognition rates (%) under different expression conditions on the AR database
The comparative experiment under expression variation is carried out on the AR face database. The frontal neutral face images of the first session are selected as the reference image set, and the expression-variation images (smile, anger, and scream) of both sessions, together with the neutral expression of the second session, are selected as the test set. The experimental results in Table 2 show that the recognition rate of the proposed FSD operator is higher than that of the other compared algorithms under most expression variations. This is mainly because, under ordinary expression variation, not all local features of the face are affected, so the matching strategy of the face sparse descriptor can exploit the local features unaffected by the expression change for accurate recognition. Under the scream variation, all methods obtain poor recognition rates, mainly because the scream expression changes the appearance of the face enormously; in particular, the features of the eye, eyebrow, and mouth regions, which are vital to recognition, are all distorted. Although our method does not perform well under the scream variation, this does not affect its practical application, because the scream expression is uncommon in practical conditions.
Table 3: recognition rates (%) under different pose conditions on the CMU-PIE database
The comparative experiment under pose variation is carried out on the CMU-PIE face database, taking pose C27 in Fig. 4 as the reference image set and the remaining poses (C09, C07, C05, C37, C02, C29, C11, C14) as the test set. The results in Table 3 show that under small pose variation most methods perform reasonably well; however, as the pose angle increases, the proposed FSD operator achieves a higher recognition rate than the other compared algorithms. The reason is similar to the occlusion and expression cases: pose variation affects some of the local features of the face image, and the proposed face sparse local feature description method can selectively use the robust key point features for recognition.
The above embodiment is a preferred embodiment of the present invention, but the embodiments of the present invention are not limited thereto; any other change, modification, substitution, combination, or simplification made without departing from the spirit and principle of the present invention shall be an equivalent replacement and shall be included within the protection scope of the present invention.

Claims (7)

1. A single sample face recognition method based on face sparse descriptors, characterized by comprising the following steps:
(1) aligning and normalizing all the face images used in the reference image set as preprocessing;
(2) computing a face sparse descriptor on each face image via the following steps:
(2-1) key point localization:
(2-1-1) computing a group of scale spaces;
(2-1-2) computing difference scale spaces from the above scale spaces;
(2-1-3) finding the extreme points in the difference scale spaces and their corresponding positions in the scale spaces, and taking these as key points;
(2-2) selecting the local region centered at each key point and accumulating the gradient orientation histogram of that region; using the histogram as the local feature descriptor of the key point;
(2-3) gathering the local feature descriptors of all key points into the set of key point local feature descriptors of the face image, i.e. the face sparse descriptor of the face image;
(3) inputting the image to be tested and computing its face sparse descriptor according to step (2); on the basis of local matching, computing the dissimilarity measure between the image to be tested and each image in the reference image set;
(4) according to the dissimilarity measures obtained in step (3), classifying and recognizing the input image to be tested with a nearest neighbor classifier.
2. The single sample face recognition method based on face sparse descriptors according to claim 1, characterized in that the scale space in step (2-1-1) is computed as:
S(x, y, σ) = G(x, y, σ) * I(x, y);
where I(x, y) is the input image, G(x, y, σ) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²)) is the Gaussian function, and σ is the scale parameter of the Gaussian function.
3. The single sample face recognition method based on face sparse descriptors according to claim 2, characterized in that the difference scale space in step (2-1-2) is computed as:
D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y)
= S(x, y, kσ) − S(x, y, σ).
4. The single sample face recognition method based on face sparse descriptors according to claim 3, characterized in that the key points in step (2-1-3) are obtained as follows:
in the difference scale space D(x, y, σ), each sample point is compared with its 8 neighboring pixel values at the same scale and the 9 × 2 neighboring pixel values at the scales above and below; an extreme point of D(x, y, σ) detected in this way, together with its corresponding position in the scale space S(x, y, σ), is a key point.
5. The single sample face recognition method based on face sparse descriptors according to claim 1, characterized in that the local feature description of step (2-2) proceeds as follows:
(2-2-1) taking an image region of size A × A centered at the detected key point;
(2-2-2) computing the gradient direction and gradient magnitude of each pixel in this region;
(2-2-3) filtering the gradient magnitudes of the key point neighborhood with a Gaussian function;
(2-2-4) dividing the region into several a × a sub-blocks and, using trilinear interpolation, accumulating an 8-direction gradient orientation histogram on each sub-block;
(2-2-5) concatenating the histograms computed on all sub-blocks to obtain the a × a × 8-dimensional local feature descriptor of the key point.
6. The single sample face recognition method based on face sparse descriptors according to claim 1, characterized in that the dissimilarity measure in step (3) is computed as follows:
(3-1) letting the current test image be P with m key points, the local feature descriptor of the i-th key point being f_i^P, i = 1, 2, ..., m, so that the set of key point local feature descriptors of the test image is
V_P = {f_1^P, f_2^P, ..., f_i^P, ..., f_m^P};
letting G be a template image in the reference image set with n key points, the local feature descriptor of the j-th key point being f_j^G, j = 1, 2, ..., n, so that the set of key point local feature descriptors of the template image is
V_G = {f_1^G, f_2^G, ..., f_j^G, ..., f_n^G};
(3-2) for the position of the i-th key point in the test image P, searching on the template image G, within a certain neighborhood centered at that position, for a key point feature matching f_i^P, with a preset distance threshold T_dist between matching features; the matching criterion is:
δ(f_i^P, G) = 1, if d_1^i / d_2^i < T_ratio and d_1^i < T_dist; δ(f_i^P, G) = 0, otherwise;
where d_1^i and d_2^i are, respectively, the smallest and second smallest distances between f_i^P and all the key point descriptors extracted within the corresponding neighborhood in the template image G, and T_ratio is the parameter used by SIFT to control the matching criterion;
(3-3) computing the mean distance of the matched features:
AVE_dist(P, G) = (Σ_{i=1}^m (δ(f_i^P, G) × d_1^i)) / Σ_{i=1}^m δ(f_i^P, G);
(3-4) computing the dissimilarity measure, i.e. the weighted average distance, by combining the number of matches with the mean distance of the matched features:
d(P, G) = AVE_dist(P, G) / Σ_{i=1}^m δ(f_i^P, G).
7. The single sample face recognition method based on face sparse descriptors according to claim 1, wherein in step (4) a nearest neighbor classifier is used to classify and identify the input test image: if the reference image set contains Q template images in total, denoted by the set G_Q, and the dissimilarity measure between the test image and every template image has been computed, then the class of test image P is
identity(P) = argmin_{G ∈ G_Q} D(P, G),
i.e., the class of the test image is that of the reference template with the smallest weighted average distance D(P, G) to P.
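The nearest neighbor decision of this claim is a plain argmin over the Q templates; a minimal sketch, assuming the dissimilarity D(P, G) of each template has already been computed:

```python
def classify(dissim_by_template):
    """Nearest neighbor classifier over the reference set G_Q: the test
    image takes the class of the template with the smallest weighted
    average distance. `dissim_by_template` maps a template identity to
    its dissimilarity D(P, G)."""
    return min(dissim_by_template, key=dissim_by_template.get)
```

For example, with dissimilarities {"alice": 0.50, "bob": 0.20, "carol": 0.90}, the test image is assigned the identity "bob".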
CN2013103145737A 2013-07-24 2013-07-24 Single sample face recognition method based on face sparse descriptors Pending CN103413119A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2013103145737A CN103413119A (en) 2013-07-24 2013-07-24 Single sample face recognition method based on face sparse descriptors


Publications (1)

Publication Number Publication Date
CN103413119A true CN103413119A (en) 2013-11-27

Family

ID=49606128

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2013103145737A Pending CN103413119A (en) 2013-07-24 2013-07-24 Single sample face recognition method based on face sparse descriptors

Country Status (1)

Country Link
CN (1) CN103413119A (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318548A (en) * 2014-10-10 2015-01-28 西安电子科技大学 Rapid image registration implementation method based on space sparsity and SIFT feature extraction
CN104899576A (en) * 2015-06-23 2015-09-09 南京理工大学 Face identification feature extraction method based on Gabor conversion and HOG
CN105224957A (en) * 2015-10-23 2016-01-06 苏州大学 A kind of method and system of the image recognition based on single sample
CN105426836A (en) * 2015-11-17 2016-03-23 上海师范大学 Single-sample face recognition method based on segmented model and sparse component analysis
CN105787416A (en) * 2014-12-23 2016-07-20 Tcl集团股份有限公司 Mobile terminal-based face recognition method and system
CN105934757A (en) * 2014-01-30 2016-09-07 华为技术有限公司 Method and apparatus for detecting incorrect associations between keypoints of first image and keypoints of second image
CN106022241A (en) * 2016-05-12 2016-10-12 宁波大学 Face recognition method based on wavelet transformation and sparse representation
CN106407958A (en) * 2016-10-28 2017-02-15 南京理工大学 Double-layer-cascade-based facial feature detection method
CN106778531A (en) * 2016-11-25 2017-05-31 北京小米移动软件有限公司 Face detection method and device
CN107480628A (en) * 2017-08-10 2017-12-15 苏州大学 A kind of face identification method and device
CN107862270A (en) * 2017-10-31 2018-03-30 深圳云天励飞技术有限公司 Face classification device training method, method for detecting human face and device, electronic equipment
CN107886090A (en) * 2017-12-15 2018-04-06 苏州大学 A kind of single sample face recognition method, system, equipment and readable storage medium storing program for executing
CN108681725A (en) * 2018-05-31 2018-10-19 西安理工大学 A kind of weighting sparse representation face identification method
CN109544537A (en) * 2018-11-26 2019-03-29 中国科学技术大学 The fast automatic analysis method of hip joint x-ray image
CN109614928A (en) * 2018-12-07 2019-04-12 成都大熊猫繁育研究基地 Panda recognition algorithms based on limited training data
CN110210511A (en) * 2019-04-19 2019-09-06 哈尔滨工业大学 A kind of improvement PCA-SIFT method for registering images based on cosine measure
CN111009004A (en) * 2019-11-24 2020-04-14 华南理工大学 Hardware optimization method for accelerating image matching
CN111523454A (en) * 2020-04-22 2020-08-11 华东师范大学 Partial face recognition method based on sample expansion and point set matching
CN112380995A (en) * 2020-11-16 2021-02-19 华南理工大学 Face recognition method and system based on deep feature learning in sparse representation domain

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101739546A (en) * 2008-11-05 2010-06-16 沈阳工业大学 Image cross reconstruction-based single-sample registered image face recognition method
CN101819628A (en) * 2010-04-02 2010-09-01 清华大学 Method for performing face recognition by combining rarefaction of shape characteristic
EP2577606A2 (en) * 2010-05-28 2013-04-10 Microsoft Corporation Facial analysis techniques


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
NA LIU et al.: "A facial sparse descriptor for single image based face recognition", Neurocomputing *

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105934757A (en) * 2014-01-30 2016-09-07 华为技术有限公司 Method and apparatus for detecting incorrect associations between keypoints of first image and keypoints of second image
CN105934757B (en) * 2014-01-30 2019-06-07 华为技术有限公司 A kind of method and apparatus of the key point for detecting the first image and the incorrect incidence relation between the key point of the second image
CN104318548A (en) * 2014-10-10 2015-01-28 西安电子科技大学 Rapid image registration implementation method based on space sparsity and SIFT feature extraction
CN104318548B (en) * 2014-10-10 2017-02-15 西安电子科技大学 Rapid image registration implementation method based on space sparsity and SIFT feature extraction
CN105787416A (en) * 2014-12-23 2016-07-20 Tcl集团股份有限公司 Mobile terminal-based face recognition method and system
CN104899576A (en) * 2015-06-23 2015-09-09 南京理工大学 Face identification feature extraction method based on Gabor conversion and HOG
CN105224957A (en) * 2015-10-23 2016-01-06 苏州大学 A kind of method and system of the image recognition based on single sample
CN105224957B (en) * 2015-10-23 2019-03-08 苏州大学 A kind of method and system of the image recognition based on single sample
CN105426836B (en) * 2015-11-17 2019-01-18 上海师范大学 A kind of single sample face recognition method based on branch's formula model and sparse component analysis
CN105426836A (en) * 2015-11-17 2016-03-23 上海师范大学 Single-sample face recognition method based on segmented model and sparse component analysis
CN106022241A (en) * 2016-05-12 2016-10-12 宁波大学 Face recognition method based on wavelet transformation and sparse representation
CN106022241B (en) * 2016-05-12 2019-05-03 宁波大学 A kind of face identification method based on wavelet transformation and rarefaction representation
CN106407958A (en) * 2016-10-28 2017-02-15 南京理工大学 Double-layer-cascade-based facial feature detection method
CN106407958B (en) * 2016-10-28 2019-12-27 南京理工大学 Face feature detection method based on double-layer cascade
CN106778531A (en) * 2016-11-25 2017-05-31 北京小米移动软件有限公司 Face detection method and device
CN107480628B (en) * 2017-08-10 2020-08-25 苏州大学 Face recognition method and device
CN107480628A (en) * 2017-08-10 2017-12-15 苏州大学 A kind of face identification method and device
CN107862270A (en) * 2017-10-31 2018-03-30 深圳云天励飞技术有限公司 Face classification device training method, method for detecting human face and device, electronic equipment
CN107886090A (en) * 2017-12-15 2018-04-06 苏州大学 A kind of single sample face recognition method, system, equipment and readable storage medium storing program for executing
CN107886090B (en) * 2017-12-15 2021-07-30 苏州大学 Single-sample face recognition method, system, equipment and readable storage medium
CN108681725A (en) * 2018-05-31 2018-10-19 西安理工大学 A kind of weighting sparse representation face identification method
CN109544537A (en) * 2018-11-26 2019-03-29 中国科学技术大学 The fast automatic analysis method of hip joint x-ray image
CN109614928A (en) * 2018-12-07 2019-04-12 成都大熊猫繁育研究基地 Panda recognition algorithms based on limited training data
CN110210511A (en) * 2019-04-19 2019-09-06 哈尔滨工业大学 A kind of improvement PCA-SIFT method for registering images based on cosine measure
CN111009004A (en) * 2019-11-24 2020-04-14 华南理工大学 Hardware optimization method for accelerating image matching
CN111009004B (en) * 2019-11-24 2023-05-23 华南理工大学 Hardware optimization method for accelerating image matching
CN111523454A (en) * 2020-04-22 2020-08-11 华东师范大学 Partial face recognition method based on sample expansion and point set matching
CN112380995A (en) * 2020-11-16 2021-02-19 华南理工大学 Face recognition method and system based on deep feature learning in sparse representation domain
CN112380995B (en) * 2020-11-16 2023-09-12 华南理工大学 Face recognition method and system based on deep feature learning in sparse representation domain

Similar Documents

Publication Publication Date Title
CN103413119A (en) Single sample face recognition method based on face sparse descriptors
WO2018072233A1 (en) Method and system for vehicle tag detection and recognition based on selective search algorithm
CN102663413B (en) Multi-gesture and cross-age oriented face image authentication method
CN107316031A (en) The image characteristic extracting method recognized again for pedestrian
CN107103317A (en) Fuzzy license plate image recognition algorithm based on image co-registration and blind deconvolution
CN103310194A (en) Method for detecting head and shoulders of pedestrian in video based on overhead pixel gradient direction
CN105956579A (en) Rapid finger vein identification method integrating fuzzy template and point characteristics
Zhang et al. A pedestrian detection method based on SVM classifier and optimized Histograms of Oriented Gradients feature
CN103886589A (en) Goal-oriented automatic high-precision edge extraction method
CN106682641A (en) Pedestrian identification method based on image with FHOG- LBPH feature
Peng et al. Recognition of low-resolution logos in vehicle images based on statistical random sparse distribution
Wang et al. Traffic sign detection using a cascade method with fast feature extraction and saliency test
CN104951793B (en) A kind of Human bodys' response method based on STDF features
Xu et al. Real-time pedestrian detection based on edge factor and Histogram of Oriented Gradient
CN104299009A (en) Plate number character recognition method based on multi-feature fusion
Kim et al. Autonomous vehicle detection system using visible and infrared camera
Cai et al. Traffic sign recognition algorithm based on shape signature and dual-tree complex wavelet transform
CN103186790A (en) Object detecting system and object detecting method
CN103425985B (en) A kind of face wrinkles on one's forehead detection method
CN103020614A (en) Human movement identification method based on spatio-temporal interest point detection
CN103745204B (en) A kind of figure and features feature comparison method based on macle point
Deng et al. Detection and recognition of traffic planar objects using colorized laser scan and perspective distortion rectification
CN107784263A (en) Based on the method for improving the Plane Rotation Face datection for accelerating robust features
Chouchane et al. 3D and 2D face recognition using integral projection curves based depth and intensity images
Prashanth et al. Off-line signature verification based on angular features

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 2013-11-27)