CN110096992A - Face recognition method based on collaborative representation with nonlinear fusion of Bhattacharyya coefficients - Google Patents

Face recognition method based on collaborative representation with nonlinear fusion of Bhattacharyya coefficients

Info

Publication number
CN110096992A
CN110096992A (application CN201910342740.6A)
Authority
CN
China
Prior art keywords
sample
virtual
test sample
formula
training sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910342740.6A
Other languages
Chinese (zh)
Other versions
CN110096992B (en)
Inventor
阎石
贾玉洁
邓佳璐
姚雯倩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lanzhou University
Original Assignee
Lanzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lanzhou University filed Critical Lanzhou University
Priority to CN201910342740.6A priority Critical patent/CN110096992B/en
Publication of CN110096992A publication Critical patent/CN110096992A/en
Application granted granted Critical
Publication of CN110096992B publication Critical patent/CN110096992B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; localisation; normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a face recognition method based on collaborative representation with nonlinear fusion of Bhattacharyya coefficients. The method generates new virtual samples by reinforcing the lower- and higher-intensity pixels of the original image while attenuating the remaining pixels; it then linearly represents the virtual test sample and the original test sample with the virtual training samples and the original training samples, respectively, computes the Bhattacharyya-coefficient similarity between the test sample and each training sample, nonlinearly fuses this similarity with the Euclidean distance, and finally assigns the test sample to the class of the training samples with the smallest residual value. The benefits of the invention are that introducing the histogram information of the Bhattacharyya-coefficient similarity into the collaborative representation algorithm supplements the Euclidean distance, and that the nonlinear fusion lets the merged virtual and original samples, and the merged Euclidean distance and Bhattacharyya-coefficient histogram information, combine more effectively, so that image classification accuracy is higher.

Description

Face recognition method based on collaborative representation with nonlinear fusion of Bhattacharyya coefficients
Technical field
The present invention relates to the technical field of image processing, and in particular to a face recognition method based on collaborative representation with nonlinear fusion of Bhattacharyya coefficients.
Background technique
With the rapid development of modern information technology, identity authentication has moved to the biometric level. Modern biometric technology closely combines computers with high-tech means and uses the intrinsic physiological and behavioral characteristics of the human body to identify individuals. The human face is a pattern set containing rich information and is one of the most prominent features by which people recognize one another. Compared with other biometric characteristics such as fingerprints, iris, or voice, face recognition is more direct and friendly, and can achieve excellent recognition results without interfering with people's normal behavior.
Face recognition is widely used in identity verification, access control, and related areas, and is a current research hotspot in pattern recognition and artificial intelligence. Because equipment using face recognition technology can be placed arbitrarily and concealed well, and can lock onto a target quickly, at a distance, and without contact, the technology is widely deployed in public security systems at home and abroad on a large scale. In practical applications, however, a large number of training samples for classification and feature extraction often cannot be obtained. On the one hand, the storage space of a face recognition system is limited and cannot accommodate a large number of training samples; on the other hand, multiple face photographs of the same subject cannot be obtained in a short time for training. Limited training samples cannot comprehensively express the expression and position changes of a face under varying illumination conditions, so it is difficult to improve recognition accuracy. It is therefore particularly important to improve the recognition rate for rapidly identifying face images when training samples are limited.
Summary of the invention
To solve the above problems, the purpose of the present invention is to provide a face recognition method based on collaborative representation with nonlinear fusion of Bhattacharyya coefficients, for quickly recognizing face images and meeting, under the constraint of limited training samples, the accuracy requirements of rapid small-sample recognition in real scenes.
To achieve the above object, the present invention provides a face recognition method based on collaborative representation with nonlinear fusion of Bhattacharyya coefficients, comprising the following steps:
Step 1: enhance the intensity of the moderate-intensity pixels of the original image and reduce the intensity of the other pixels to generate a virtual sample; then reinforce the lower- and higher-intensity pixels of the original image and reduce the other pixels to generate a new virtual sample; select part of the original images as original training samples and use the remaining original images as test samples;
Step 2: using the collaborative representation algorithm, linearly represent the virtual test sample, the new virtual test sample, and the original test sample with the virtual training samples, the new virtual training samples, and the original training samples, respectively;
Step 3: compute the Bhattacharyya-coefficient similarities between the virtual test sample, the new virtual test sample, and the original test sample on one side, and the virtual training samples, the new virtual training samples, and the original training samples on the other;
Step 4: nonlinearly fuse the Bhattacharyya-coefficient similarities computed in step 3 with the Euclidean distances;
Step 5: classify the virtual test sample, the new virtual test sample, and the original test sample according to the result of step 4: merge all obtained residuals by class and, based on the fused residuals, assign the original test sample, the virtual test sample, and the new virtual test sample to the class with the smallest residual value.
As a further improvement of the invention, in step 1 the image of the new virtual sample is expressed as,
wherein Iij denotes the pixel value at row i, column j of the original image, and Jij denotes the pixel value at row i, column j of the new virtual sample image.
As a further improvement of the invention, step 2 specifically includes:
Step 201: convert the image matrices into column vectors, with the vector form of the test sample Z and the vector forms of the original training samples xi satisfying formula (1)
wherein ai denotes a representation coefficient;
letting X collect the training-sample vectors and A = [a1, ..., aN], formula (1) can be rewritten as formula (2)
Step 202: compute the coefficient vector A, where λ denotes a positive constant (λ = 0.01) and I is an identity matrix; compute the similarity between each class of training samples and the test sample by formula (3), i.e. the representation residual:
wherein dk is the representation residual between each class of training samples and the test sample.
Step 203: apply the collaborative representation algorithm to the virtual training samples X′1 to linearly represent the virtual test sample, and compute the representation residual by formula (4):
wherein dl is the representation residual between each class of virtual training samples and the virtual test sample, and the representation coefficients express the virtual test sample linearly in terms of the virtual training samples.
Step 204: apply the collaborative representation algorithm to the new virtual training samples X′2 to linearly represent the new virtual test sample, and compute the representation residual by formula (5),
wherein dq is the representation residual between each new class of virtual training samples and the new virtual test sample, and the representation coefficients express the new virtual test sample linearly in terms of the new virtual training samples.
As a further improvement of the invention, step 3 specifically includes:
Step 301: compute the histogram similarity between the original test sample and the class-i sample xi in the original training set; the Bhattacharyya-coefficient formula is formula (6)
wherein p and q′i respectively denote the histogram data of the original test sample Z and of the class-i training sample xi;
Step 302: compute the histogram similarity between the virtual test sample and the class-i sample x′1i in the virtual training set; the Bhattacharyya-coefficient formula is formula (7):
wherein p1 and q′1i respectively denote the histogram data of the virtual test sample and of the class-i virtual training sample x′1i.
Step 303: compute the histogram similarity between the new virtual test sample and the class-i sample x′2i in the new virtual training set; the Bhattacharyya-coefficient formula is formula (8):
As a further improvement of the invention, step 4 specifically includes:
Step 401: nonlinearly fuse, at the score level, the minimum of the Bhattacharyya-coefficient histogram similarity e between each class of original training samples X and the original test sample with the residual values; exponential nonlinear fusion generates the new residual resik by formula (9), and logarithmic nonlinear fusion generates the new residual distk by formula (10)
Step 402: nonlinearly fuse, at the score level, the minimum of the Bhattacharyya-coefficient histogram similarity e1 between each class of virtual training samples X′1 and the virtual test sample with the residual values, generating new residuals by formula (11) and formula (12),
Step 403: nonlinearly fuse, at the score level, the minimum of the Bhattacharyya-coefficient histogram similarity e2 between each new class of virtual training samples X′2 and the new virtual test sample with the residual values, generating new residuals by formula (13) and formula (14),
Step 404: nonlinearly fuse all newly obtained residuals, generating new residuals by formula (15) and formula (16),
As a further improvement of the invention, in step 5 the class labels of the original test sample, the virtual test sample, and the new virtual test sample are defined and expressed as formula (17) and formula (18),
The benefits of the invention are as follows. The new virtual samples strengthen detail information of the face image that is otherwise ignored: on top of the virtual samples that enhance the moderate-intensity pixels of the original image and reduce the intensity of the other pixels, new virtual samples are generated by reinforcing the lower- and higher-intensity pixels and reducing the rest. The test sample, virtual test sample, and new virtual test sample are then linearly represented by the original, virtual, and new virtual training samples, respectively; the Bhattacharyya-coefficient similarity between test samples and training samples is computed; this similarity is nonlinearly fused with the Euclidean distance; and the test sample is finally assigned to the class of the training samples with the smallest residual value. The added new virtual samples complement the original virtual samples and strengthen detail information of the face picture that is easily ignored. In the nonlinear fusion, the residual and similarity information are processed with logarithmic and exponential functions, which widens the gaps between small residual values; in the classification stage, the Euclidean distance is fused with the histogram information of the Bhattacharyya-coefficient similarity, so that the information produced by the two different methods combines better and the importance of similar residuals is consolidated. Because the residual and similarity information are processed with logarithmic and exponential functions before fusion, the gaps between small residual values increase, improving the accuracy of classifying the test sample.
Detailed description of the invention
Fig. 1 is a flowchart of the face recognition method based on collaborative representation with nonlinear fusion of Bhattacharyya coefficients according to the present invention;
Fig. 2 is a schematic diagram of samples from the ORL database used by the method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of samples from the GT database used by the method according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of samples from the FERET database used by the method according to an embodiment of the present invention.
Specific embodiment
The present invention is described in further detail below through specific embodiments and with reference to the accompanying drawings.
As shown in Fig. 1, the face recognition method based on collaborative representation with nonlinear fusion of Bhattacharyya coefficients according to an embodiment of the present invention comprises the following steps:
Step 1: enhance the intensity of the moderate-intensity pixels of the original image and reduce the intensity of the other pixels to generate a virtual sample; then reinforce the lower- and higher-intensity pixels of the original image and reduce the other pixels to generate a new virtual sample; select part of the original images as original training samples and use the remaining original images as test samples;
Step 2: using the collaborative representation algorithm, linearly represent the virtual test sample, the new virtual test sample, and the original test sample with the virtual training samples, the new virtual training samples, and the original training samples, respectively;
Step 3: compute the Bhattacharyya-coefficient similarities between the virtual test sample, the new virtual test sample, and the original test sample on one side, and the virtual training samples, the new virtual training samples, and the original training samples on the other;
Step 4: nonlinearly fuse the Bhattacharyya-coefficient similarities computed in step 3 with the Euclidean distances;
Step 5: classify the virtual test sample, the new virtual test sample, and the original test sample according to the result of step 4: merge all obtained residuals by class and, based on the fused residuals, assign the original test sample, the virtual test sample, and the new virtual test sample to the class with the smallest residual value.
Further, in step 1, the image of the virtual sample is expressed as Jij = Iij * (m − Iij), where Iij denotes the intensity of the pixel at row i, column j of the original image and Jij denotes the intensity of the pixel at row i, column j of the virtual sample image. The image of the new virtual sample is expressed as:
wherein Iij denotes the pixel value at row i, column j of the original image, and Jij denotes the pixel value at row i, column j of the new virtual sample image.
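As an illustration of the virtual-sample generation in step 1, a minimal NumPy sketch follows, assuming 8-bit grayscale images (m = 255). The first transform Jij = Iij * (m − Iij) is stated in the text; the formula for the new virtual sample is not reproduced here, so the complement-style weighting used for it below is an assumed stand-in matching the verbal description (reinforce low- and high-intensity pixels, attenuate the rest), not the patented formula:

```python
import numpy as np

def virtual_sample(img, m=255):
    """First virtual sample from the text: J_ij = I_ij * (m - I_ij).

    Emphasizes mid-intensity pixels and suppresses very dark and very
    bright ones; the result is rescaled back to the [0, m] range.
    """
    I = img.astype(np.float64)
    J = I * (m - I)
    return (J / J.max() * m).astype(np.uint8)

def new_virtual_sample(img, m=255):
    """Hypothetical 'new' virtual sample: the text only describes
    reinforcing lower- and higher-intensity pixels while reducing the
    rest, so this complement of the mid-intensity emphasis is an
    assumption, not the exact patented formula.
    """
    I = img.astype(np.float64)
    J = I * (m - I)
    return (m - J / J.max() * m).astype(np.uint8)
```

With a pixel row [0, 128, 255], the first transform peaks at the mid-intensity pixel, while the assumed new transform does the opposite, which is the qualitative behavior the description asks for.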
Further, in step 2, suppose there are c classes of face samples with n training samples per class. Let x1, ..., xN denote all N original training samples (N = nc), and assume xi ∈ R^(P×Q) denotes the i-th training sample, i ∈ {1, 2, ..., N}.
The collaborative representation algorithm specifically includes:
Step 201: convert the image matrices into column vectors, with the vector form of the test sample Z and the vector forms of the original training samples xi satisfying formula (1)
wherein ai denotes a representation coefficient;
letting X collect the training-sample vectors and A = [a1, ..., aN], formula (1) can be rewritten as formula (2)
Step 202: compute the coefficient vector A, where λ denotes a positive constant (λ = 0.01) and I is an identity matrix, and compute the similarity between each class of training samples and the test sample by formula (3), i.e. the representation residual:
wherein dk is the representation residual between each class of training samples and the test sample;
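The coefficient solve in step 202 matches the standard collaborative representation classification (CRC) form A = (XᵀX + λI)⁻¹ Xᵀz. Since formulas (2) and (3) are not reproduced in this text, the per-class residual below uses the usual CRC residual ‖z − Xk·Ak‖₂ as an assumed stand-in:

```python
import numpy as np

def crc_residuals(X, z, labels, lam=0.01):
    """Collaborative representation sketch for step 202.

    X: (d, N) matrix whose columns are vectorized training samples;
    z: (d,) vectorized test sample; labels: length-N class labels.
    Solves A = (X^T X + lam*I)^{-1} X^T z, then computes a per-class
    residual d_k = ||z - X_k A_k||_2 using only class k's columns
    (the standard CRC residual; formula (3) itself is not shown here).
    """
    N = X.shape[1]
    A = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ z)
    residuals = {}
    for k in np.unique(labels):
        mask = np.asarray(labels) == k
        residuals[k] = np.linalg.norm(z - X[:, mask] @ A[mask])
    return residuals
```

With λ small (0.01 as in the text), a test vector that equals a class's training vector yields a near-zero residual for that class and a large residual for the others, which is the discrimination the method relies on.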
Step 203: apply the collaborative representation algorithm to the virtual training samples X′1 to linearly represent the virtual test sample, and compute the representation residual by formula (4):
wherein dl is the representation residual between each class of virtual training samples and the virtual test sample;
Step 204: apply the collaborative representation algorithm to the new virtual training samples X′2 to linearly represent the new virtual test sample, and compute the representation residual by formula (5),
wherein dq is the representation residual between each new class of virtual training samples and the new virtual test sample, and the representation coefficients express the new virtual test sample linearly in terms of the new virtual training samples.
Further, step 3 specifically includes:
Step 301: compute the histogram similarity between the original test sample and the class-i sample xi in the original training set; the Bhattacharyya-coefficient formula is formula (6)
wherein p and q′i respectively denote the histogram data of the original test sample Z and of the class-i training sample xi;
Step 302: compute the histogram similarity between the virtual test sample and the class-i sample x′1i in the virtual training set; the Bhattacharyya-coefficient formula is formula (7):
wherein p1 and q′1i respectively denote the histogram data of the virtual test sample and of the class-i virtual training sample x′1i.
Step 303: compute the histogram similarity between the new virtual test sample and the class-i sample x′2i in the new virtual training set; the Bhattacharyya-coefficient formula is formula (8):
wherein p2 and q′2i respectively denote the histogram data of the new virtual test sample and of the class-i new virtual training sample x′2i.
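The Bhattacharyya coefficient underlying formulas (6)–(8) can be sketched as follows; the 256-bin grayscale histogram is an assumption, since the text does not specify the binning:

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two histograms:
    BC(p, q) = sum_i sqrt(p_i * q_i) after normalizing both to sum to 1.
    Equals 1 for identical distributions and 0 for disjoint ones."""
    p = np.asarray(p, dtype=np.float64); p = p / p.sum()
    q = np.asarray(q, dtype=np.float64); q = q / q.sum()
    return float(np.sqrt(p * q).sum())

def hist_similarity(img_a, img_b, bins=256):
    # Grayscale-histogram similarity as used in step 3
    # (the 256-bin choice is an assumption, not stated in the text).
    ha, _ = np.histogram(img_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(img_b, bins=bins, range=(0, 256))
    return bhattacharyya(ha, hb)
```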
Further, step 4 specifically includes:
Step 401: nonlinearly fuse, at the score level, the minimum of the Bhattacharyya-coefficient histogram similarity e between each class of original training samples X and the original test sample with the residual values; exponential nonlinear fusion generates the new residual resik by formula (9), and logarithmic nonlinear fusion generates the new residual distk by formula (10)
Step 402: nonlinearly fuse, at the score level, the minimum of the Bhattacharyya-coefficient histogram similarity e1 between each class of virtual training samples X′1 and the virtual test sample with the residual values, generating new residuals by formula (11) and formula (12),
Step 403: nonlinearly fuse, at the score level, the minimum of the Bhattacharyya-coefficient histogram similarity e2 between each new class of virtual training samples X′2 and the new virtual test sample with the residual values, generating new residuals by formula (13) and formula (14),
Step 404: nonlinearly fuse all newly obtained residuals, generating new residuals by formula (15) and formula (16),
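Because formulas (9)–(16) are not reproduced in this text, the sketch below only illustrates the general idea of score-level nonlinear fusion with exponential and logarithmic transforms; both functional forms are assumptions, not the patented formulas:

```python
import numpy as np

def fuse_scores(residuals, similarities):
    """Illustrative score-level nonlinear fusion (assumed forms).

    The exponential variant penalizes classes with low histogram
    similarity; the logarithmic variant compresses large residuals so
    gaps between small residuals are widened, as the description says.
    """
    r = np.asarray(residuals, dtype=np.float64)
    e = np.asarray(similarities, dtype=np.float64)  # Bhattacharyya, in [0, 1]
    resi = r * np.exp(1.0 - e)      # exponential fusion (assumed form)
    dist = np.log1p(r) * (2.0 - e)  # logarithmic fusion (assumed form)
    return resi, dist

def classify(fused):
    # Step 5: pick the class index with the smallest fused residual.
    return int(np.argmin(fused))
```

Under either transform, a class with both a small residual and a high Bhattacharyya similarity keeps the smallest fused score, which is the behavior the classification step depends on.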
Further, in step 5, the class labels of the original test sample, the virtual test sample, and the new virtual test sample are defined and expressed as formula (17) and formula (18),
An embodiment of the present invention conducts rapid small-sample recognition experiments on the ORL, GT, and FERET datasets. The ORL dataset contains samples from 40 people, each providing 10 pictures taken at different times and containing rich facial expressions; Fig. 2 shows example images from the ORL dataset. The GT (Georgia Tech) face database contains face images of 50 people; each person has 15 color images against complex backgrounds, showing frontal faces with different expressions, lighting conditions, and angles. The background of each image is first removed and the image then converted to grayscale; Fig. 3 shows example images from the GT dataset. The FERET sub-database contains 700 images from 100 people, each providing 7 images with different pose changes and illumination; Fig. 4 shows example images from the FERET sub-dataset.
These three datasets cover different times and rich changes of facial expression, pose, and lighting. For each dataset, the first 1, 2, 3, and 4 face images of each person are taken in turn as original training samples, with the remaining face images used as test samples, to measure the runtime and recognition rate of the proposed algorithm. The comparative experiments use the classical MSA, LRC, RBTM, CIRLRC, NFRFR, KRBM, FSSP, and DSSR algorithms. The experimental results are shown in Table 1, Table 2, and Table 3:
Table 1
Table 2
Table 3
As Tables 1, 2, and 3 show, the ORL database mainly involves small-scale face changes (eyes open or closed, head raised or lowered, small-scale deflection, etc.), the GT database mainly involves combined expressions (smiling, frowning, large-scale head turning, etc.) and lighting-angle changes, and the FERET database mainly involves illumination and pose changes. Compared with other methods of the same type, this method shows a certain robustness in recognizing facial expression and lighting changes. On the basis of the same virtual samples as the MSA algorithm, the added new virtual samples complement the virtual samples in MSA and strengthen detail information in face pictures that is easily ignored, so the recognition rate here has a certain advantage. Since the present invention uses the Euclidean distance fused with the histogram information of the Bhattacharyya-coefficient similarity, and many experiments show that the misrecognition overlap of the two is low, the recognition rate after fusion can be improved. Moreover, the present invention adopts nonlinear fusion and preprocesses the residual data, so that the gaps between smaller residuals increase, which is more conducive to classifying the test sample.
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the invention; for those skilled in the art, the invention may be variously modified and varied. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (6)

1. A face recognition method based on collaborative representation with nonlinear fusion of Bhattacharyya coefficients, characterized in that the method comprises the following steps:
Step 1: enhance the intensity of the moderate-intensity pixels of the original image and reduce the intensity of the other pixels to generate a virtual sample; then reinforce the lower- and higher-intensity pixels of the original image and reduce the other pixels to generate a new virtual sample; select part of the original images as original training samples and use the remaining original images as test samples;
Step 2: using the collaborative representation algorithm, linearly represent the virtual test sample, the new virtual test sample, and the original test sample with the virtual training samples, the new virtual training samples, and the original training samples, respectively;
Step 3: compute the Bhattacharyya-coefficient similarities between the virtual test sample, the new virtual test sample, and the original test sample on one side, and the virtual training samples, the new virtual training samples, and the original training samples on the other;
Step 4: nonlinearly fuse the Bhattacharyya-coefficient similarities computed in step 3 with the Euclidean distances;
Step 5: classify the virtual test sample, the new virtual test sample, and the original test sample according to the result of step 4: merge all obtained residuals by class and, based on the fused residuals, assign the original test sample, the virtual test sample, and the new virtual test sample to the class with the smallest residual value.
2. The face recognition method based on collaborative representation with nonlinear fusion of Bhattacharyya coefficients according to claim 1, characterized in that, in step 1, the image of the virtual sample is expressed as Jij = Iij * (m − Iij), where Iij denotes the pixel value at row i, column j of the original image and Jij denotes the pixel value at row i, column j of the virtual sample image, and the image of the new virtual sample is expressed as:
wherein Iij denotes the pixel value at row i, column j of the original image, and Jij denotes the pixel value at row i, column j of the virtual sample image.
3. The face recognition method based on collaborative representation with nonlinear fusion of Bhattacharyya coefficients according to claim 1, characterized in that step 2 specifically comprises:
Step 201: convert the image matrices into column vectors, with the vector form of the test sample Z and the vector forms of the original training samples xi satisfying formula (1)
wherein ai denotes a representation coefficient;
letting X collect the training-sample vectors and A = [a1, ..., aN], formula (1) can be rewritten as formula (2)
Step 202: compute the coefficient vector A, where λ denotes a positive constant (λ = 0.01) and I is an identity matrix, and compute the similarity between each class of training samples and the test sample by formula (3), i.e. the representation residual:
wherein dk is the representation residual between each class of training samples and the test sample;
Step 203: apply the collaborative representation algorithm to the virtual training samples X′1 to linearly represent the virtual test sample, and compute the representation residual by formula (4),
wherein dl is the representation residual between each class of virtual training samples and the virtual test sample, and the representation coefficients express the virtual test sample linearly in terms of the virtual training samples;
Step 204: apply the collaborative representation algorithm to the new virtual training samples X′2 to linearly represent the new virtual test sample, and compute the representation residual by formula (5),
wherein dq is the representation residual between each new class of virtual training samples and the new virtual test sample, and the representation coefficients express the new virtual test sample linearly in terms of the new virtual training samples.
4. The face recognition method based on collaborative representation with nonlinear fusion of Bhattacharyya coefficients according to claim 1, characterized in that step 3 specifically comprises:
Step 301: compute the histogram similarity between the original test sample and the class-i sample xi in the original training set; the Bhattacharyya-coefficient formula is formula (6)
wherein p and q′i respectively denote the histogram data of the original test sample Z and of the class-i training sample xi;
Step 302: compute the histogram similarity between the virtual test sample and the class-i sample x′1i in the virtual training set; the Bhattacharyya-coefficient formula is formula (7):
wherein p1 and q′1i respectively denote the histogram data of the virtual test sample and of the class-i virtual training sample x′1i;
Step 303: compute the histogram similarity between the new virtual test sample and the class-i sample x′2i in the new virtual training set; the Bhattacharyya-coefficient formula is formula (8):
wherein p2 and q′2i respectively denote the histogram data of the new virtual test sample and of the class-i new virtual training sample x′2i.
5. The face recognition method based on collaborative representation with nonlinear fusion of Bhattacharyya coefficients according to claim 1, characterized in that step 4 specifically comprises:
Step 401: nonlinearly fuse, at the score level, the minimum of the Bhattacharyya-coefficient histogram similarity e between each class of original training samples X and the original test sample with the residual values; exponential nonlinear fusion generates the new residual resik by formula (9), and logarithmic nonlinear fusion generates the new residual distk by formula (10),
Step 402: nonlinearly fuse, at the score level, the minimum of the Bhattacharyya-coefficient histogram similarity e1 between each class of virtual training samples X′1 and the virtual test sample with the residual values, generating new residuals by formula (11) and formula (12),
Step 403: nonlinearly fuse, at the score level, the minimum of the Bhattacharyya-coefficient histogram similarity e2 between each new class of virtual training samples X′2 and the new virtual test sample with the residual values, generating new residuals by formula (13) and formula (14),
Step 404: nonlinearly fuse all newly obtained residuals, generating new residuals by formula (15) and formula (16).
6. The face recognition method based on collaborative representation with nonlinear fusion of Bhattacharyya coefficients according to claim 1, characterized in that, in step 5, the class labels of the original test sample, the virtual test sample, and the new virtual test sample are defined and expressed as formula (17) and formula (18).
CN201910342740.6A 2019-04-26 2019-04-26 Face recognition method based on collaborative representation nonlinear fusion Bhattacharyya coefficient Active CN110096992B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910342740.6A CN110096992B (en) 2019-04-26 2019-04-26 Face recognition method based on collaborative representation nonlinear fusion Bhattacharyya coefficient


Publications (2)

Publication Number Publication Date
CN110096992A true CN110096992A (en) 2019-08-06
CN110096992B CN110096992B (en) 2022-12-16

Family

ID=67445891


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115171190A (en) * 2022-07-23 2022-10-11 贵州华数云谷科技有限公司 Virtual image generation and fusion method for face recognition

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104700076A (en) * 2015-02-13 2015-06-10 电子科技大学 Face image virtual sample generating method
US20160034789A1 (en) * 2014-08-01 2016-02-04 TCL Research America Inc. System and method for rapid face recognition
CN105426871A (en) * 2015-12-16 2016-03-23 华南理工大学 Similarity measure computation method suitable for moving pedestrian re-identification
CN105787430A (en) * 2016-01-12 2016-07-20 南通航运职业技术学院 Two-stage face recognition method combining weighted collaborative representation and linear representation classification
CN107273845A (en) * 2017-06-12 2017-10-20 大连海事大学 Facial expression recognition method based on confidence region and multi-feature weighted fusion
CN108376256A (en) * 2018-05-08 2018-08-07 兰州大学 Dynamic face recognition system based on an ARM processing platform and device thereof


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
CHENGJUN XIE et al.: "Collaborative object tracking model with local sparse representation", Journal of Visual Communication and Image Representation *
CHUNWEI TIAN et al.: "Integrating Sparse and Collaborative Representation Classifications for Image Classification", International Journal of Image and Graphics *
YONG XU et al.: "Using the original and 'symmetrical face' training samples to perform representation based two-step face recognition", Pattern Recognition *
YONG XU et al.: "Multiple representations and sparse representation for image classification", Pattern Recognition Letters *
HE GANG et al.: "FLDA single-training-sample face recognition method based on mirror faces", Computer & Digital Engineering *



Similar Documents

Publication Publication Date Title
Wang et al. Research on face recognition based on deep learning
Shao et al. Joint discriminative learning of deep dynamic textures for 3D mask face anti-spoofing
Li et al. A review of face recognition technology
Lin et al. Face liveness detection by rppg features and contextual patch-based cnn
Peng et al. Face presentation attack detection using guided scale texture
LeCun et al. Learning methods for generic object recognition with invariance to pose and lighting
CN106203391A (en) Face identification method based on intelligent glasses
KR20130037734A (en) A system for real-time recognizing a face using radial basis function neural network algorithms
Mao et al. Face occlusion recognition with deep learning in security framework for the IoT
CN107220598B (en) Iris image classification method based on deep learning features and Fisher Vector coding model
CN110263670A (en) A kind of face Local Features Analysis system
AU2013271337A1 (en) Biometric verification
Deng et al. Similarity-preserving image-image domain adaptation for person re-identification
Zeng et al. A survey of micro-expression recognition methods based on lbp, optical flow and deep learning
Hannan et al. Analysis of detection and recognition of Human Face using Support Vector Machine
Kumar et al. One-shot face recognition
CN110096992A (en) A kind of face identification method indicating non-linear fusion Pasteur coefficient based on collaboration
Li et al. Foldover features for dynamic object behaviour description in microscopic videos
Vareto et al. Open-set face recognition with maximal entropy and Objectosphere loss
Bhattacharya et al. Qdf: A face database with varying quality
CN110287973B (en) Image feature extraction method based on low-rank robust linear discriminant analysis
Dumitrescu et al. Combining neural networks and global gabor features in a hybrid face recognition system
Hassanpour et al. ChatGPT and biometrics: an assessment of face recognition, gender detection, and age estimation capabilities
Su et al. An enhanced siamese angular softmax network with dual joint-attention for person re-identification
Liang et al. Exploring regularized feature selection for person specific face verification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant