CN107729840A - A sparse graphical representation based discriminant analysis method for face recognition - Google Patents

A sparse graphical representation based discriminant analysis method for face recognition

Info

Publication number
CN107729840A
CN107729840A (application CN201710953190.2A)
Authority
CN
China
Prior art keywords
discriminant analysis
sketch
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201710953190.2A
Other languages
Chinese (zh)
Inventor
夏春秋 (Xia Chunqiu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Vision Technology Co Ltd
Original Assignee
Shenzhen Vision Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Vision Technology Co Ltd filed Critical Shenzhen Vision Technology Co Ltd
Priority to CN201710953190.2A
Publication of CN107729840A
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F 18/2135 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F 18/2136 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on sparsity criteria, e.g. with an overcomplete basis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/2148 Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification

Abstract

The present invention proposes a sparse graphical representation based discriminant analysis method for face recognition. Its main contents include: adaptive sparse graphical representation, discriminant analysis based on spatial partitioning, and a discriminant analysis method for heterogeneous face recognition. The process is as follows: a representation dataset of face sketch-photo pairs is first constructed and each patch is represented by a feature descriptor; the adaptive sparse graphical representation is then refined through discriminant analysis for face matching; the representations of all face-image patches are simply concatenated, spatial partition strategies are applied and discriminant analysis is performed on each region; finally, the similarity scores of the three resulting vectors are computed with the cosine similarity measure and fused. The discriminant analysis method for heterogeneous face recognition of the present invention refines the adaptive sparse representation for face matching, improves recognition capability and discriminability, effectively eliminates the influence of large texture (i.e. style) differences, and substantially improves face recognition performance.

Description

A sparse graphical representation based discriminant analysis method for face recognition
Technical field
The present invention relates to the field of face recognition, and more particularly to a sparse graphical representation based discriminant analysis method for face recognition.
Background art
As a form of biometric identification, face recognition is direct, convenient and easily accepted compared with other biometric technologies. Because of its broad application prospects and great scientific value, it has received increasing attention from researchers and has made great progress in recent years. It can be widely applied in fields such as government, the military, banking, social welfare, e-commerce and security, for example in enterprise and residential face recognition access control and attendance systems and face recognition anti-theft doors. In public security, judicial and criminal investigation work, networked face recognition systems can be used to track down fugitives, and combined with electronic passports and identity cards they can perform identity authentication in crowded places such as airports and railway stations, safeguarding public safety. However, traditional face recognition techniques are easily affected by large texture (i.e. style) differences, their recognition performance is low, and they are difficult to popularize.
The present invention proposes a sparse graphical representation based discriminant analysis method for face recognition. A representation dataset of face sketch-photo pairs is first constructed and each patch is represented by a feature descriptor; the adaptive sparse graphical representation is then refined through discriminant analysis for face matching; the representations of all face-image patches are simply concatenated, spatial partition strategies are applied and discriminant analysis is performed on each region; finally, the similarity scores of the three resulting vectors are computed with the cosine similarity measure and fused. The discriminant analysis method for heterogeneous face recognition of the present invention refines the adaptive sparse representation for face matching, improves recognition capability and discriminability, effectively eliminates the influence of texture (i.e. style) differences, and substantially improves face recognition performance.
Summary of the invention
To address the problem of low recognition performance, the object of the present invention is to provide a sparse graphical representation based discriminant analysis method for face recognition. A representation dataset of face sketch-photo pairs is first constructed and each patch is represented by a feature descriptor; the adaptive sparse graphical representation is then refined through discriminant analysis for face matching; the representations of all face-image patches are simply concatenated, spatial partition strategies are applied and discriminant analysis is performed on each region; finally, the similarity scores of the three resulting vectors are computed with the cosine similarity measure and fused.
To solve the above problems, the present invention provides a sparse graphical representation based discriminant analysis method for face recognition, whose main contents include:
(1) adaptive sparse graphical representation;
(2) discriminant analysis based on spatial partitioning;
(3) a discriminant analysis method for heterogeneous face recognition.
For the described adaptive sparse graphical representation, a representation dataset composed of M face sketch-photo pairs is first constructed; each face image is divided into N overlapping patches, and each patch is represented by a feature descriptor; given a probe sketch t and a gallery photo g, they are likewise divided into patches and each patch is represented with the feature descriptor, in the same way as the representation dataset.
Further, regarding the described feature descriptor: let y_i (i = 1, 2, ..., N) denote a sketch patch and f(y_i) the feature descriptor corresponding to y_i; based on the Euclidean distance between feature descriptors, the patch closest to y_i is selected from each face sketch in the representation dataset within an R × R search region around the position of y_i; M related sketch patches can therefore be found for the sketch patch y_i, and related patches of a gallery photo patch x_i can be found in the same way.
Further, regarding the described patches: the K nearest neighbors are selected from the related patches, and the sketch patch y_i can be regarded as a linear combination of its K nearest neighbors weighted by the column vector w_{y_i}; a Markov network model can then be built by jointly modeling all sketch patches and their neighborhoods:

$$p(w_{y_1},\ldots,w_{y_N},\,y_1,\ldots,y_N)=\prod_{i}\Phi\!\left(f(y_i),\,f(w_{y_i})\right)\prod_{(i,j)\in\Xi}\Psi\!\left(w_{y_i},\,w_{y_j}\right)\qquad(1)$$

where (i, j) ∈ Ξ indicates that the i-th and j-th sketch patches are neighbors.
Further, regarding the described linear combination: f(w_{y_i}) denotes the linear combination of the feature descriptors of the K nearest neighbors, with the weights obtained as follows:

$$\min_{w}\; w^{T}Qw+w^{T}c\quad\text{s.t.}\quad\sum_{k=1}^{K}w_{y_i,k}=1,\;0\le w_{y_i,k}\le 1,\;\; i=1,2,\ldots,N,\;k=1,2,\ldots,K\qquad(2)$$

where w is the concatenation of the vectors w_{y_i}, and Q and c are the parameters of the quadratic objective; problem (2) can be solved by a cascade decomposition method.
Further, regarding the described problem (2): it can be reformulated as the optimization problem shown below, in which the weights are defined over all M candidates in the representation dataset:

$$\min_{w}\; w^{T}Qw+w^{T}c\quad\text{s.t.}\quad\sum_{m=1}^{M}w_{y_i,m}=1,\;0\le w_{y_i,m}\le 1,\;\; i=1,2,\ldots,N,\;m=1,2,\ldots,M\qquad(3)$$

The constraint in problem (3) on each w_{y_i} is identical to

$$\|w_{y_i}\|_1=1,\quad 0\le w_{y_i,m}\le 1\qquad(4)$$

which is a non-negative sparse regularization.
For the discriminant analysis based on spatial partitioning, after obtaining the adaptive sparse graphical representations of the sketch image and the gallery photo, these representations are refined by discriminant analysis for face matching; the representations of all face-image patches are simply concatenated, and classical subspace analysis, such as principal component analysis (PCA) and linear discriminant analysis (LDA), is then applied to extract discriminative information for matching; the discriminant analysis based on spatial partitioning includes three spatial partition strategies.
Further, regarding the three described spatial partition strategies: in the column-based strategy, K_c columns of image patches are combined into one spatial partition; discriminant analysis is then carried out separately on each spatial partition, and the extracted features are merged for matching; to exploit the row-based spatial partition strategy, K_r rows of image patches likewise form one spatial partition and discriminant analysis is carried out on each region.
Further, regarding the described discriminant analysis: the experimental section discusses the influence of different K_c, K_r and K_l values, and the optimal K_c, K_r and K_l are used; in the discriminant analysis performed on each spatial partition, PCA is first applied and components accounting for 99% of the variance are retained; LDA is then applied to further reduce the dimensionality and improve discriminability; finally, all projection vectors of the same face image are concatenated, and the similarity scores between the sketch image and the gallery photos are computed with the cosine similarity measure.
For the described discriminant analysis method for heterogeneous face recognition: first, the face image is divided into patches and each image patch is represented with a common feature descriptor; second, for a sketch image (or gallery photo), a Markov network model is built on the sketch patches (or gallery photo patches) of the representation dataset and the features of the probe sketch patches (or photo patches); the adaptive sparse graphical representation of the input image can then be generated by solving formula (3); third, the row-based, column-based and learning-based spatial partition strategies are applied to optimize the adaptive sparse graphical representation and improve its discriminability; finally, the similarity scores of the three resulting vectors are computed with the cosine similarity measure and then fused.
Brief description of the drawings
Fig. 1 is the system framework diagram of a sparse graphical representation based discriminant analysis method for face recognition according to the present invention.
Fig. 2 shows the spatial partition strategies of a sparse graphical representation based discriminant analysis method for face recognition according to the present invention.
Fig. 3 shows the discriminant analysis method for heterogeneous face recognition of a sparse graphical representation based discriminant analysis method for face recognition according to the present invention.
Detailed description of the embodiments
It should be noted that, where no conflict arises, the embodiments of the present application and the features therein may be combined with each other. The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Fig. 1 is the system framework diagram of a sparse graphical representation based discriminant analysis method for face recognition according to the present invention. The method mainly includes adaptive sparse graphical representation, discriminant analysis based on spatial partitioning, and a discriminant analysis method for heterogeneous face recognition.
For the adaptive sparse graphical representation, a representation dataset composed of M face sketch-photo pairs is first constructed; each face image is divided into N overlapping patches, and each patch is represented by a feature descriptor; given a probe sketch t and a gallery photo g, they are likewise divided into patches and each patch is represented with the feature descriptor, in the same way as the representation dataset.
Let y_i (i = 1, 2, ..., N) denote a sketch patch and f(y_i) the feature descriptor corresponding to y_i. Based on the Euclidean distance between feature descriptors, the patch closest to y_i is selected from each face sketch in the representation dataset within an R × R search region around the position of y_i. M related sketch patches can therefore be found for the sketch patch y_i; related patches of a gallery photo patch x_i can be found in the same way.
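For illustration, a minimal Python sketch of this candidate-patch search is given below. It assumes grayscale images, raw-pixel patch vectors as the descriptor f(·), and a symmetric search window around the patch position; the function names (extract_patches, search_candidates) and the window handling are illustrative choices, not part of the claimed method.

```python
import numpy as np

def extract_patches(image, patch_size, stride):
    """Split a grayscale image (H x W array) into overlapping patches on a
    regular grid. Returns a dict mapping grid positions (row, col) to
    flattened patch vectors, which here double as the descriptor f(.)."""
    H, W = image.shape
    patches = {}
    for r, y in enumerate(range(0, H - patch_size + 1, stride)):
        for c, x in enumerate(range(0, W - patch_size + 1, stride)):
            patches[(r, c)] = image[y:y + patch_size, x:x + patch_size].ravel().astype(float)
    return patches

def search_candidates(probe_descriptor, grid_pos, training_sketches, R=2):
    """For one probe patch y_i at grid position (r, c), pick the closest patch
    (Euclidean distance between descriptors) from each of the M training
    sketches, searching only a small window of grid positions around (r, c).
    The R x R search region of the description is approximated here by a
    symmetric +/-R window."""
    r, c = grid_pos
    candidates = []
    for sketch_patches in training_sketches:      # one patch dict per training sketch
        best, best_dist = None, np.inf
        for dr in range(-R, R + 1):
            for dc in range(-R, R + 1):
                p = sketch_patches.get((r + dr, c + dc))
                if p is None:
                    continue
                d = np.linalg.norm(probe_descriptor - p)
                if d < best_dist:
                    best, best_dist = p, d
        candidates.append(best)                   # M candidates, one per training sketch
    return candidates
```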
The K nearest neighbors are selected from the related patches, and the sketch patch y_i can be regarded as a linear combination of its K nearest neighbors weighted by the column vector w_{y_i}. A Markov network model can then be built by jointly modeling all sketch patches and their neighborhoods:

$$p(w_{y_1},\ldots,w_{y_N},\,y_1,\ldots,y_N)=\prod_{i}\Phi\!\left(f(y_i),\,f(w_{y_i})\right)\prod_{(i,j)\in\Xi}\Psi\!\left(w_{y_i},\,w_{y_j}\right)\qquad(1)$$

where (i, j) ∈ Ξ indicates that the i-th and j-th sketch patches are neighbors.
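The neighbourhood set Ξ used in model (1) can, for example, be derived from the grid layout of the overlapping patches; the sketch below assumes row-major patch indexing and 4-connectivity, which the description does not prescribe.

```python
def build_neighbour_set(n_rows, n_cols):
    """Return the set Xi of index pairs (i, j) of spatially adjacent patches,
    assuming patches lie on an n_rows x n_cols grid and are indexed row-major;
    4-connectivity is used here as one plausible choice."""
    def idx(r, c):
        return r * n_cols + c
    xi = set()
    for r in range(n_rows):
        for c in range(n_cols):
            if c + 1 < n_cols:
                xi.add((idx(r, c), idx(r, c + 1)))   # horizontal neighbours
            if r + 1 < n_rows:
                xi.add((idx(r, c), idx(r + 1, c)))   # vertical neighbours
    return xi
```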
For the linear combination, f(w_{y_i}) denotes the linear combination of the feature descriptors of the K nearest neighbors, with the weights obtained as follows:

$$\min_{w}\; w^{T}Qw+w^{T}c\quad\text{s.t.}\quad\sum_{k=1}^{K}w_{y_i,k}=1,\;0\le w_{y_i,k}\le 1,\;\; i=1,2,\ldots,N,\;k=1,2,\ldots,K\qquad(2)$$

where w is the concatenation of the vectors w_{y_i}, and Q and c are the parameters of the quadratic objective. Problem (2) can be solved by a cascade decomposition method.
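As an illustration of problem (2), the sketch below solves the simplex-constrained quadratic program for a single weight vector using a generic SLSQP solver from SciPy; Q and c are taken as given inputs, and the cascade decomposition method mentioned above is not implemented here.

```python
import numpy as np
from scipy.optimize import minimize

def solve_simplex_qp(Q, c):
    """Solve  min_w  w^T Q w + w^T c  subject to  sum(w) = 1, 0 <= w <= 1,
    i.e. the constraint form shared by problems (2) and (3), for one weight
    vector. A generic SLSQP solver stands in for the cascade decomposition."""
    d = len(c)
    w0 = np.full(d, 1.0 / d)                          # start at the simplex centre
    objective = lambda w: w @ Q @ w + w @ c
    jac = lambda w: (Q + Q.T) @ w + c                 # gradient of the objective
    cons = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
    bounds = [(0.0, 1.0)] * d
    res = minimize(objective, w0, jac=jac, bounds=bounds,
                   constraints=cons, method="SLSQP")
    return res.x

# Small random demonstration (illustrative only).
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
Q_demo = A @ A.T                                      # positive semi-definite Q
c_demo = rng.normal(size=5)
w = solve_simplex_qp(Q_demo, c_demo)
print(w, w.sum())                                     # weights are non-negative and sum to 1
```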
Problem (2) can be reformulated as the optimization problem shown below, in which the weights are defined over all M candidates in the representation dataset:

$$\min_{w}\; w^{T}Qw+w^{T}c\quad\text{s.t.}\quad\sum_{m=1}^{M}w_{y_i,m}=1,\;0\le w_{y_i,m}\le 1,\;\; i=1,2,\ldots,N,\;m=1,2,\ldots,M\qquad(3)$$

The constraint in problem (3) on each w_{y_i} is identical to

$$\|w_{y_i}\|_1=1,\quad 0\le w_{y_i,m}\le 1\qquad(4)$$

which is a non-negative sparse regularization.
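The stated equivalence follows directly: because every weight is non-negative,

$$\|w_{y_i}\|_1=\sum_{m=1}^{M}\lvert w_{y_i,m}\rvert=\sum_{m=1}^{M}w_{y_i,m}=1,$$

so the simplex constraint of problem (3) is exactly the non-negative ℓ1 constraint (4), which is why it acts as a non-negative sparse regularization.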
For the discriminant analysis based on spatial partitioning, after obtaining the adaptive sparse graphical representations of the sketch image and the gallery photo, these representations are refined by discriminant analysis for face matching. The representations of all face-image patches can simply be concatenated, and classical subspace analysis, such as principal component analysis (PCA) and linear discriminant analysis (LDA), is then applied to extract discriminative information for matching. The discriminant analysis based on spatial partitioning includes three spatial partition strategies.
Fig. 2 shows the spatial partition strategies of a sparse graphical representation based discriminant analysis method for face recognition according to the present invention. In the column-based strategy, K_c columns of image patches are combined into one spatial partition; discriminant analysis is then carried out separately on each spatial partition, and the extracted features are merged for matching. To exploit the row-based spatial partition strategy, K_r rows of image patches likewise form one spatial partition and discriminant analysis is carried out on each region.
The experimental section discusses the influence of different K_c, K_r and K_l values, and the optimal K_c, K_r and K_l are used. In the discriminant analysis performed on each spatial partition, PCA is first applied and components accounting for 99% of the variance are retained; LDA is then applied to further reduce the dimensionality and improve discriminability. Finally, all projection vectors of the same face image are concatenated, and the similarity scores between the sketch image and the gallery photos are computed with the cosine similarity measure.
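A minimal sketch of this per-partition discriminant analysis is given below using scikit-learn; it assumes the concatenated patch representations are stacked row-wise into a feature matrix and that each spatial partition is described by a list of column indices. The helper names are illustrative and the training labels are assumed to be available.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics.pairwise import cosine_similarity

def fit_partition_projections(X_train, y_train, partitions):
    """Fit PCA (keeping 99% of the variance) followed by LDA on each spatial
    partition. X_train: (n_samples, n_features) stacked representations;
    partitions: list of column-index arrays, one per spatial partition."""
    models = []
    for cols in partitions:
        pca = PCA(n_components=0.99).fit(X_train[:, cols])   # retain 99% variance
        lda = LinearDiscriminantAnalysis().fit(pca.transform(X_train[:, cols]), y_train)
        models.append((pca, lda))
    return models

def project(X, partitions, models):
    """Project samples partition by partition and concatenate the projections."""
    parts = [lda.transform(pca.transform(X[:, cols]))
             for cols, (pca, lda) in zip(partitions, models)]
    return np.hstack(parts)

def match_scores(probe_vecs, gallery_vecs):
    """Cosine similarity between projected probe sketches and gallery photos."""
    return cosine_similarity(probe_vecs, gallery_vecs)
```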
Fig. 3 shows the discriminant analysis method for heterogeneous face recognition of a sparse graphical representation based discriminant analysis method for face recognition according to the present invention. First, the face image is divided into patches, and each image patch is represented with a common feature descriptor. Second, for a sketch image (or gallery photo), a Markov network model is built on the sketch patches (or gallery photo patches) of the representation dataset and the features of the probe sketch patches (or photo patches). The adaptive sparse graphical representation of the input image can then be generated by solving formula (3). Third, the row-based, column-based and learning-based spatial partition strategies are applied to optimize the adaptive sparse graphical representation and improve its discriminability. Finally, the similarity scores of the three resulting vectors are computed with the cosine similarity measure and then fused.
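The description does not specify the fusion rule for the three sets of cosine similarity scores; the sketch below uses a simple min-max normalised sum as one plausible score-level fusion.

```python
import numpy as np

def fuse_scores(score_matrices):
    """Fuse the cosine-similarity score matrices produced by the column-based,
    row-based and learning-based partition strategies. Each matrix has shape
    (n_probes, n_gallery); scores are min-max normalised per matrix and summed."""
    fused = np.zeros_like(np.asarray(score_matrices[0], dtype=float))
    for S in score_matrices:
        S = np.asarray(S, dtype=float)
        fused += (S - S.min()) / (S.max() - S.min() + 1e-12)  # min-max normalise
    return fused

def identify(fused_scores):
    """Return, for each probe sketch, the index of the best-matching gallery photo."""
    return np.argmax(fused_scores, axis=1)
```

Sum fusion after normalisation is only one option; any monotone score-level fusion would be consistent with the description above.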
It will be understood by those skilled in the art that the present invention is not limited to the details of the above embodiments and may be realized in other specific forms without departing from the spirit or scope of the invention. In addition, those skilled in the art may make various changes and modifications to the present invention without departing from its spirit and scope, and such improvements and modifications shall also be regarded as falling within the protection scope of the present invention. The appended claims are therefore intended to be construed as covering the preferred embodiments and all changes and variations that fall within the scope of the invention.

Claims (10)

1. A sparse graphical representation based discriminant analysis method for face recognition, characterized by mainly comprising: adaptive sparse graphical representation (1); discriminant analysis based on spatial partitioning (2); and a discriminant analysis method for heterogeneous face recognition (3).
2. The adaptive sparse graphical representation (1) according to claim 1, characterized in that a representation dataset composed of M face sketch-photo pairs is first constructed; each face image is divided into N overlapping patches, and each patch is represented by a feature descriptor; given a probe sketch t and a gallery photo g, they are likewise divided into patches and each patch is represented with the feature descriptor, in the same way as the representation dataset.
3. The feature descriptor according to claim 1, characterized in that y_i (i = 1, 2, ..., N) denotes a sketch patch and f(y_i) the feature descriptor corresponding to y_i; based on the Euclidean distance between feature descriptors, the patch closest to y_i is selected from each face sketch in the representation dataset within an R × R search region around the position of y_i; M related sketch patches can therefore be found for the sketch patch y_i, and related patches of a gallery photo patch x_i can be found in the same way.
4. The patches according to claim 3, characterized in that the K nearest neighbors are selected from the related patches, and the sketch patch y_i can be regarded as a linear combination of its K nearest neighbors weighted by the column vector w_{y_i}; a Markov network model can then be built by jointly modeling all sketch patches and their neighborhoods:

$$p(w_{y_1},\ldots,w_{y_N},\,y_1,\ldots,y_N)=\prod_{i}\Phi\!\left(f(y_i),\,f(w_{y_i})\right)\prod_{(i,j)\in\Xi}\Psi\!\left(w_{y_i},\,w_{y_j}\right)\qquad(1)$$

where (i, j) ∈ Ξ indicates that the i-th and j-th sketch patches are neighbors.
5. The linear combination according to claim 4, characterized in that f(w_{y_i}) denotes the linear combination of the feature descriptors of the K nearest neighbors, with the weights obtained as follows:

$$\min_{w}\; w^{T}Qw+w^{T}c\quad\text{s.t.}\quad\sum_{k=1}^{K}w_{y_i,k}=1,\;0\le w_{y_i,k}\le 1,\;\; i=1,2,\ldots,N,\;k=1,2,\ldots,K\qquad(2)$$

where w is the concatenation of the vectors w_{y_i}, and Q and c are the parameters of the quadratic objective; problem (2) can be solved by a cascade decomposition method.
6. The problem (2) according to claim 5, characterized in that it can be reformulated as the optimization problem shown below, in which the weights are defined over all M candidates in the representation dataset:

$$\min_{w}\; w^{T}Qw+w^{T}c\quad\text{s.t.}\quad\sum_{m=1}^{M}w_{y_i,m}=1,\;0\le w_{y_i,m}\le 1,\;\; i=1,2,\ldots,N,\;m=1,2,\ldots,M\qquad(3)$$

wherein the constraint in problem (3) on each w_{y_i} is identical to

$$\|w_{y_i}\|_1=1,\quad 0\le w_{y_i,m}\le 1\qquad(4)$$

which is a non-negative sparse regularization.
7. The discriminant analysis based on spatial partitioning (2) according to claim 1, characterized in that, after obtaining the adaptive sparse graphical representations of the sketch image and the gallery photo, these representations are refined by discriminant analysis for face matching; the representations of all face-image patches can simply be concatenated, and classical subspace analysis, such as principal component analysis (PCA) and linear discriminant analysis (LDA), is then applied to extract discriminative information for matching; the discriminant analysis based on spatial partitioning includes three spatial partition strategies.
8. The three spatial partition strategies according to claim 7, characterized in that, in the column-based strategy, K_c columns of image patches are combined into one spatial partition; discriminant analysis is then carried out separately on each spatial partition, and the extracted features are merged for matching; to exploit the row-based spatial partition strategy, K_r rows of image patches likewise form one spatial partition and discriminant analysis is carried out on each region.
9. The discriminant analysis according to claim 8, characterized in that the experimental section discusses the influence of different K_c, K_r and K_l values, and the optimal K_c, K_r and K_l are used; in the discriminant analysis performed on each spatial partition, PCA is first applied and components accounting for 99% of the variance are retained; LDA is then applied to further reduce the dimensionality and improve discriminability; finally, all projection vectors of the same face image are concatenated, and the similarity scores between the sketch image and the gallery photos are computed with the cosine similarity measure.
10. The discriminant analysis method for heterogeneous face recognition (3) according to claim 1, characterized in that, first, the face image is divided into patches and each image patch is represented with a common feature descriptor; second, for a sketch image (or gallery photo), a Markov network model is built on the sketch patches (or gallery photo patches) of the representation dataset and the features of the probe sketch patches (or photo patches); the adaptive sparse graphical representation of the input image can then be generated by solving formula (3); third, the row-based, column-based and learning-based spatial partition strategies are applied to optimize the adaptive sparse graphical representation and improve its discriminability; finally, the similarity scores of the three resulting vectors are computed with the cosine similarity measure and then fused.
CN201710953190.2A 2017-10-13 2017-10-13 A sparse graphical representation based discriminant analysis method for face recognition Withdrawn CN107729840A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710953190.2A CN107729840A (en) 2017-10-13 2017-10-13 A sparse graphical representation based discriminant analysis method for face recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710953190.2A CN107729840A (en) 2017-10-13 2017-10-13 A sparse graphical representation based discriminant analysis method for face recognition

Publications (1)

Publication Number Publication Date
CN107729840A 2018-02-23

Family

ID=61211244

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710953190.2A Withdrawn CN107729840A (en) A sparse graphical representation based discriminant analysis method for face recognition

Country Status (1)

Country Link
CN (1) CN107729840A (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103544507A (en) * 2013-10-15 2014-01-29 中国矿业大学 Method for reducing dimensions of hyper-spectral data on basis of pairwise constraint discriminate analysis and non-negative sparse divergence
CN104123741A (en) * 2014-06-24 2014-10-29 小米科技有限责任公司 Method and device for generating human face sketch

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Chunlei Peng: "Sparse Graphical Representation based Discriminant Analysis for Heterogeneous Face Recognition", arXiv:1607.00137v1 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110334238A (en) * 2019-03-27 2019-10-15 特斯联(北京)科技有限公司 A face recognition based missing person tracing method and system
CN110334238B (en) * 2019-03-27 2020-01-31 特斯联(北京)科技有限公司 Missing person tracing method and system based on face recognition


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20180223