CN106709502B - Multi-feature fusion recognition method based on voting method - Google Patents

Multi-feature fusion recognition method based on voting method

Info

Publication number
CN106709502B
CN106709502B (application CN201611024110.7A)
Authority
CN
China
Prior art keywords
voting
scoring
feature
sample
class
Prior art date
Legal status
Active
Application number
CN201611024110.7A
Other languages
Chinese (zh)
Other versions
CN106709502A (en)
Inventor
张健
罗卿
Current Assignee
Shenzhen Institute of Information Technology
Original Assignee
Shenzhen Institute of Information Technology
Priority date
Filing date
Publication date
Application filed by Shenzhen Institute of Information Technology
Priority to CN201611024110.7A
Publication of CN106709502A
Application granted
Publication of CN106709502B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G06F18/253: Fusion techniques of extracted features
    • G06F18/254: Fusion techniques of classification results, e.g. of results related to same input data

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Character Discrimination (AREA)

Abstract

The invention discloses a multi-feature fusion recognition method based on a voting method, which comprises the following steps: A. linear expression of features; B. voting and scoring; C. score classification. The invention can alleviate the low accuracy of single-sample face recognition: the more feature types take part in the vote, the higher the probability that a majority of them agree on the correct class, and thus the higher the recognition accuracy.

Description

Multi-feature fusion recognition method based on voting method
Technical Field
The invention relates to the technical field of feature recognition, in particular to a multi-feature fusion recognition method based on a voting method.
Background
The single-sample face recognition problem arises frequently in practice, so for face recognition technology to be applied in more settings, the single-sample problem must be addressed carefully. On the other hand, although storing only one training sample per person in the face database is a disadvantage for most face recognition techniques, it brings several advantages that are desirable in practical applications: (1) Samples are easy to collect, whether directly or indirectly. A common component of a face recognition system is the face database, and building a database that stores face images as "templates" is a very time-consuming and labor-intensive task; this burden is effectively relieved if only one image per person is required as a training sample. For the same reason, developing a face recognition system becomes easier, and in cases where direct image acquisition is very difficult, if not impossible, one training sample per person has unique advantages. Consider surveillance applications in public places such as airports and stations, where a large number of people must be identified and confirmed. In such cases, rather than photographing each person, the desired face database can be built effectively by scanning the photographs on documents such as passports, identification cards, student cards, and driver's licenses. (2) Storage cost is saved: the storage cost of the face recognition system is reduced when only one image per person needs to be stored in the database. (3) Computation cost is saved: the number of training samples per person directly affects the cost of operations such as preprocessing, feature extraction, and recognition, so the computational expense of large-scale applications is significantly reduced.
In summary, the single-sample problem is unavoidable in practical applications and also has unique advantages. Moreover, a deeper insight into this particular problem is significant both for face recognition and for solving other, more general small-sample problems. The single-sample face recognition problem therefore has strong academic and application value; it poses new challenges and new opportunities for the face recognition field, and solving it further expands the application range of face recognition technology and improves the related techniques.
Disclosure of Invention
The invention aims to provide a multi-feature fusion recognition method based on a voting method, so as to solve the problems in the background technology.
In order to achieve the above purpose, the present invention provides the following technical solution: a multi-feature fusion recognition method based on a voting method, comprising the following steps:
A. linear expression of the characteristics;
B. voting and scoring;
C. score classification.
Preferably, the feature linear expression in step A proceeds as follows: first, extract features from the single sample; suppose I features are extracted, where i denotes a particular feature extraction method, i ∈ I. Each feature of the picture to be tested is then expressed linearly over the L training samples, and the expression is written as:

y^i = Σ_{j=1}^{L} w_j^i · x_j^i

wherein y^i is the i-th feature of the picture to be tested; x_j^i is the i-th feature of training sample picture j; w_j^i is the expression weight of the j-th sample under the i-th feature.
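The patent does not specify how the expression weights are solved for; ordinary least squares is one common way to realize this kind of linear expression. A minimal NumPy sketch under that assumption (the function name expression_weights is hypothetical):

```python
import numpy as np

def expression_weights(train_feats, test_feat):
    """Expression weights for one feature type (step A).

    train_feats: (d, L) matrix whose columns hold the i-th feature of the
                 L training samples (one sample per class).
    test_feat:   (d,) vector, the i-th feature of the picture to be tested.

    Returns the weight vector w minimizing ||test_feat - train_feats @ w||^2,
    i.e. the probe feature is approximated as a linear combination of the
    training-sample features.
    """
    w, *_ = np.linalg.lstsq(train_feats, test_feat, rcond=None)
    return w
```

A larger w_j suggests that training sample j explains the probe feature better, which is exactly what the voting step that follows exploits.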
Preferably, the voting and scoring in step B proceed as follows: each sample class is voted on and scored separately; the scoring expression is:

v_j^i = ΔGr(w_j^i)

wherein v_j^i denotes the number of votes obtained by the j-th sample class under the i-th feature; ΔGr is the voting function, defined as the number of votes corresponding to the rank of w_j^i when the weights under the i-th feature are sorted in descending order; L is the highest number of votes, i.e. when w_j^i ranks first, the class obtains L votes, the second-ranked class obtains L − 1 votes, and the last-ranked class obtains 1 vote. The score sum of the j-th class of samples under all features is then:

S_j = Σ_{i ∈ I} v_j^i
Preferably, the classification in step C proceeds as follows: after the score sums under all features are obtained, the classes are sorted by score, and the class with the highest score is taken as the final classification result, i.e.

identity(y) = argmax_j S_j
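Steps B and C together amount to rank-based (Borda-style) vote aggregation followed by an argmax. A minimal sketch, assuming the weights for all features are stacked into one array (the array layout and function name are illustrative, not fixed by the patent):

```python
import numpy as np

def vote_and_classify(weights):
    """weights: (I, L) array; weights[i, j] is the expression weight w_j^i
    of class j under feature i.

    Under each feature the top-ranked class receives L votes, the second
    L - 1, ..., the last 1 (the voting function ΔGr). Scores are summed
    over all features and the class with the highest total wins (step C).
    """
    num_features, L = weights.shape
    scores = np.zeros(L)
    for w in weights:
        order = np.argsort(-w)              # class indices, best weight first
        votes = np.empty(L)
        votes[order] = np.arange(L, 0, -1)  # rank 0 -> L votes, ..., last -> 1
        scores += votes
    return int(np.argmax(scores))
```

For example, with weights [[0.9, 0.1, 0.5], [0.3, 0.8, 0.2]] class 0 collects 3 + 2 = 5 votes and is returned, even though class 1 wins the second feature outright.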
Compared with the prior art, the invention has the following beneficial effect: it can alleviate the low accuracy of single-sample face recognition because, in the presence of complex facial variation, the more feature types take part in the vote, the higher the probability that a majority of them agree on the correct class, and thus the higher the recognition accuracy.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The invention provides the following technical solution: a multi-feature fusion recognition method based on a voting method, comprising the following steps:
A. linear expression of the characteristics;
B. voting and scoring;
C. score classification.
In this embodiment, the feature linear expression in step A proceeds as follows: first, extract features from the single sample; suppose I features are extracted, where i denotes a particular feature extraction method, i ∈ I. Each feature of the picture to be tested is then expressed linearly over the L training samples, and the expression is written as:

y^i = Σ_{j=1}^{L} w_j^i · x_j^i

wherein y^i is the i-th feature of the picture to be tested; x_j^i is the i-th feature of training sample picture j; w_j^i is the expression weight of the j-th sample under the i-th feature.
In this embodiment, the voting and scoring in step B proceed as follows: each sample class is voted on and scored separately; the scoring expression is:

v_j^i = ΔGr(w_j^i)

wherein v_j^i denotes the number of votes obtained by the j-th sample class under the i-th feature; ΔGr is the voting function, defined as the number of votes corresponding to the rank of w_j^i when the weights under the i-th feature are sorted in descending order; L is the highest number of votes, i.e. when w_j^i ranks first, the class obtains L votes, the second-ranked class obtains L − 1 votes, and the last-ranked class obtains 1 vote. The score sum of the j-th class of samples under all features is then:

S_j = Σ_{i ∈ I} v_j^i
In this embodiment, the classification in step C proceeds as follows: after the score sums under all features are obtained, the classes are sorted by score, and the class with the highest score is taken as the final classification result, i.e.

identity(y) = argmax_j S_j
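The three steps of this embodiment can be tied together in a single sketch. The least-squares solver for the expression weights is an assumption (the patent does not fix a solver), and classify_sample is a hypothetical name:

```python
import numpy as np

def classify_sample(train_feats, test_feats):
    """train_feats: list with one (d_i, L) matrix per feature type,
                    whose columns are the L training samples (one per class).
    test_feats:  list with one (d_i,) vector per feature type for the probe.

    Returns the index of the winning class.
    """
    L = train_feats[0].shape[1]
    scores = np.zeros(L)
    for X, y in zip(train_feats, test_feats):
        w, *_ = np.linalg.lstsq(X, y, rcond=None)  # step A: expression weights
        order = np.argsort(-w)                     # step B: rank classes by weight
        votes = np.empty(L)
        votes[order] = np.arange(L, 0, -1)         # top rank gets L votes, last gets 1
        scores += votes
    return int(np.argmax(scores))                  # step C: highest total score wins
```

With features such as LBP, SIFT, PCA, and RR, the lists would hold four matrices and four vectors; since each feature type votes independently, a single noisy feature tends to be outvoted by the others.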
Experimental example:
The method was tested on three databases: AR, GT, and FP. In each database, one frontal face per person is selected as the training sample and the remaining faces serve as test samples. Four features are used: LBP, SIFT, PCA, and row correlation (RR).
Test data are as follows:
Error rate of classification recognition when voting with two features: [table not recoverable from the source]

Error rate of classification recognition when voting with three features: [table not recoverable from the source]

Error rate of classification recognition when voting with four features: [table not recoverable from the source]
The test results show that as the number of features increases, the recognition accuracy increases as well. The effect is most pronounced for faces with relatively complex variation, such as the GT database, where the recognition accuracy changes more than in the other databases. This indicates that the more feature types take part in the vote, the higher the probability that a majority of them agree on the correct class, and thus the higher the recognition accuracy.
The beneficial effects of the invention are as follows: it can alleviate the low accuracy of single-sample face recognition because, in the presence of complex facial variation, the more feature types take part in the vote, the higher the probability that a majority of them agree on the correct class, and thus the higher the recognition accuracy.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions, and alterations can be made to these embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the appended claims and their equivalents.

Claims (1)

1. A multi-feature fusion recognition method based on a voting method, characterized in that the method comprises the following steps:
A. linear expression of the characteristics;
B. voting and scoring;
C. scoring and classifying;
the feature linear expression in step A proceeds as follows: first, extract features from the single sample; suppose I features are extracted, where i denotes a particular feature extraction method, i ∈ I; each feature is expressed linearly, and the expression is written as:

y^i = Σ_{j=1}^{L} w_j^i · x_j^i

wherein y^i is the i-th feature of the picture to be tested; x_j^i is the i-th feature of training sample picture j; w_j^i is the expression weight of the j-th sample under the i-th feature;
the voting and scoring in step B proceed as follows: each sample class is voted on and scored separately; the scoring expression is:

v_j^i = ΔGr(w_j^i)

wherein v_j^i denotes the number of votes obtained by the j-th sample class under the i-th feature; ΔGr is the voting function, defined as the number of votes corresponding to the rank of w_j^i when the weights under the i-th feature are sorted in descending order; L is the highest number of votes, i.e. when w_j^i ranks first, the class obtains L votes, the second-ranked class obtains L − 1 votes, and the last-ranked class obtains 1 vote; the score sum of the j-th class of samples under all features is then obtained:

S_j = Σ_{i ∈ I} v_j^i;
the classification in step C proceeds as follows: after the score sums under all features are obtained, the classes are sorted by score, and the class with the highest score is taken as the final classification result, i.e.

identity(y) = argmax_j S_j.
CN201611024110.7A 2016-11-18 2016-11-18 Multi-feature fusion recognition method based on voting method Active CN106709502B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611024110.7A CN106709502B (en) 2016-11-18 2016-11-18 Multi-feature fusion recognition method based on voting method


Publications (2)

Publication Number Publication Date
CN106709502A CN106709502A (en) 2017-05-24
CN106709502B true CN106709502B (en) 2023-06-20

Family

ID=58941180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611024110.7A Active CN106709502B (en) 2016-11-18 2016-11-18 Multi-feature fusion recognition method based on voting method

Country Status (1)

Country Link
CN (1) CN106709502B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107729835B (en) * 2017-10-10 2020-10-16 浙江大学 Expression recognition method based on fusion of traditional features of face key point region and face global depth features
CN113436379B (en) * 2021-08-26 2021-11-26 深圳市永兴元科技股份有限公司 Intelligent voting method, device, equipment and storage medium

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
CN102324038B (en) * 2011-09-06 2014-04-16 北京林业大学 Plant species identification method based on digital image
JP5616310B2 (en) * 2011-09-27 2014-10-29 日本電信電話株式会社 Image matching apparatus and image matching program
CN103903004B (en) * 2012-12-28 2017-05-24 汉王科技股份有限公司 Method and device for fusing multiple feature weights for face recognition
CN103810274B (en) * 2014-02-12 2017-03-29 北京联合大学 Multi-characteristic image tag sorting method based on WordNet semantic similarities
US9852364B2 (en) * 2014-03-19 2017-12-26 Hulu, LLC Face track recognition with multi-sample multi-view weighting
CN104143088B (en) * 2014-07-25 2017-03-22 电子科技大学 Face identification method based on image retrieval and feature weight learning
CN105827571B (en) * 2015-01-06 2019-09-13 华为技术有限公司 Multi-modal biological characteristic authentication method and equipment based on UAF agreement
CN106097360A (en) * 2016-06-17 2016-11-09 中南大学 A kind of strip steel surface defect identification method and device



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant