CN105654056A - Human face identifying method and device - Google Patents
- Publication number
- CN105654056A CN105654056A CN201511025621.6A CN201511025621A CN105654056A CN 105654056 A CN105654056 A CN 105654056A CN 201511025621 A CN201511025621 A CN 201511025621A CN 105654056 A CN105654056 A CN 105654056A
- Authority
- CN
- China
- Prior art keywords
- face image
- probability
- features
- image pair
- likelihood ratio
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Abstract
The invention belongs to the technical field of human face identification and provides a human face identifying method and device. The method comprises the following steps: extracting high-dimensional features of a training sample and of a to-be-identified human face image pair, and carrying out segmentation and dimensionality reduction on the high-dimensional features; defining a latent factor model that expresses the features as a linear combination of an identity factor, an age factor, a mean value and noise; inputting the features of the training sample and optimizing the latent factor model via maximum likelihood estimation, wherein the objective function is the joint probability of the latent factors in logarithmic form; inputting the features of the human face image pair into the trained latent factor model and calculating a likelihood ratio, wherein the likelihood ratio is the ratio of a first probability (that the image pair shows the same person) to a second probability (that it does not); and expressing the similarity by the likelihood ratio, identifying the human face image pair as the same person if the similarity is higher than a preset threshold value. The method and device improve the accuracy of human face identification.
Description
Technical Field
The invention belongs to the technical field of face recognition, and particularly relates to a face recognition method and device.
Background
In many public places, the persons concerned must be authenticated for security reasons. Face recognition technology uses a camera to capture images or video, obtains a large amount of face data, and then analyzes this data to extract identity-related information.
In most application scenarios, face recognition is disturbed by various factors such as illumination, pose and occlusion; age is also a major source of interference. At present, cross-age face recognition mostly adopts feature-improvement methods, seeking feature representations that are more robust to age change, and then classifies with simple distance metrics.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for face recognition, so as to solve the problem that the accuracy of current cross-age face recognition is low.
In a first aspect, a method for face recognition is provided, including:
extracting high-dimensional features of the training sample and the face image pair to be recognized, and segmenting and reducing the dimensions of the high-dimensional features;
defining a latent factor model that represents features as a linear combination of identity factors, age factors, means, and noise;
inputting the features of the training sample, and optimizing the latent factor model through maximum likelihood estimation, wherein the objective function is the joint probability of the latent factors in logarithmic form;
inputting the characteristics of the human face image pair into the trained latent factor model, and calculating a likelihood ratio, wherein the likelihood ratio is a ratio of a first probability and a second probability, the first probability is the probability that the human face image pair is the same person, and the second probability is the probability that the human face image pair is not the same person;
and representing the similarity by using the likelihood ratio, and identifying the human face image pair as the same person if the similarity is higher than a preset threshold value.
In a second aspect, an apparatus for face recognition is provided, including:
the feature extraction unit is used for extracting high-dimensional features of the training sample and the face image pair to be recognized, and segmenting and reducing the dimensions of the high-dimensional features;
a defining unit for defining a latent factor model that expresses features as a linear combination of an identity factor, an age factor, a mean, and noise;
the training unit is used for inputting the features of the training sample and optimizing the latent factor model through maximum likelihood estimation, wherein the objective function is the joint probability of the latent factors in logarithmic form;
a calculating unit, configured to input features of the face image pair into the trained latent factor model, and calculate a likelihood ratio, where the likelihood ratio is a ratio of a first probability and a second probability, the first probability is a probability that the face image pair is the same person, and the second probability is a probability that the face image pair is not the same person;
and the recognition unit is used for representing the similarity by the likelihood ratio, and recognizing the human face image pair as the same person if the similarity is higher than a preset threshold value.
By establishing the latent factor model, the embodiment of the invention fully considers both the identity and the age influence factors in a face image. The model has stronger expressive power and can learn discriminative information and patterns that are stronger for identity authentication. A classification method that computes the posterior-probability likelihood ratio is derived through mathematical solution, which improves the accuracy of face recognition.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a flowchart of an implementation of a method for face recognition according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a detailed implementation of a face recognition method according to an embodiment of the present invention;
FIG. 3 is a flow chart of an implementation of face image preprocessing of a face recognition method according to an embodiment of the present invention;
fig. 4 is a block diagram of a face recognition apparatus according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
The embodiment of the invention takes face image information as its object, extracts features and establishes a latent factor model comprising an identity factor and an age factor; with identity-factor estimation as the core, cross-age face recognition is achieved.
Fig. 1 shows an implementation flow of a face recognition method provided by an embodiment of the present invention, which is detailed as follows:
in S101, extracting high-dimensional features of a training sample and a face image pair to be recognized, and segmenting and reducing the dimensions of the high-dimensional features;
in S102, defining a latent factor model, wherein the model expresses features as a linear combination of an identity factor, an age factor, a mean value and noise;
in S103, inputting the features of the training sample and optimizing the latent factor model through maximum likelihood estimation, wherein the objective function is the joint probability of the latent factors in logarithmic form;
in S104, inputting the features of the face image pair into the trained latent factor model, and calculating a likelihood ratio, where the likelihood ratio is a ratio of a first probability and a second probability, the first probability is a probability that the face image pair is the same person, and the second probability is a probability that the face image pair is not the same person;
in S105, representing the similarity by the likelihood ratio, and identifying the face image pair as the same person if the similarity is higher than a preset threshold value.
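The five steps can be wired together as a skeleton. Everything below, including the scoring rule inside `likelihood_ratio`, is a placeholder standing in for the components detailed in the rest of the description; it shows only the control flow, not the patent's actual method:

```python
import numpy as np

# Hypothetical stubs wiring S101-S105 together; the real feature extraction,
# trained model and likelihood ratio are described later in the patent.
def extract_features(image_pair):            # S101: high-dimensional features
    return [np.asarray(img, dtype=float).ravel() for img in image_pair]

def likelihood_ratio(f1, f2, model):         # S104: placeholder scoring rule
    return float(model["w"] @ (f1 * f2))

def verify(image_pair, model, threshold):    # S105: threshold decision
    f1, f2 = extract_features(image_pair)
    return likelihood_ratio(f1, f2, model) > threshold

model = {"w": np.ones(4)}                    # toy stand-in for a trained model
same = verify((np.eye(2), np.eye(2)), model, threshold=1.0)
```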
Next, based on the embodiment shown in fig. 1, the embodiment of the present invention will be further described in detail:
as shown in fig. 2:
in S201, the high-dimensional features of the training samples are decomposed into N segments, where N is an integer greater than 1.
In S202, a corresponding latent factor analysis model is established for each segment of the high-dimensional feature. The model expresses a feature t as a linear combination of an identity factor, an age factor, a mean value and noise, with the mathematical expression:

t = μ + Ux + Vy + ε

where μ is the mean, x is the identity factor, y is the age factor, and ε is noise. Under the given prior assumptions, the identity factor x and the age factor y are independent Gaussian variables: x and y obey the N(0, I) distribution, and ε obeys the N(0, σ²I) distribution;
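As an illustration of this generative assumption (the patent gives no code; the dimensions, loading matrices and variable names below are invented for the sketch), features for one person at two different ages can be simulated in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

d, p, q = 16, 4, 3           # feature, identity and age dimensions (illustrative)
mu = rng.normal(size=d)      # mean vector
U = rng.normal(size=(d, p))  # identity loading matrix
V = rng.normal(size=(d, q))  # age loading matrix
sigma = 0.1                  # noise standard deviation

def sample_feature(x, y):
    """Draw t = mu + U x + V y + eps with eps ~ N(0, sigma^2 I)."""
    eps = rng.normal(scale=sigma, size=d)
    return mu + U @ x + V @ y + eps

# Two images of the SAME person (shared identity factor x) at DIFFERENT ages:
x = rng.normal(size=p)                   # x ~ N(0, I), fixed per identity
t_young = sample_feature(x, rng.normal(size=q))
t_old = sample_feature(x, rng.normal(size=q))
```

The shared x and the fresh age factors make the pair correlated through the identity term only, which is exactly what the verification stage later exploits.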
in S203, model parameters are respectively optimized for the latent factor analysis model corresponding to each section of high-dimensional feature through maximum likelihood estimation, and the optimized objective function is a likelihood function of a hidden factor in a logarithmic form:
wherein,indicating that the input belongs to the ith individual, the kth age group,andrepresenting a corresponding identity factor and age factor, respectively.
The latent factor model is trained with the training set data so that it can serve as the classifier in the face recognition process. The latent factor model expresses a feature t as a linear combination of identity factor, age factor, mean and noise, with the mathematical expression:

t = μ + Ux + Vy + ε

where μ is the mean, x is the identity factor, y is the age factor, and ε is noise. Under the given prior assumptions, x and y are independent Gaussian variables obeying the N(0, I) distribution, and ε obeys the N(0, σ²I) distribution.

The hyperparameters of the latent factor model are θ = (μ, U, V, σ²). The log-likelihood of the latent factors is constructed as the objective function of the model:

L(θ) = Σ_{i,k} log p(t_ik, x_i, y_k)

where t_ik indicates an input belonging to the i-th individual at the k-th age group, and x_i and y_k respectively represent the corresponding identity factor and age factor. A series of mathematical derivations shows that, given estimates of x_i and y_k, closed-form updates of U, V and σ² can be obtained. In the training process, the objective function is therefore optimized with the expectation-maximization (EM) algorithm, and the optimal solution is obtained through sufficient iteration.
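The EM updates themselves are not written out in the patent. As a heavily simplified sketch: if the sharing of the identity factor across a person's age groups is ignored, each feature becomes an independent sample of an ordinary factor analysis model with stacked loading matrix F = [U V], whose EM step has the well-known closed form below. All sizes and names are illustrative:

```python
import numpy as np

def fa_em_step(T, mu, F, sigma2):
    """One EM iteration for t = mu + F z + eps, with z ~ N(0, I) and
    eps ~ N(0, sigma2 * I). Simplification: every row of T is treated as an
    independent sample, ignoring the tying of identity factors."""
    n, d = T.shape
    q = F.shape[1]
    Tc = T - mu
    # E-step: posterior covariance and posterior means of the latent factors.
    G = np.linalg.inv(np.eye(q) + F.T @ F / sigma2)
    Ez = Tc @ F @ G / sigma2          # (n, q); G is symmetric
    Ezz = n * G + Ez.T @ Ez           # accumulated E[z z^T]
    # M-step: closed-form parameter updates.
    F_new = (Tc.T @ Ez) @ np.linalg.inv(Ezz)
    sigma2_new = (np.sum(Tc * Tc) - np.sum((Tc @ F_new) * Ez)) / (n * d)
    return F_new, sigma2_new

rng = np.random.default_rng(1)
d, q, n = 10, 4, 500
F_true = rng.normal(size=(d, q))
T = rng.normal(size=(n, q)) @ F_true.T + 0.1 * rng.normal(size=(n, d))

F = rng.normal(size=(d, q))
sigma2 = 1.0
for _ in range(50):
    F, sigma2 = fa_em_step(T, np.zeros(d), F, sigma2)
```

A faithful implementation would keep x_i tied across all images of person i and y_k tied within age group k, which changes the E-step posteriors but not the overall EM structure.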
In the embodiment of the invention, the high-dimensional features are divided into several segments, and each segment corresponds to one latent factor model, which reduces the computational load brought by high-dimensional data. For example, the high-dimensional features are divided into 6 segments; each segment constructs one classifier, one latent factor model is constructed for each classifier, and the model parameters are optimized separately through maximum likelihood estimation to complete the training of the models.
In S204, high-dimensional feature extraction is performed on a face image pair, which includes a test face image and a face image sample.
In the embodiment of the invention, high-dimensional feature extraction is performed simultaneously on the tested face image to be recognized and on the face image sample, and both are then input into the classifier for recognition processing. The extracted features comprise high-dimensional local binary pattern (LBP) features and histogram of oriented gradients (HOG) features; by extracting the LBP and HOG features in a dense, multi-scale manner, a dense high-dimensional feature representation is obtained which contains rich local features and global features.
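As a rough sketch of one ingredient of this step, a basic single-scale 8-neighbour LBP code map and its normalised histogram can be computed in plain NumPy (the dense multi-scale extraction and the HOG channel are omitted here):

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 local binary pattern: each interior pixel is compared with
    its 8 neighbours; a bit is set when the neighbour is >= the centre."""
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy: img.shape[0] - 1 + dy, 1 + dx: img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes

def lbp_histogram(img, bins=256):
    codes = lbp_codes(img)
    hist, _ = np.histogram(codes, bins=bins, range=(0, 256))
    return hist / hist.sum()   # normalised 256-bin descriptor

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(32, 32)).astype(np.int32)
h = lbp_histogram(img)
```

In a dense multi-scale scheme, such histograms would be computed per cell, per scale, and concatenated with HOG descriptors into the high-dimensional feature vector.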
In S205, the extracted high-dimensional features are decomposed into N segments, and each of the decomposed high-dimensional features is input to a latent factor model corresponding to the pre-trained high-dimensional feature.
In S206, a likelihood ratio of the latent factor model corresponding to each section of the high-dimensional feature is calculated, where the likelihood ratio is a ratio of a first probability and a second probability, the first probability is a probability that the section of the high-dimensional feature of the face image pair is the same person, and the second probability is a probability that the section of the high-dimensional feature of the face image pair is not the same person.
In S207, the mean value of the likelihood ratios of the N potential factor models respectively corresponding to the N segments of high-dimensional features is calculated.
In S208, if the likelihood ratio mean is higher than a preset threshold, the test face image is identified as the face image sample.
After training, the latent factor model is obtained and applied in the face verification test stage. The mathematical form of the likelihood ratio is:

r(t1, t2) = log [ P(t1, t2 | Hsame) / P(t1, t2 | Hdiff) ]

where (t1, t2) is the pair of facial feature data to be verified, Hsame is the hypothesis that the pair belongs to the same person and Hdiff the hypothesis that it does not. Since the pair is jointly Gaussian under either hypothesis, carrying out the derivation yields the covariance terms:

Etot = UU^T + VV^T + σ²I

Eac = UU^T

where Etot is the total covariance of a single feature vector and Eac is the cross-covariance shared by two images of the same person (only the identity factor is common to both images; the age factors and noise are independent).
comparing the tested face image and the face image sample by a likelihood ratio, calculating the ratio of the probability of belonging to the same person to the probability of belonging to no person, and using the ratio as the similarity of the face image pair, wherein the greater the likelihood ratio is, the higher the possibility of belonging to the same person of the tested face image and the face image sample is, therefore, setting a preset threshold value, when the likelihood ratio is higher than the preset threshold value, indicating that the tested face image and the face image sample are the same person, otherwise, different persons. Therefore, given a trained latent factor model, the face recognition can be carried out by determining a threshold value through the likelihood ratio.
After the latent factor model is trained as above, in the face recognition process, once the extraction of the high-dimensional features of the face images is completed, the extracted features are divided into the corresponding segments, and each segment is input into the latent factor model pre-trained for that segment and evaluated. Taking the 6-segment division as an example, the likelihood ratios of the 6 latent factor models are calculated separately and averaged; the likelihood-ratio mean is used as the final similarity, and a confidence threshold is determined on the training set through an ROC curve.
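Assuming the Gaussian model above with zero mean and known loadings, a per-segment likelihood ratio can be evaluated directly from the joint densities of a feature pair. In this NumPy sketch (all dimensions invented), the two hypotheses differ only in whether the identity block Eac appears in the cross-covariance:

```python
import numpy as np

def gauss_logpdf(x, cov):
    """Log-density of a zero-mean Gaussian N(0, cov) evaluated at x."""
    k = x.size
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (k * np.log(2 * np.pi) + logdet + x @ np.linalg.solve(cov, x))

def likelihood_ratio(t1, t2, Etot, Eac):
    """log P(t1,t2 | same person) - log P(t1,t2 | different persons)."""
    pair = np.concatenate([t1, t2])
    zero = np.zeros_like(Eac)
    cov_same = np.block([[Etot, Eac], [Eac, Etot]])   # identity factor shared
    cov_diff = np.block([[Etot, zero], [zero, Etot]]) # nothing shared
    return gauss_logpdf(pair, cov_same) - gauss_logpdf(pair, cov_diff)

rng = np.random.default_rng(3)
d, p, q, sigma = 6, 3, 2, 0.3
U = rng.normal(size=(d, p))
V = rng.normal(size=(d, q))
Etot = U @ U.T + V @ V.T + sigma**2 * np.eye(d)
Eac = U @ U.T

def draw(x):
    # feature with given identity factor x, fresh age factor and noise (mu = 0)
    return U @ x + V @ rng.normal(size=q) + sigma * rng.normal(size=d)

def pair_ratio(same_person):
    x1 = rng.normal(size=p)
    x2 = x1 if same_person else rng.normal(size=p)
    return likelihood_ratio(draw(x1), draw(x2), Etot, Eac)

mean_same = np.mean([pair_ratio(True) for _ in range(200)])
mean_diff = np.mean([pair_ratio(False) for _ in range(200)])
```

The final similarity over N segments would then simply be the mean of the per-segment ratios, with the decision threshold chosen on a training set as the description states.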
As an embodiment of the present invention, before performing high-dimensional feature extraction, in order to achieve a higher face recognition accuracy, an input face image pair is preprocessed, and a sample face image and a test face image are aligned, where a preprocessing process is as shown in fig. 3:
in S301, face regions in the test face image and the face image sample are respectively located.
Depending on the application scenario, the face image can be acquired by various digital imaging systems such as webcams, cameras and video surveillance. After a face image is input, it is scanned by a face detection algorithm with a window of suitable size and step length until the face in the image is detected; the face region is then cropped and stored. In this embodiment, a usable face detection algorithm is, for example, the Adaboost face detector based on Haar-like features.
In S302, key feature points are detected in the face region.
In S303, according to the detected key feature points, an alignment operation is performed on the test face image and the face image sample.
After the preset key points are detected in the face region, the face images are aligned and calibrated through an affine transformation, so that the positions of the key feature points are basically consistent between the tested face image and the face image sample and the size and position of the face are basically fixed, which further improves the accuracy of face recognition.
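The alignment step can be sketched as a least-squares fit of a 2x3 affine transform mapping the detected key points onto fixed template positions. The template coordinates and detected point values below are hypothetical:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine A such that [x, y, 1] mapped by A ~= dst."""
    X = np.hstack([src, np.ones((len(src), 1))])   # (n, 3) design matrix
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)    # (3, 2) solution
    return A.T                                     # (2, 3) affine matrix

def apply_affine(A, pts):
    X = np.hstack([pts, np.ones((len(pts), 1))])
    return X @ A.T

# Hypothetical template positions for 5 key points (eyes, nose, mouth corners)
template = np.array([[30.0, 40], [70, 40], [50, 60], [35, 80], [65, 80]])
detected = np.array([[33.0, 45], [75, 42], [55, 63], [38, 86], [70, 84]])

A = fit_affine(detected, template)
aligned = apply_affine(A, detected)   # detected points warped toward template
```

In practice the same transform would be applied to the whole image (e.g. with a warp routine) rather than only to the landmark coordinates.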
Further, as an embodiment of the present invention, before performing S205, a dimensionality reduction process may be performed on the high-dimensional features through a Principal Component Analysis (PCA) and a Linear Discriminant Analysis (LDA) method, so as to further reduce the computational pressure in the face recognition process.
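A hedged sketch of this PCA-then-LDA reduction with scikit-learn (the sample counts, dimensions and labels are synthetic placeholders; note that LDA's output dimension is bounded by the number of classes minus one):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)
n, d, n_classes = 300, 50, 10
X = rng.normal(size=(n, d))          # stand-in for one high-dimensional segment
y = np.arange(n) % n_classes         # stand-in identity labels, all classes present

pca = PCA(n_components=20)           # unsupervised reduction first
Xp = pca.fit_transform(X)
lda = LinearDiscriminantAnalysis(n_components=n_classes - 1)
Xl = lda.fit_transform(Xp, y)        # supervised projection to <= classes-1 dims
```

Each feature segment would be reduced this way before being fed to its latent factor model, which keeps the covariance matrices in the model small.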
By establishing the latent factor model, the embodiment of the invention gives a prior hypothesis, treats the identity factor and the age factor as independently distributed latent factors, solves the posterior joint probability, and thereby obtains the identity verification result for the face. Through the latent factor model, the identity and age influence factors in the face image are fully considered; the model has stronger expressive power and can learn discriminative information and patterns that are stronger for identity authentication. By fusing local and global features of different scales, namely local binary pattern (LBP) features and histogram of oriented gradients (HOG) features, a richer and more effective feature representation is obtained. Based on the latent factor model, a classification method that computes the posterior-probability likelihood ratio is derived through mathematical solution, which improves the accuracy of face recognition.
In order to verify the feasibility and the accuracy of the face recognition method provided by the embodiment of the invention, the method is experimentally tested on an international published trans-age face database MORPH and compared with the traditional method:
MORPH Album 2 is the largest publicly available cross-age face database, containing 78,000 images of 20,000 individuals, each with face images at different ages. In the experiment, 10,000 individuals are randomly drawn as the training set and 10,000 as the test set, and for each person the 2 images with the largest age difference are selected.
The experimental test results are shown in Table 1. The accuracy of conventional face verification methods is 78%-84%, so the test accuracy of hidden factor analysis (HFA) is greatly improved. For the latent factor model, classification with the traditional HOG features and the cosine distance reaches an accuracy of 91.14%; with the HOG+LBP features proposed in this patent, 92.12%; with the traditional HOG features and the likelihood ratio proposed in this patent, 93.12%; and with both the proposed HOG+LBP features and the likelihood-ratio classification, the accuracy is improved to 94.23%.

TABLE 1

Method | Accuracy
HFA, HOG, cosine distance | 91.14%
HFA, HOG+LBP, cosine distance | 92.12%
HFA, HOG, likelihood ratio | 93.12%
HFA, HOG+LBP, likelihood ratio | 94.23%
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Corresponding to the method for face recognition described in the foregoing embodiment, fig. 4 shows a block diagram of a face recognition apparatus provided in the embodiment of the present invention, where the face recognition apparatus may be a software unit, a hardware unit, or a combination of software and hardware. For convenience of explanation, only the portions related to the present embodiment are shown.
Referring to fig. 4, the apparatus includes:
the feature extraction unit 41 is used for extracting high-dimensional features of the training sample and the face image pair to be recognized, and segmenting and reducing the dimensions of the high-dimensional features;
a defining unit 42 defining a latent factor model that expresses features as a linear combination of identity factors, age factors, means, and noise;
a training unit 43 for inputting the characteristics of the training samples, optimizing the latent factor model by maximum likelihood estimation, wherein the objective function is a hidden factor joint probability in a logarithmic form;
a calculating unit 44, configured to input features of the face image pair into the trained latent factor model, and calculate a likelihood ratio, where the likelihood ratio is a ratio of a first probability and a second probability, the first probability is a probability that the face image pair is the same person, and the second probability is a probability that the face image pair is not the same person;
and the recognition unit 45 represents the similarity by the likelihood ratio, and recognizes the face image pair as the same person if the similarity is higher than a preset threshold value.
Optionally, the apparatus further comprises:
the positioning unit is used for positioning a face area in the face image;
the detection unit detects key feature points in the face area;
and the alignment unit executes alignment operation on the face image according to the detected key feature points.
Optionally, the feature extraction unit 41 is specifically configured to:
and extracting local binary pattern features and gradient direction histogram features of the face image.
Optionally, the feature extraction unit 41 is specifically configured to:
decomposing the high-dimensional features into N segments, wherein N is an integer greater than 1;
the training unit 43 is specifically configured to:
optimizing one potential factor model for each segment of features of the training sample;
the calculation unit 44 is specifically configured to:
inputting each section of characteristics of the human face image pair into the trained corresponding segmented latent factor model, and calculating a likelihood ratio;
and calculating the average value of the N likelihood ratios as the likelihood ratio of the face image pair.
Optionally, the feature extraction unit 41 is specifically configured to:
and performing dimensionality reduction on each decomposed segment of features through principal component analysis and linear discriminant analysis.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, without such modifications or substitutions departing from the spirit and scope of the technical solutions of the embodiments of the present invention.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.
Claims (10)
1. A method of face recognition, comprising:
extracting high-dimensional features of the training samples and of the face image pair to be recognized, and segmenting the high-dimensional features and reducing their dimensionality;
defining a latent factor model that represents the features as a linear combination of an identity factor, an age factor, a mean, and noise;
inputting the features of the training samples and optimizing the latent factor model through maximum likelihood estimation, wherein the objective function is the joint probability of the latent factors in logarithmic form;
inputting the features of the face image pair into the trained latent factor model and calculating a likelihood ratio, wherein the likelihood ratio is the ratio of a first probability to a second probability, the first probability being the probability that the face image pair shows the same person and the second probability being the probability that it does not; and
characterizing the similarity by the likelihood ratio, and identifying the face image pair as the same person if the similarity is higher than a preset threshold.
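The pairwise scoring in claim 1 can be illustrated with a small numerical sketch. Under a Gaussian latent factor model of the form feature = mean + U·(identity factor) + V·(age factor) + noise, the first and second probabilities are joint Gaussian densities with and without a shared identity factor, and the likelihood ratio follows in closed form. All dimensions, matrices, and the sampling below are illustrative assumptions, not parameters from the disclosure:

```python
import numpy as np

rng = np.random.default_rng(0)
d, q = 4, 2                       # toy feature and latent dimensions
U = rng.normal(size=(d, q))       # identity subspace (hypothetical values)
V = rng.normal(size=(d, q))       # age subspace (hypothetical values)
s2 = 0.1                          # isotropic noise variance

A = U @ U.T                       # covariance induced by the identity factor
B = V @ V.T + s2 * np.eye(d)      # covariance of the age factor plus noise

def gauss_logpdf(z, cov):
    """Log density of a zero-mean multivariate Gaussian."""
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (len(z) * np.log(2 * np.pi) + logdet
                   + z @ np.linalg.solve(cov, z))

def log_likelihood_ratio(x1, x2):
    """log P(pair | same person) - log P(pair | different people)."""
    z = np.concatenate([x1, x2])
    cov_same = np.block([[A + B, A], [A, A + B]])      # shared identity factor
    cov_diff = np.block([[A + B, np.zeros((d, d))],
                         [np.zeros((d, d)), A + B]])   # independent identities
    return gauss_logpdf(z, cov_same) - gauss_logpdf(z, cov_diff)

def sample(h_id):
    """Draw one feature vector for identity h_id, with fresh age and noise."""
    return U @ h_id + V @ rng.normal(size=q) + rng.normal(scale=s2**0.5, size=d)

h = rng.normal(size=q)
score_same = log_likelihood_ratio(sample(h), sample(h))
score_diff = log_likelihood_ratio(sample(h), sample(rng.normal(size=q)))
```

In practice the score would be compared against the preset threshold of claim 1; a positive log likelihood ratio favors the same-person hypothesis.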
2. The method of claim 1, wherein, prior to the extracting of the high-dimensional features of the training samples and the face image pair to be recognized, the method further comprises:
positioning a face region in the face image;
detecting key feature points in the face region; and
performing an alignment operation on the face image according to the detected key feature points.
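The alignment operation of claim 2 is commonly realized by estimating a similarity transform that maps the detected key feature points onto a canonical template. The sketch below uses Umeyama's least-squares method with made-up keypoint coordinates, as one plausible realization rather than the disclosed procedure:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping src keypoints onto dst template keypoints (Umeyama's method)."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))       # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - s * R @ src_mean
    return s, R, t

# A canonical keypoint template (eyes, nose, mouth; toy coordinates) and
# "detected" points that are a known rotation/scale/shift of that template.
template = np.array([[30., 45.], [66., 45.], [48., 66.], [48., 84.]])
theta = 0.1
R0 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
detected = 1.2 * template @ R0.T + np.array([5.0, -3.0])

s, R, t = similarity_transform(detected, template)
aligned = s * detected @ R.T + t   # detected points warped onto the template
```

Because the synthetic "detected" points are an exact similarity transform of the template, the estimate recovers it and `aligned` matches the template; with real detections the fit is least-squares and the same transform would be applied to the whole image.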
3. The method of claim 1, wherein extracting the high-dimensional features of the training samples and the face image pair to be recognized comprises:
extracting local binary pattern (LBP) features and histogram of oriented gradients (HOG) features of the face image.
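As a rough illustration of the two descriptor families named in claim 3, the sketch below computes basic 8-neighbour LBP codes and a gradient orientation histogram (the building block of HOG) in plain NumPy. The exact sampling grids, radii, and block normalization of the disclosed features are not specified, so everything below is a simplified stand-in:

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour local binary pattern codes for interior pixels."""
    c = img[1:-1, 1:-1]
    # Neighbours in clockwise order starting at the top-left pixel.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    return code

def orientation_histogram(img, bins=8):
    """Magnitude-weighted histogram of gradient directions (HOG's core)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi          # unsigned orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-12)        # L1-normalised

patch = (np.arange(36).reshape(6, 6) % 7)     # toy 6x6 "image" patch
feat = np.concatenate([np.bincount(lbp_codes(patch).ravel(), minlength=256),
                       orientation_histogram(patch)])
```

Concatenating such descriptors over dense grids of patches is what makes the combined feature high-dimensional and motivates the segmentation and dimensionality reduction of claims 4 and 5.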
4. The method of claim 1, wherein segmenting the high-dimensional features comprises:
decomposing the high-dimensional features into N segments, wherein N is an integer greater than 1;
wherein inputting the features of the training samples and optimizing the latent factor model through maximum likelihood estimation comprises:
optimizing one latent factor model for each feature segment of the training samples;
and wherein inputting the features of the face image pair into the trained latent factor model and calculating the likelihood ratio comprises:
inputting each feature segment of the face image pair into the corresponding trained segment-wise latent factor model and calculating a likelihood ratio; and
calculating the average of the N likelihood ratios as the likelihood ratio of the face image pair.
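The segment-wise fusion of claim 4 reduces to splitting the feature vector into N segments, scoring each with its own model, and averaging. In this sketch the per-segment scorers are hypothetical stand-ins (negative squared distance) for the trained segment-wise latent factor models:

```python
import numpy as np

def split_features(x, n_segments):
    """Decompose a high-dimensional feature vector into N contiguous segments."""
    return np.array_split(x, n_segments)

def fused_log_lr(scorers, x_a, x_b, n_segments):
    """Score a face image pair: run each feature segment through its own
    segment model and average the N (log) likelihood ratios."""
    segs_a = split_features(x_a, n_segments)
    segs_b = split_features(x_b, n_segments)
    scores = [f(a, b) for f, a, b in zip(scorers, segs_a, segs_b)]
    return float(np.mean(scores))

# Stand-in scorers: negative squared distance plays the role of each trained
# segment model's log likelihood ratio (purely illustrative).
n = 4
scorers = [lambda a, b: -float(np.sum((a - b) ** 2))] * n
same_score = fused_log_lr(scorers, np.ones(20), np.ones(20), n)
diff_score = fused_log_lr(scorers, np.ones(20), np.zeros(20), n)
```

Identical features score 0.0 under this stand-in, and increasingly dissimilar segments drive the averaged score down, mirroring how the averaged likelihood ratio is compared against the preset threshold.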
5. The method of claim 1, wherein segmenting the high-dimensional features and reducing their dimensionality comprises:
performing dimensionality reduction on each decomposed feature segment through principal component analysis and linear discriminant analysis.
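Claim 5's two-stage reduction can be sketched in plain NumPy: PCA via SVD of the centered feature matrix, followed by multi-class LDA as the top eigenvectors of Sw^-1 Sb. The shapes and synthetic data are illustrative, and library implementations (for example scikit-learn's PCA and LinearDiscriminantAnalysis) would normally be used instead:

```python
import numpy as np

def pca_reduce(X, k):
    """Project samples onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def lda_directions(X, y, k):
    """Top-k linear discriminant directions (eigenvectors of Sw^-1 Sb)."""
    mean = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))   # within-class scatter
    Sb = np.zeros_like(Sw)                    # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * diff @ diff.T
    Sw_reg = Sw + 1e-6 * np.eye(Sw.shape[0])  # small ridge for invertibility
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw_reg, Sb))
    order = np.argsort(-vals.real)
    return vecs[:, order[:k]].real

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 10))     # one feature segment, 60 training samples
y = np.repeat(np.arange(3), 20)   # 3 identities, 20 samples each
Xp = pca_reduce(X, 6)             # PCA: 10 -> 6 dimensions
W = lda_directions(Xp, y, 2)      # LDA: 6 -> 2 discriminative dimensions
Xr = Xp @ W
```

PCA first decorrelates and compresses each segment so that the within-class scatter matrix in the LDA step is well conditioned, which is the usual reason for chaining the two.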
6. An apparatus for face recognition, comprising:
a feature extraction unit, configured to extract high-dimensional features of the training samples and of the face image pair to be recognized, and to segment the high-dimensional features and reduce their dimensionality;
a defining unit, configured to define a latent factor model that expresses the features as a linear combination of an identity factor, an age factor, a mean, and noise;
a training unit, configured to input the features of the training samples and optimize the latent factor model through maximum likelihood estimation, wherein the objective function is the joint probability of the latent factors in logarithmic form;
a calculating unit, configured to input the features of the face image pair into the trained latent factor model and calculate a likelihood ratio, wherein the likelihood ratio is the ratio of a first probability to a second probability, the first probability being the probability that the face image pair shows the same person and the second probability being the probability that it does not; and
a recognition unit, configured to characterize the similarity by the likelihood ratio and to identify the face image pair as the same person if the similarity is higher than a preset threshold.
7. The apparatus of claim 6, further comprising:
a positioning unit, configured to position a face region in the face image;
a detection unit, configured to detect key feature points in the face region; and
an alignment unit, configured to perform an alignment operation on the face image according to the detected key feature points.
8. The apparatus of claim 6, wherein the feature extraction unit is specifically configured to:
extract local binary pattern (LBP) features and histogram of oriented gradients (HOG) features of the face image.
9. The apparatus of claim 6, wherein the feature extraction unit is specifically configured to:
decompose the high-dimensional features into N segments, wherein N is an integer greater than 1;
the training unit is specifically configured to:
optimize one latent factor model for each feature segment of the training samples;
and the calculating unit is specifically configured to:
input each feature segment of the face image pair into the corresponding trained segment-wise latent factor model and calculate a likelihood ratio; and
calculate the average of the N likelihood ratios as the likelihood ratio of the face image pair.
10. The apparatus of claim 6, wherein the feature extraction unit is specifically configured to:
perform dimensionality reduction on each decomposed feature segment through principal component analysis and linear discriminant analysis.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201511025621.6A CN105654056A (en) | 2015-12-31 | 2015-12-31 | Human face identifying method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105654056A true CN105654056A (en) | 2016-06-08 |
Family
ID=56490918
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201511025621.6A Pending CN105654056A (en) | 2015-12-31 | 2015-12-31 | Human face identifying method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105654056A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070098225A1 (en) * | 2005-10-28 | 2007-05-03 | Piccionelli Gregory A | Age verification method for website access |
CN1885310A (en) * | 2006-06-01 | 2006-12-27 | 北京中星微电子有限公司 | Human face model training module and method, human face real-time certification system and method |
CN104680119A (en) * | 2013-11-29 | 2015-06-03 | 华为技术有限公司 | Image identity recognition method, related device and identity recognition system |
Non-Patent Citations (1)
Title |
---|
DIHONG GONG et al.: "Hidden Factor Analysis for Age Invariant Face Recognition", IEEE *
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108319938B (en) * | 2017-12-31 | 2022-05-17 | 奥瞳系统科技有限公司 | High-quality training data preparation system for high-performance face recognition system |
CN108319938A (en) * | 2017-12-31 | 2018-07-24 | 奥瞳系统科技有限公司 | High quality training data preparation system for high-performance face identification system |
CN108597074A (en) * | 2018-04-12 | 2018-09-28 | 广东汇泰龙科技有限公司 | A kind of door opening method and system based on face registration Algorithm and face lock |
CN109376741A (en) * | 2018-09-10 | 2019-02-22 | 平安科技(深圳)有限公司 | Recognition methods, device, computer equipment and the storage medium of trademark infringement |
CN109816200A (en) * | 2018-12-17 | 2019-05-28 | 平安国际融资租赁有限公司 | Task method for pushing, device, computer equipment and storage medium |
CN109816200B (en) * | 2018-12-17 | 2023-11-28 | 平安国际融资租赁有限公司 | Task pushing method, device, computer equipment and storage medium |
CN110287761A (en) * | 2019-03-28 | 2019-09-27 | 电子科技大学 | A kind of face age estimation method analyzed based on convolutional neural networks and hidden variable |
CN110689087A (en) * | 2019-10-10 | 2020-01-14 | 西南石油大学 | Image sample generation method based on probability likelihood |
CN110689087B (en) * | 2019-10-10 | 2023-04-18 | 西南石油大学 | Image sample generation method based on probability likelihood |
CN111062995A (en) * | 2019-11-28 | 2020-04-24 | 重庆中星微人工智能芯片技术有限公司 | Method and device for generating face image, electronic equipment and computer readable medium |
CN111062995B (en) * | 2019-11-28 | 2024-02-23 | 重庆中星微人工智能芯片技术有限公司 | Method, apparatus, electronic device and computer readable medium for generating face image |
CN111723229A (en) * | 2020-06-24 | 2020-09-29 | 重庆紫光华山智安科技有限公司 | Data comparison method and device, computer readable storage medium and electronic equipment |
CN111723229B (en) * | 2020-06-24 | 2023-05-30 | 重庆紫光华山智安科技有限公司 | Data comparison method, device, computer readable storage medium and electronic equipment |
CN111738194A (en) * | 2020-06-29 | 2020-10-02 | 深圳力维智联技术有限公司 | Evaluation method and device for similarity of face images |
CN111738194B (en) * | 2020-06-29 | 2024-02-02 | 深圳力维智联技术有限公司 | Method and device for evaluating similarity of face images |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105654056A (en) | Human face identifying method and device | |
Yuan et al. | Fingerprint liveness detection using an improved CNN with image scale equalization | |
KR20230021043A (en) | Method and apparatus for recognizing object, and method and apparatus for learning recognizer | |
Cahyono et al. | Face recognition system using facenet algorithm for employee presence | |
EP3229171A1 (en) | Method and device for determining identity identifier of human face in human face image, and terminal | |
Gao et al. | Reconstruction based finger-knuckle-print verification with score level adaptive binary fusion | |
CN108564040B (en) | Fingerprint activity detection method based on deep convolution characteristics | |
CN110705428B (en) | Facial age recognition system and method based on impulse neural network | |
CN111104852B (en) | Face recognition technology based on heuristic Gaussian cloud transformation | |
CN111507320A (en) | Detection method, device, equipment and storage medium for kitchen violation behaviors | |
US9378406B2 (en) | System for estimating gender from fingerprints | |
Stokkenes et al. | Multi-biometric template protection—A security analysis of binarized statistical features for bloom filters on smartphones | |
Haji et al. | Real time face recognition system (RTFRS) | |
CN113255557A (en) | Video crowd emotion analysis method and system based on deep learning | |
Muthusamy et al. | Trilateral Filterative Hermitian feature transformed deep perceptive fuzzy neural network for finger vein verification | |
Villariña et al. | Palm vein recognition system using directional coding and back-propagation neural network | |
Alsawwaf et al. | In your face: person identification through ratios and distances between facial features | |
CN117237757A (en) | Face recognition model training method and device, electronic equipment and medium | |
Pratiwi et al. | Identity recognition with palm vein feature using local binary pattern rotation Invariant | |
Bansal et al. | Multimodal biometrics by fusion for security using genetic algorithm | |
CN110490149A (en) | A kind of face identification method and device based on svm classifier | |
CN110956098B (en) | Image processing method and related equipment | |
CN113869510A (en) | Network training, unlocking and object tracking method, device, equipment and storage medium | |
CN113657197A (en) | Image recognition method, training method of image recognition model and related device | |
CN113505716A (en) | Training method of vein recognition model, and recognition method and device of vein image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20160608 |