CN105740838A - Recognition method for face images of different scales - Google Patents

Recognition method for face images of different scales

Info

Publication number
CN105740838A
CN105740838A (application CN201610083936.4A)
Authority
CN
China
Prior art keywords: image, face, face image, matrix, DCT
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610083936.4A
Other languages
Chinese (zh)
Inventor
Zhang Xin (张欣)
Liu Hai (刘海)
Yu Hong (于红)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei University
Original Assignee
Hebei University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei University
Priority to CN201610083936.4A
Publication of CN105740838A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/169 - Holistic features and representations, i.e. based on the facial image taken as a whole
    • G06V40/161 - Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention provides a recognition method for face images of different scales. The method comprises the following steps: 1, perform a discrete cosine transform (DCT) on the face images so that they share the same DCT coefficients: form a sample set from the unknown face image to be recognized and the set of known face images used for matching, apply the DCT to each two-dimensional grayscale face image in the sample set, and retain a 56 × 46 block of components from the low-frequency part of the two-dimensional DCT coefficient matrix of the transform result; 2, perform principal component analysis (PCA) on the transformed sample set so that the dimensionality of the face-image eigenvalue matrix is reduced to 20; and 3, compute the feature matching value between each pair of face images with a normalized correlation coefficient, and select the known face image with the largest feature matching value as the match for the unknown face image. The method eliminates the influence that scale differences between face images of different sizes would otherwise have on the recognition result, thereby improving recognition accuracy.

Description

Recognition method for face images of different scales
Technical Field
The invention relates to a method for identifying face images, in particular to a method for identifying face images with different scales.
Background
Face recognition is an important branch of biometric recognition and has broad application prospects in public safety, real-time monitoring, identity authentication, human-computer interaction and other fields. Applications already in practical use include access-control authentication in sensitive areas, face-based attendance, and tracking of target persons in crowded areas.
The main face recognition methods include template matching, neural networks, hidden-Markov-model-based methods, the AdaBoost face recognition algorithm, geometric-feature-based methods, algebraic-feature-based methods, and so on. These methods are usually built on a standard face library in which all digital face images have the same scale, so they require the person being recognized to cooperate closely in order to achieve the designed effect. In practical applications, however, the captured face images differ in size from one another, whether they come from video or from photographs.
When the scale of the face image to be recognized differs from that of the known face images, the usual remedy is ordinary image scaling. Shrinking an image, i.e. down-sampling, discards some of its pixels, while the interpolation algorithms commonly used to enlarge an image add redundant pixels. Scaling face images of different sizes by these common methods therefore inevitably degrades image quality and, in turn, the accuracy of face recognition.
Disclosure of Invention
The invention provides a recognition method for face images of different scales, and aims to remove the influence that scale differences between face images have on the recognition result during recognition, thereby improving recognition accuracy.
The invention is realized by the following steps:
The method for identifying face images of different scales comprises the following steps:
a. Perform a discrete cosine transform (DCT) on the face images and obtain the same DCT coefficients: form a sample set from the unknown face image to be recognized together with the set of known face images used for matching, apply the DCT to each two-dimensional grayscale face image in the sample set, and retain a 56 × 46 block of components from the low-frequency part of the two-dimensional DCT coefficient matrix of the transform result.
b. Perform principal component analysis (PCA) on the transformed sample set to obtain a lower feature dimension: construct a feature subspace from the 20 largest eigenvalues, and project the sample set transformed in step a into this subspace to obtain the eigenvalues of each face image.
c. Compute the feature matching value between face images with the normalized correlation coefficient: compute the correlation coefficients between the eigenvalues of the unknown face image and those of each known face image and normalize them to obtain a set of feature matching values; select the known face image with the largest feature matching value as the one matching the unknown face image.
In the identification method, in step a, for a digital grayscale face image of scale M × N, the two-dimensional discrete cosine transform formula is:
$$F(u,v)=\alpha(u)\,\alpha(v)\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x,y)\cos\frac{\pi(2x+1)u}{2M}\,\cos\frac{\pi(2y+1)v}{2N} \qquad (1)$$

M and N are the numbers of rows and columns of the two-dimensional pixel matrix of the grayscale face image; x and y are the two-dimensional spatial coordinates of the image; f(x, y) is the image element value at (x, y); and the normalization factors are

$$\alpha(u)=\begin{cases}\sqrt{1/M}, & u=0\\ \sqrt{2/M}, & u=1,2,\dots,M-1,\end{cases}\qquad \alpha(v)=\begin{cases}\sqrt{1/N}, & v=0\\ \sqrt{2/N}, & v=1,2,\dots,N-1.\end{cases}$$

After the DCT the low-frequency components of the image are concentrated in the upper-left corner and the high-frequency components are distributed in the lower-right corner. The low-frequency components contain the main information of the original image, while the information contained in the high-frequency components is less important; the high-frequency part is therefore cut off and the 56 × 46 DCT coefficients of the low-frequency part are retained, so that face images of different scales acquire the same scale.
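As a concrete illustration of step a, the following is a minimal Python sketch (an assumption of this description, not part of the patent): SciPy's orthonormal 2-D DCT (`scipy.fft.dctn` with `norm='ortho'`) applies exactly the α(u) and α(v) factors of formula (1), and slicing the top-left block keeps the 56 × 46 low-frequency coefficients, so images of any size map to features of one fixed size. All function and variable names are illustrative.

```python
import numpy as np
from scipy.fft import dctn

def dct_features(gray_image, keep_rows=56, keep_cols=46):
    """2-D DCT per formula (1) (norm='ortho' supplies the alpha factors),
    keeping only the top-left low-frequency keep_rows x keep_cols block."""
    coeffs = dctn(gray_image.astype(np.float64), norm='ortho')
    return coeffs[:keep_rows, :keep_cols]

# Face images of different scales yield feature blocks of identical shape:
small = np.random.rand(112, 92)    # e.g. an ORL-sized image
large = np.random.rand(400, 300)   # e.g. a larger captured image
assert dct_features(small).shape == dct_features(large).shape == (56, 46)
```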
In the method for identifying face images of different scales, in step b the dimensionality of the DCT-transformed face images is reduced by PCA.
The sample set of face images in vector form is $X=[X_1, X_2, \dots, X_m]$, where $X_i$ ($i=1,2,\dots,m$) denotes the i-th face image unrolled into an $n \times 1$ column vector, n is the image dimension and m is the total number of images in the sample set. The average face vector is $\bar{X}=\frac{1}{m}\sum_{i=1}^{m}X_i$, and subtracting the average face from each face image gives $d_i = X_i - \bar{X}$, forming the new matrix $A=[d_1, d_2, \dots, d_m]$. The covariance matrix is constructed as $C=\frac{1}{m}AA^{T}$. The eigenvalues and eigenvectors of the covariance matrix C are computed, and the eigenvectors corresponding to the top k = 20 eigenvalues are assembled into a feature subspace. The matrix corresponding to this subspace is the final eigenvalue matrix of the face image sample set.
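The eigenface-style PCA above can be sketched in a few lines of NumPy; this is a minimal illustration under the stated construction (covariance of mean-centred columns, top k = 20 eigenvectors), with all names illustrative. In practice the smaller $A^{T}A$ trick is often used when m is much less than n, but the direct form mirrors the text.

```python
import numpy as np

def pca_subspace(samples, k=20):
    """samples: n x m matrix, one flattened DCT feature vector per column.
    Returns the average face and the n x k matrix of top-k eigenvectors."""
    mean_face = samples.mean(axis=1, keepdims=True)   # average face vector
    A = samples - mean_face                           # mean-centred matrix
    C = (A @ A.T) / samples.shape[1]                  # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)              # ascending eigenvalues
    W = eigvecs[:, -k:]                               # top-k eigenvectors
    return mean_face, W

def project(samples, mean_face, W):
    """Project samples into the feature subspace -> k x m eigenvalue matrix."""
    return W.T @ (samples - mean_face)
```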
In the identification method for face images of different scales, in step c the feature matching value of two face images is calculated according to formula (2), establishing a face image matching technique based on the normalized correlation coefficient:

$$c(x,y)=\frac{\sum_{s,t}\omega(s,t)\,f(x+s,\,y+t)}{\Big[\sum_{s,t}\omega(s,t)^{2}\Big]^{1/2}\Big[\sum_{s,t}f(x+s,\,y+t)^{2}\Big]^{1/2}} \qquad (2)$$

where f(x, y) is the image eigenvalue matrix, with x and y the row and column indices of its two-dimensional matrix, and ω(s, t) is a sub-image eigenvalue matrix of size B × A, with s and t the row and column indices of its two-dimensional matrix.
The method for identifying face images of different scales thus first applies the DCT to the face images of different scales, then extracts 56 × 46 DCT coefficients from the low-frequency part of the two-dimensional matrix of the transform result, then applies PCA to the transformed sample set to reduce the dimension of the face-image eigenvalue matrix to 20, and finally matches the face image to be identified against the known face images using the normalized correlation coefficient.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention.
Fig. 2 shows some of the face images in the ORL face library.
In fig. 3, a is a face image, and b is the result of DCT transformation on a.
Fig. 4 shows experimentally acquired face images of different scales.
Fig. 5 shows the face parts detected from the face image in fig. 4.
Detailed Description
The invention was implemented on an ASUS notebook with an Intel Core i5-3230M CPU at 2.6 GHz, an NVIDIA GeForce 720M graphics card and 4 GB of memory, running Windows 7; the software was written in Matlab 6.5.
The invention is described with reference to the accompanying drawings:
In fig. 1, step S1 obtains the face image through face-region detection; step S2 applies the DCT to the face images so that face images of different scales share the same DCT coefficients; step S3 applies PCA to the transform result to extract the main features of the images and reduce the dimensionality; step S4 performs matching identification by computing matching values between images.
In step S1, a face image is acquired.
The experimental samples comprise images from the international ORL (Olivetti Research Laboratory) face database and face images collected for the experiment. The ORL sample consists of 40 subjects with 10 images each, 400 face images in total; some of them are shown in fig. 2. The experimentally collected grayscale face images of different scales number 200; some of them are shown in fig. 4.
Introduction of experimental samples:
ORL face library: the ORL face library consists of face images taken by the Olivetti laboratory in Cambridge, England, between April 1992 and April 1994, covering 40 subjects of different ages, genders and ethnicities. Each subject has 10 images, for 400 grayscale images in total; the image size is 112 × 92, the background is black, and there are 256 gray levels. Facial expression and facial details vary (smiling or not, eyes open or closed, with or without glasses), the facial pose varies (in-depth and in-plane rotation of up to 20 degrees), and the size of the face varies by up to 10 percent. This library is currently the most widely used standard database, and a large number of comparison results exist for it.
Actually collected face image data: under uniform illumination and against a white background, face images of 20 persons were taken from multiple angles with the rear 8-megapixel camera of an iPhone 4s. A skin-color detection method was applied to the experimentally acquired images to detect the face region, yielding sub-images in which the face occupies at least 50% of the whole image; these sub-images were converted to grayscale and used as the experimental face images, 10 images per person and 200 face images in total, some of which are shown in fig. 5.
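The patent does not specify which skin-color rule it uses. As an illustration only, the sketch below applies a widely cited fixed Cb/Cr threshold (77 ≤ Cb ≤ 127, 133 ≤ Cr ≤ 173, an assumption rather than the patent's rule) and crops the bounding box of skin pixels as the grayscale face sub-image; all names are hypothetical.

```python
import numpy as np

def skin_face_crop(rgb):
    """Hypothetical skin-colour face detector: threshold Cb/Cr in YCbCr
    space and crop the bounding box of skin pixels as a grayscale image."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b      # RGB -> Cb
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b      # RGB -> Cr
    mask = (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        raise ValueError("no skin-coloured pixels found")
    gray = 0.299 * r + 0.587 * g + 0.114 * b              # grayscale
    return gray[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```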
Step S2, DCT transform. The detected images containing the face part are DCT-transformed and the same coefficients are extracted.
DCT is a commonly used image-data compression method. For a digital image of size M × N, its two-dimensional DCT transform formula is:
$$F(u,v)=\alpha(u)\,\alpha(v)\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x,y)\cos\frac{\pi(2x+1)u}{2M}\,\cos\frac{\pi(2y+1)v}{2N} \qquad (1)$$

M and N are the numbers of rows and columns of the two-dimensional pixel-value matrix of the grayscale face image; x and y are the two-dimensional spatial coordinates of the image; f(x, y) is the image element value at (x, y); α(u) and α(v) are the normalization factors given with formula (1) above. After the image is DCT-transformed, its low-frequency components are concentrated in the upper-left corner and its high-frequency components are distributed in the lower-right corner, as shown in b of fig. 3. The low-frequency components contain the main information of the original image, while the information contained in the high-frequency components is less important.
The 600 grayscale face images of 60 persons, collected from the ORL face library and from the experiment, are mixed at random to form the sample set. From the sample set, 300 face images, 5 of each person, are chosen at random as the unknown face images to be identified; the remaining 300 form the known face image set. Each two-dimensional face image in the sample set is DCT-transformed according to formula (1); the high-frequency part is omitted and X × Y DCT coefficients of the low-frequency part are kept, so that face images of different scales acquire the same scale.
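A hedged sketch of this experimental setup follows, reusing the `dct_features` helper from the earlier sketch; the random stand-in images and the split logic are illustrative assumptions (the real loading of ORL and captured images is omitted).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the 600 mixed-scale grayscale images; random sizes
# emulate the differing scales of the ORL and captured samples.
images = [rng.random((int(rng.integers(100, 400)), int(rng.integers(90, 300))))
          for _ in range(600)]

# The DCT step gives every image the same 56 * 46 = 2576-dim feature vector.
features = np.stack([dct_features(im).ravel() for im in images], axis=1)

# Random 300/300 split into unknown (probe) and known (gallery) images.
perm = rng.permutation(600)
probe_set = features[:, perm[:300]]      # unknown face images
gallery_set = features[:, perm[300:]]    # known face image set
```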
In this step, the high-frequency part is discarded after the DCT transform, so that face images of different scales have the same DCT coefficients, laying the foundation for the subsequent matching identification.
Step S3: PCA (principal component analysis) constructs a feature subspace containing the K largest eigenvalues, and the sample set transformed in S2 is projected into this subspace to obtain the eigenvalues of each face image, thereby reducing the dimensionality of the feature space.
The sample set of face images in vector form is $X=[X_1, X_2, \dots, X_m]$, where $X_i$ ($i=1,2,\dots,m$) denotes the i-th face image unrolled into an $n \times 1$ column vector, n is the image dimension and m is the total number of images in the sample set. The average face vector is $\bar{X}=\frac{1}{m}\sum_{i=1}^{m}X_i$, and subtracting the average face from each face image gives $d_i = X_i - \bar{X}$, forming the new matrix $A=[d_1, d_2, \dots, d_m]$. The covariance matrix is constructed as $C=\frac{1}{m}AA^{T}$. The eigenvalues and eigenvectors of the covariance matrix C are computed, and the eigenvectors corresponding to the top k = 20 eigenvalues are assembled into a feature subspace. The matrix corresponding to this subspace is the final eigenvalue matrix of the face image sample set.
Step S4, identification by the normalized correlation coefficient: the eigenvalues of each face image to be recognized are correlated with the eigenvalues of the known face images using formula (2), each matching value is computed, and the largest one is selected as indicating the matching face image.
For the eigenvalue matrix f(x, y) of an image of size M × N and the eigenvalue matrix ω(s, t) of a sub-image of size B × A, the correlation between f and ω can be expressed as

$$c(x,y)=\sum_{s}\sum_{t}\omega(s,t)\,f(x+s,\,y+t),$$

where x and y are the row and column indices of the two-dimensional matrix of image eigenvalues f, and s and t are the row and column indices of the two-dimensional matrix of eigenvalues ω.

c(x, y) is computed by moving the sub-matrix ω(s, t) point by point over the feature matrix f: the origin of ω is placed at the point (x, y), the sum of the products of the corresponding values of ω and f over the area covered by ω is computed, and the result is taken as the correlation value c(x, y) at (x, y).

The correlation is thus obtained by pairing image elements with sub-pattern elements, multiplying the paired elements and accumulating. The sub-image ω can be viewed as a vector $\vec{b}$ stored row-wise or column-wise, and the image area covered by ω during the computation as another vector $\vec{a}$ stored in the same way; the correlation computation then becomes a dot product between vectors.

The dot product of the two vectors is

$$\vec{a}\cdot\vec{b}=\|\vec{a}\|\,\|\vec{b}\|\cos\theta,$$

where θ is the angle between the vectors $\vec{a}$ and $\vec{b}$. Clearly, when $\vec{a}$ and $\vec{b}$ point in exactly the same direction (are parallel), cos θ = 1 and the dot product attains its maximum value $\|\vec{a}\|\,\|\vec{b}\|$: the correlation yields the maximum response when the local image area is similar to the sub-image pattern. However, the value of the dot product also depends on the magnitudes of the vectors, so the dot-product response is sensitive to the vector values: a region of f may be dissimilar to ω and yet produce a high response simply because its values are large. This problem is solved by normalizing each vector by its modulus, i.e. by using $\cos\theta = \dfrac{\vec{a}\cdot\vec{b}}{\|\vec{a}\|\,\|\vec{b}\|}$ to measure the correlation.

The improved formula, the normalized correlation coefficient, is:

$$c(x,y)=\frac{\sum_{s,t}\omega(s,t)\,f(x+s,\,y+t)}{\Big[\sum_{s,t}\omega(s,t)^{2}\Big]^{1/2}\Big[\sum_{s,t}f(x+s,\,y+t)^{2}\Big]^{1/2}} \qquad (2)$$
the correlation coefficient normalization method actually represents the similarity between two vectors (feature values of an image) by a cosine value of an angle between the vectors, i.e., the closer to 1, the better. Respectively calculating correlation coefficients of the characteristic values of the unknown face image and the characteristic values of the known face image set and normalizing the correlation coefficients to obtain a group of characteristic matching values; and selecting the known face image with the maximum feature matching value as the known face image matched with the unknown face image.
With the experimental sample set described in step S1, the effect of the number of retained coefficients X × Y after the DCT transform of step S2 on the recognition rate of the sample set is shown in Table 1; the effect of the number of PCA eigenvalues in step S3 on the recognition rate of the sample set is shown in Table 2.
A mixing experiment on the international ORL (Olivetti Research Laboratory) face library and the face images of different scales collected for the experiment shows that the matching identification of face images of different scales by the invention is effective and achieves a good recognition rate when the retained DCT coefficients number at least 56 × 46 and the number of PCA eigenvalues is at least 20.
Table 1. Recognition accuracy with the number of PCA eigenvalues fixed
Table 2. Recognition accuracy with the retained DCT coefficients fixed

Claims (4)

1. A method for identifying face images of different scales, characterized by comprising the following steps:
a. performing a discrete cosine transform (DCT) on the face images and obtaining the same DCT coefficients: the unknown face image to be recognized and the set of known face images used for matching jointly form a sample set; the DCT is applied to each two-dimensional grayscale face image in the sample set, and a 56 × 46 block of components is selected from the low-frequency part of the two-dimensional DCT coefficient matrix of the transform result and retained;
b. performing principal component analysis (PCA) on the transformed sample set to obtain a lower feature dimension: PCA constructs a feature subspace containing the 20 largest eigenvalues, and the sample set transformed in step a is projected into this subspace to obtain the eigenvalues of each face image;
c. computing the feature matching value between face images with the normalized correlation coefficient: the correlation coefficients between the eigenvalues of the unknown face image and the eigenvalues of the known face image set are computed and normalized, giving a set of feature matching values; the known face image with the largest feature matching value is selected as the one matching the unknown face image.
2. The method as claimed in claim 1, characterized in that in step a, for a digital grayscale face image of scale M × N, the two-dimensional discrete cosine transform formula is:
$$F(u,v)=\alpha(u)\,\alpha(v)\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x,y)\cos\frac{\pi(2x+1)u}{2M}\,\cos\frac{\pi(2y+1)v}{2N} \qquad (1)$$

M and N are the numbers of rows and columns of the two-dimensional pixel matrix of the grayscale face image; x and y are the two-dimensional spatial coordinates of the image; f(x, y) is the image element value at (x, y); the normalization factors are

$$\alpha(u)=\begin{cases}\sqrt{1/M}, & u=0\\ \sqrt{2/M}, & u=1,2,\dots,M-1,\end{cases}\qquad \alpha(v)=\begin{cases}\sqrt{1/N}, & v=0\\ \sqrt{2/N}, & v=1,2,\dots,N-1;\end{cases}$$

after the DCT the low-frequency components of the image are concentrated in the upper-left corner and the high-frequency components are distributed in the lower-right corner; the high-frequency part is cut off and the 56 × 46 DCT coefficients of the low-frequency part are retained, so that face images of different scales have the same scale.
3. The method as claimed in claim 1, characterized in that in step b, PCA is used to reduce the dimensionality of the DCT-transformed face image value matrices:
the vector form of the face image sample set is as follows: x ═ X1,X2······Xm]Wherein X isi(i ═ 1,2,3.. m) represents the ith human face image and is expanded into a column vector of n × 1 dimensions, n is the image dimension, m is the sum of the image sample set, and the vector of the average face is:the mean face of each face image is:thereby forming a new matrixConstructing a covariance matrixCalculating the eigenvalue and the eigenvector of the covariance matrix M, and constructing the eigenvector corresponding to the front k being 20 eigenvalues into an eigen subspace; and the matrix corresponding to the subspace is the final eigenvalue matrix of the face image sample set.
4. The method for recognizing face images of different scales as claimed in claim 1, characterized in that in step c, the feature matching value of two face images is calculated according to formula (2), establishing face image matching based on the normalized correlation coefficient:
wherein,an image eigenvalue matrix is adopted, and x and y are row and column numbers of a two-dimensional matrix of the image eigenvalue matrix; ω (s, t) is a magnitudeThe characteristic value of the unknown face image and the characteristic value of the known face image set are respectively calculated and normalized to obtain a group of characteristic matching values, and the selected characteristic matching value with the maximum value is the known face image matched with the unknown face image.
CN201610083936.4A 2016-02-06 2016-02-06 Recognition method for face images of different scales Pending CN105740838A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610083936.4A CN105740838A (en) 2016-02-06 2016-02-06 Recognition method for face images of different scales

Publications (1)

Publication Number Publication Date
CN105740838A true CN105740838A (en) 2016-07-06

Family

ID=56245994

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610083936.4A 2016-02-06 2016-02-06 Pending Recognition method for face images of different scales

Country Status (1)

Country Link
CN (1) CN105740838A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101261678A (en) * 2008-03-18 2008-09-10 中山大学 A method for normalizing face light on feature image with different size
CN101604376A (en) * 2008-10-11 2009-12-16 大连大学 Face identification method based on the HMM-SVM mixture model
CN102982322A (en) * 2012-12-07 2013-03-20 大连大学 Face recognition method based on PCA (principal component analysis) image reconstruction and LDA (linear discriminant analysis)
CN103729625A (en) * 2013-12-31 2014-04-16 青岛高校信息产业有限公司 Face identification method
CN104036254A (en) * 2014-06-20 2014-09-10 成都凯智科技有限公司 Face recognition method

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106709598A (en) * 2016-12-15 2017-05-24 全球能源互联网研究院 One-class sample-based voltage stability prediction judgment method
CN106709598B (en) * 2016-12-15 2022-02-15 全球能源互联网研究院 Voltage stability prediction and judgment method based on single-class samples
CN108921043A (en) * 2018-06-08 2018-11-30 新疆大学 A kind of Uygur nationality's face identification method of new blending algorithm
CN110210340A (en) * 2019-05-20 2019-09-06 深圳供电局有限公司 Face characteristic value comparison method and system and readable storage medium
CN110458007A (en) * 2019-07-03 2019-11-15 平安科技(深圳)有限公司 Match method, apparatus, computer equipment and the storage medium of face
CN110458007B (en) * 2019-07-03 2023-10-27 平安科技(深圳)有限公司 Method, device, computer equipment and storage medium for matching human faces
CN112070913A (en) * 2020-07-17 2020-12-11 盛威时代科技集团有限公司 Ticket checking processing method based on Internet of things technology
CN112070913B (en) * 2020-07-17 2022-05-10 盛威时代科技集团有限公司 Ticket checking processing method based on Internet of things technology

Similar Documents

Publication Publication Date Title
US6681032B2 (en) Real-time facial recognition and verification system
EP2737434B1 (en) Gait recognition methods and systems
Li et al. Overview of principal component analysis algorithm
US20030059124A1 (en) Real-time facial recognition and verification system
CN106934359B (en) Multi-view gait recognition method and system based on high-order tensor subspace learning
US11769316B2 (en) Facial image recognition using pseudo-images
CN105550657B (en) Improvement SIFT face feature extraction method based on key point
CN105740838A (en) Recognition method in allusion to facial images with different dimensions
CN106096517A (en) A kind of face identification method based on low-rank matrix Yu eigenface
Bae et al. Real-time face detection and recognition using hybrid-information extracted from face space and facial features
CN107103266A (en) The training of two-dimension human face fraud detection grader and face fraud detection method
CN107368803A (en) A kind of face identification method and system based on classification rarefaction representation
Rai et al. An illumination, expression, and noise invariant gender classifier using two-directional 2DPCA on real Gabor space
Shermina Face recognition system using multilinear principal component analysis and locality preserving projection
CN109919056B (en) Face recognition method based on discriminant principal component analysis
Gottumukkal et al. Real time face detection from color video stream based on PCA method
Mohammed et al. Face Recognition Based on Viola-Jones Face Detection Method and Principle Component Analysis (PCA)
Hbali et al. Object detection based on HOG features: Faces and dual-eyes augmented reality
Alrikabi et al. Deep learning-based face detection and recognition system
Winston et al. Performance comparison of feature extraction methods for iris recognition
JP3841482B2 (en) Face image recognition device
Razzaq et al. Structural Geodesic-Tchebychev Transform: An image similarity measure for face recognition
Khalifa et al. A hybrid Face Recognition Technique as an Anti-Theft Mechanism
Sa Gender Classification from Facial Images using PCA and SVM
Zhang et al. Gender recognition based on face image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160706