CN112232298B - Face classification method for small sample training data - Google Patents


Info

Publication number
CN112232298B
Authority
CN
China
Prior art keywords
matrix
sample
training data
face
model
Prior art date
Legal status
Active
Application number
CN202011243596.XA
Other languages
Chinese (zh)
Other versions
CN112232298A (en)
Inventor
孙磊
苏浩
谢翠芳
刘耘彤
王邵琦
崔如瑶
Current Assignee
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT
Publication of CN112232298A
Application granted
Publication of CN112232298B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a face classification method for small sample training data, and belongs to the technical field of computer vision and pattern recognition. The method comprises the following steps: step S1: establishing a training data set; step S2: preprocessing the data; step S3: respectively establishing a sample matrix and a class orthogonal matrix, establishing a model and solving for the feature vectors; step S4: training and solving the model to obtain the feature matrix Q; step S5: classifying faces by testing the model. The method greatly reduces the cost of collecting samples under small-sample training conditions, which matches the practical situation in which data collection is expensive; it is not limited to small-sample data, and classification performance still improves noticeably as samples are added; modeling with a linear model lowers the deployment cost of face classification and increases the running speed; and the fast training stage reduces the cost of migrating the model to other data sets.

Description

Face classification method for small sample training data
Technical Field
The invention relates to a face classification method for small sample training data, and belongs to the technical field of information technology, computer vision and pattern recognition.
Background
At present, face classification and related technologies are needed in many settings, such as face-recognition attendance machines. Generally, before face classification can be performed, face images of a designated group of people must be collected in advance, the images preprocessed, their features extracted, and classification finally completed with a pattern recognition algorithm. In the field of pattern recognition, this process is called the model training stage, while applying the model to the actual application requirements belongs to the model's test stage. In the test stage, after a test image is acquired, its features are extracted and it is classified with the face classification model obtained in the training stage, yielding the final face classification result. In recent years, classifying faces with machine learning and deep learning algorithms has become popular.
However, in practical applications, face classification is limited by two factors. On the one hand, large quantities of high-quality face images are difficult to obtain, so methods such as neural networks and deep learning, which depend on large numbers of training samples, struggle to work. On the other hand, in industrial and everyday settings users place high demands on the real-time performance of face classification, yet cost constraints leave the relevant equipment with very limited computing power, so complex face classification algorithms (such as deep-learning-based image classifiers) are hard to deploy on such devices or to run in real time.
Against this background of practical environments and requirements, and in view of constraints such as scarce training samples, difficult algorithm deployment and long running times, the invention provides a face classification method for training data under small-sample conditions. Its prominent characteristics are: the model can be established effectively with small-sample training data, greatly reducing the cost of collecting samples; the method is not limited to the small-sample case, and classification accuracy improves noticeably as training data are added; modeling with a linear model lowers the deployment cost of the face classification method and increases its running speed; and the training stage runs quickly, which reduces the cost of migrating the model to other data sets.
Disclosure of Invention
The invention aims to provide a face classification method for small sample training data, addressing the technical shortcoming of existing face classification methods, which struggle to meet usage requirements when training samples are scarce and real-time demands are high.
In order to achieve the purpose, the invention adopts the following technical scheme.
The face classification method aiming at the small sample training data comprises the following steps:
step S1: establishing a training data set, specifically: acquiring front face gray images of M persons as training data to generate a training data set;
the training data set comprises the images of the M persons and their category labels 0, 1, ..., M-1, with at least one image per label; the total number of images is N, and N >= M;
step S2: the data preprocessing specifically comprises the following steps: rearranging each image sample into a column vector, and then carrying out normalization processing on all images;
wherein the column vectors are denoted {x_i, i = 1, ..., N}; x_i represents the i-th sample, and N represents the total number of samples, equal to the total number of images in step S1;
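For illustration, steps S1-S2 could be implemented along the following lines in NumPy (a minimal sketch, not part of the patent; the function name, the float64 cast and the per-sample normalization axis are assumptions, since the embodiment only states that the sample mean is subtracted and the result divided by the sample standard deviation):

```python
import numpy as np

def preprocess(images):
    """Rearrange each grayscale image into a column vector and normalize.

    images: list of N 2-D arrays of identical shape (H, W).
    Returns a (d, N) array whose i-th column is the normalized sample x_i,
    with d = H * W.
    """
    # Flatten each image into a column vector x_i of length d.
    cols = [img.astype(np.float64).reshape(-1) for img in images]
    X = np.stack(cols, axis=1)  # shape (d, N)

    # Normalization as described in the embodiment: subtract the sample mean
    # and divide by the sample standard deviation (applied per sample here;
    # the axis is an assumption, as the patent does not spell it out).
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    return X
```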
step S3: respectively establishing a sample matrix and a class orthogonal matrix, and establishing a model, which specifically comprises the following substeps:
step S3-1: combining the column vectors of all the samples obtained in step S2 into a sample matrix X, wherein during the combination the samples are arranged in order of their class labels, so that samples of the same class are adjacent;
step S3-2: designing M class orthogonal vectors according to the class of each sample and combining them into a matrix, denoted R and called the class orthogonal matrix;
wherein the class orthogonal vectors are denoted {r_j, j = 1, ..., M}, i.e. the columns of the matrix R, and satisfy the following constraints:
1) normalization constraint: r^(i)T · r^(i) = 1;
2) orthogonality constraint: r^(i)T · r^(j) = 0, i ≠ j;
3) intra-class normalization: (I^(i) r^(i))^T · (I^(i) r^(i)) = 1;
4) inter-class orthogonality: (I^(i) r^(i))^T · (I^(i) r^(j)) = 0, i ≠ j;
wherein r_j has length N and each of its elements corresponds to one sample vector; I^(i) r_j denotes the column vector formed by the elements of r_j corresponding to the i-th class of face data; I is an identity matrix of size N × N, each row of which corresponds to one sample vector; and I^(i) denotes the matrix formed by the rows of I corresponding to the i-th class of face data;
step S3-3: based on the sample matrix in step S3-1 and the class orthogonal matrix in step S3-2, a linear model of the following form is established,
R = X^T Q
wherein the Q matrix has M columns, one corresponding to each face class; Q is called the feature matrix, and its column vectors are the feature vectors.
Step S4: training and solving the model, specifically: solving the feature matrix Q by computing Q = X[pinv(X^T X)]R.
Wherein pinv(X^T X) represents the generalized inverse matrix of X^T X.
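Step S4 then reduces to a single closed-form computation; a sketch, assuming X and R are built as above and using numpy.linalg.pinv for the generalized inverse:

```python
import numpy as np

def train(X, R):
    """Solve R = X^T Q for the feature matrix Q.

    X: (d, N) sample matrix, R: (N, M) class orthogonal matrix.
    Returns Q of shape (d, M), computed as Q = X * pinv(X^T X) * R.
    """
    return X @ np.linalg.pinv(X.T @ X) @ R
```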
Step S5: classifying the human face through the test model, which specifically comprises the following substeps:
step S5-1: inputting a test sample into the model and recording as z;
step S5-2: calculating v = Q^T z from the feature matrix Q obtained in step S4, wherein v is a column vector referred to as the projection of the vector z onto Q;
step S5-3: squaring each element of v; the index of the element of v with the largest squared value is the classification result for the test sample z.
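Step S5 then amounts to one matrix-vector product and an argmax; a minimal sketch (the test sample is assumed to have been preprocessed in the same way as the training data, and the function name is illustrative):

```python
import numpy as np

def classify(Q, z):
    """Project a preprocessed test sample z (length d) onto Q and pick a class.

    v = Q^T z is the projection (length M); the index of the largest
    squared element of v is returned as the predicted class label.
    """
    v = Q.T @ z                    # step S5-2
    return int(np.argmax(v ** 2))  # step S5-3
```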
Advantageous effects
Compared with the prior art, the face classification method aiming at the small sample training data has the following beneficial effects:
1. the method can effectively establish a classification model and classify faces even when face-image training data are scarce; this matches practical conditions better and greatly reduces the cost of the data collection stage;
2. the method is not limited to small-sample data, and its classification performance improves further as sample data are added;
3. modeling with a linear model reduces the deployment cost of the face classification method and increases its running speed;
4. the training stage runs quickly, which reduces the cost of migrating the model to other data sets.
Drawings
Fig. 1 is a flowchart of a face classification method for small sample training data according to the present invention;
FIG. 2 is a face data display of an embodiment of a face classification method for small sample training data according to the present invention;
FIG. 3 is a feature vector visualization result of an embodiment of a face classification method for small sample training data according to the present invention;
Fig. 4 shows the results of binary classification of test data under small-sample conditions using an embodiment of the face classification method for small sample training data of the present invention and, for comparison, least squares fitting (or logistic regression).
Detailed Description
The following describes a face classification method for small sample training data according to the present invention in detail with reference to the accompanying drawings and embodiments.
Example 1
Fig. 1 is a flowchart of a face classification method for small sample training data according to the present invention, and as can be seen from fig. 1, the method of the present invention includes the following steps:
step S1: establishing a training data set, specifically: collecting face image data of M individuals, who respectively correspond to category labels 0, 1, ..., M-1, ensuring that at least one image of each person is provided;
step S2: preprocessing data, specifically, rearranging each image into a column vector, and then performing normalization processing on all image data;
step S3: respectively establishing a sample matrix and a class orthogonal matrix, and establishing the model;
step S4: training and solving the model, specifically: solving the feature matrix Q by computing Q = X[pinv(X^T X)]R.
step S5: classifying faces by testing the model.
The invention is further described below.
Face images of M individuals are collected, corresponding to category labels 0, 1, ..., M-1 respectively. Here we replace this collection process with an open-source face data set. We use the CMU-PIE face data set, which contains grayscale face images of 68 individuals, 24 images per person at a resolution of 32 × 32. The images cover 13 different poses, 43 different illumination conditions and 4 different expressions. Some samples are shown in Fig. 2.
A portion of the images is randomly selected from the data as training data and the rest is used as test data. For the training data, the pixels of each picture are rearranged into a column vector, denoted {x_i, i = 1, ..., N}, where N is the total number of training samples. All image data are then normalized; the normalization subtracts the sample mean and divides by the sample standard deviation. The column vectors of all samples are combined into a matrix, denoted X and called the sample matrix. During combination the columns are ordered by the class label of each sample vector; for example, the left-most columns of X are all face images of the first person, followed by all face data of the second person, and so on. The number of rows of X is the sample dimension, i.e. the number of pixels of an image (32 × 32 = 1024); the number of columns of X is the total number of training samples, N.
Sixty-eight class orthogonal vectors are then designed according to the constraints in step S3-2 and combined into a matrix, denoted R. The number of rows of R is the total number of training samples and the number of columns is 68. The class orthogonal vectors are denoted {r_j, j = 1, ..., M}, i.e. the columns of the matrix R. Specifically, when each person contributes the same number of training images, r_j can be designed as follows:
[Elementwise definition of r_j, given as an equation image in the original]
wherein the elements of r_j corresponding to the j-th class of samples are 1 and the remaining elements are 0. The vectors {r_j, j = 1, ..., M} are combined into the class orthogonal matrix R. The model is then established from the sample matrix X and the class orthogonal matrix R:
R = X^T Q
the model is then trained. Unlike classical machine learning, which is trained by iterative optimization, the method calculates Q = X [ pinv (X) ] T X)]And R completes the training process, and the process of solving the model can be regarded as the training model. Where Q is the feature matrix, and each column of Q is referred to as a feature vector. The number of rows of Q represents the dimension of the feature vector, equal to the resolution of the image (32 × 32); the number of columns for Q represents the number of categories, equal to 68. Fig. 3 shows the results after the 68 feature vectors are visualized, and it can be seen that each feature vector extracts the face features corresponding to each type of person.
It should be noted that although the solution method for the feature matrix in step S3-3 looks very similar to classical least squares fitting (or logistic regression), the method is fundamentally different from them. The main difference is that least squares fitting (or logistic regression) marks samples of different classes with one-hot coding, and that coding is independent of the number of samples; this method instead constructs the class orthogonal matrix of step S3-2 on the basis of class-wise orthonormality, and the design of the class orthogonal matrix depends on the number of samples. Fig. 4 shows the results of binary classification of test data under small-sample conditions using least squares fitting (or logistic regression) and the present method, respectively, where black denotes the projections of test data produced by this method and white denotes the projections produced by least squares fitting (or logistic regression); circles and triangles denote the two classes; and the straight line denotes the classification boundary. The projections produced by this method can be separated completely by the classification boundary.
Finally, a test sample, denoted z, is input into the model. From the feature matrix Q obtained in step S4, v = Q^T z is computed, where v is a column vector of length 68 called the projection of the vector z onto Q. Each element of v is squared, and the index of the element with the largest squared value is the classification result for the test sample z. For example, if the 0th element of v has the largest squared value, the test vector z is judged to belong to the face images of the 0th person.
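Continuing the sketch above, the test stage over a whole batch of (here synthetic, randomly generated) test samples could look like this; with random stand-in data the accuracy is of course near chance, unlike the results on real face data reported below:

```python
# Synthetic stand-in test data with ground-truth labels (reuses rng, d, M, Q above).
Xtest = rng.standard_normal((d, 2 * M))
ytest = np.repeat(np.arange(M), 2)

V = Q.T @ Xtest                   # column i is the projection v = Q^T z_i (length 68)
pred = np.argmax(V ** 2, axis=0)  # step S5-3: largest squared element gives the class
accuracy = float(np.mean(pred == ytest))
print(accuracy)                   # near chance here, since the stand-in data are random
```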
Table 1 shows how the classification accuracy obtained on the test data varies with the number of training samples. Table 2 compares the running speed of the method with other methods: Linear Discriminant Analysis (LDA) is a classical linear discriminant method, but with small samples LDA cannot be computed because of a singular matrix; the Fukunaga-Koontz Transform (FKT) is a classical linear method, but its solution requires matrix decomposition, so it takes longer to run; Convolutional Neural Networks (CNN) are a classical deep learning method, but CNNs cannot train their parameters effectively with small samples.
TABLE 1. Classification accuracy obtained on the test data as a function of the number of training samples
[Table 1 appears as an image in the original document]
TABLE 2. Comparison of the running speed of this method with other methods
[Table 2 appears as an image in the original document]
As Tables 1 and 2 show, the method can effectively establish a classification model for face classification when face-image training data are scarce (small samples). Specifically: classical face classification methods need to extract features from a large number of samples and build a feature library, and the deep learning methods widely pursued in recent years also need large numbers of samples for model training; in practice, however, face data are relatively private and hard to obtain in bulk, so this method fits practical conditions better and greatly reduces the cost of data collection.
As can be seen from Table 1, the method is not limited to small-sample data, and its classification performance improves as the number of samples increases. Specifically, the feature-vector solution proposed by the method is constrained by neither the number of samples nor the sample dimensionality, which avoids the matrix-singularity problem that arises when the number of samples is far smaller than the sample dimension.
As can be seen from Table 2, the method greatly increases the speed of face classification and thereby reduces deployment cost. Specifically: neural network and deep learning methods consume large amounts of computing resources, so deploying such models while meeting real-time requirements means either separately configuring parallel computing hardware such as a capable GPU, or uploading local data to a cloud server for computation, both of which carry high operating and maintenance costs. The model in this method is a linear model and its operations are matrix operations, which makes it run far faster than existing neural network and deep learning methods. Moreover, a linear model built on matrix operations can be deployed at relatively low cost on inexpensive embedded devices, and could even be implemented as a dedicated algorithm chip, which methods such as deep learning cannot match. For example, the average price of a GPU currently used for deep learning parallel computing is more than 3000 yuan, whereas an ordinary embedded device averages about 500 yuan, and a chip implementation of this method would cost even less.
The method also migrates well, with low migration cost. Specifically: migrating a deep learning method to a new application or scene requires retraining the model on a new data set, and the training consumes so much time and computing resource that migration is expensive; the training process of this method involves no complex optimization algorithm, so the model can be retrained quickly after migration, which reduces the migration cost.
The method is also more interpretable. Methods such as deep learning are black-box models that lack a theoretical basis, which makes their hyperparameter settings and network design very challenging; a classical linear model is more interpretable, so parameter tuning and model modification are simpler.
In summary, the invention provides a face classification method for small-sample training data, addressing the shortcoming of existing face classification methods that struggle to meet usage requirements when training samples are scarce and real-time demands are high. By establishing a linear model and optimizing its solution, real-time face classification is achieved under small-sample conditions. At the same time, the highly interpretable linear model further reduces the deployment and migration cost of the algorithm on embedded devices, and the algorithm could even be implemented as a dedicated algorithm chip in the future, further reducing cost and increasing speed.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (6)

1. A face classification method aiming at small sample training data is characterized in that: the method comprises the following steps:
step S1: establishing a training data set, specifically: acquiring frontal face grayscale images of M persons as training data to generate a training data set; wherein the M persons respectively correspond to category labels 0, 1, ..., M-1, the total number of images is N, and N >= M;
step S2: the data preprocessing specifically comprises the following steps: rearranging each image sample into a column vector, and then carrying out normalization processing on all the images;
and step S3: respectively establishing a sample matrix and a category orthogonal matrix, and establishing a model, and specifically comprising the following substeps:
step S3-1: combining the column vectors of all the samples obtained in step S2 into a sample matrix X, wherein during the combination the samples are arranged in order of their class labels;
step S3-2: according to the category to which each sample belongs, M category orthogonal vectors are designed and combined to form a matrix, and the matrix is marked as R which is called as a category orthogonal matrix;
wherein the class orthogonal vectors are denoted {r_j, j = 1, ..., M} and satisfy the following constraints:
1) normalization constraint: r^(i)T · r^(i) = 1;
2) orthogonality constraint: r^(i)T · r^(j) = 0, i ≠ j;
3) intra-class normalization: (I^(i) r^(i))^T · (I^(i) r^(i)) = 1;
4) inter-class orthogonality: (I^(i) r^(i))^T · (I^(i) r^(j)) = 0, i ≠ j;
wherein the class orthogonal vector r_j has length N and is the j-th column of the class orthogonal matrix R, and each element of the class orthogonal matrix R corresponds to a sample vector; I^(i) r_j denotes the column vector formed by the elements of r_j corresponding to the i-th class of face data; I is an identity matrix of size N × N, each row of which corresponds to a sample vector; and I^(i) denotes the matrix formed by the rows of I corresponding to the i-th class of face data;
step S3-3: according to the sample matrix in the step S3-1 and the class orthogonal matrix in the step S3-2, establishing a linear model in the following form, and solving a characteristic vector based on the linear model;
R = X^T Q
wherein the columns of the Q matrix respectively correspond to each face class; the Q matrix is called the feature matrix, and its column vectors are the feature vectors;
step S4: training and solving the model, specifically: solving the feature matrix Q by computing Q = X[pinv(X^T X)]R;
wherein pinv(X^T X) represents the generalized inverse matrix of X^T X;
step S5: classifying the human face through the test model, which specifically comprises the following substeps:
step S5-1: inputting a test sample into the model and recording as z;
step S5-2: calculating v = Q^T z from the feature matrix Q obtained in step S4;
Step S5-3: and squaring each element in v, wherein the subscript serial number of the element with the maximum result in v is the classification result of the test sample z.
2. The method for classifying a face of small sample training data according to claim 1, wherein: in step S1, the training data set includes M images and category labels, and there are at least 1 image of each person.
3. The method of claim 1, wherein the face classification method is based on small sample training data, and comprises the following steps: in step S2, the column vectors are denoted {x_i, i = 1, ..., N}; x_i represents the i-th sample, and N represents the total number of samples, equal to the total number of images in step S1.
4. The method of claim 3, wherein the face classification method is based on small sample training data, and comprises the following steps: in step S3-3, the Q matrix has M columns.
5. The method for classifying faces of small sample training data according to claim 4, wherein: v is a column vector in step S5-2.
6. The method of claim 5, wherein the face classification method is based on small sample training data, and comprises the following steps: in step S5-2, v is the projection of vector z on Q.
CN202011243596.XA 2020-11-05 2020-11-10 Face classification method for small sample training data Active CN112232298B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2020112265297 2020-11-05
CN202011226529 2020-11-05

Publications (2)

Publication Number Publication Date
CN112232298A CN112232298A (en) 2021-01-15
CN112232298B true CN112232298B (en) 2022-11-15

Family

ID=74122225

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011243596.XA Active CN112232298B (en) 2020-11-05 2020-11-10 Face classification method for small sample training data

Country Status (1)

Country Link
CN (1) CN112232298B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8478005B2 (en) * 2011-04-11 2013-07-02 King Fahd University Of Petroleum And Minerals Method of performing facial recognition using genetically modified fuzzy linear discriminant analysis

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101877065A (en) * 2009-11-26 2010-11-03 南京信息工程大学 Extraction and identification method of non-linear authentication characteristic of facial image under small sample condition
CN102982322A (en) * 2012-12-07 2013-03-20 大连大学 Face recognition method based on PCA (principal component analysis) image reconstruction and LDA (linear discriminant analysis)
CN107832786A (en) * 2017-10-31 2018-03-23 济南大学 A kind of recognition of face sorting technique based on dictionary learning
CN109325416A (en) * 2018-08-23 2019-02-12 广州智慧城市发展研究院 A kind of high-definition image fast face recognition method based on PCA and SRC

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Nearest orthogonal matrix representation for face recognition; Jian Zhang et al.; Neurocomputing; 2014-09-28; pp. 471-480 *
Incremental discriminative non-negative matrix factorization algorithm and its application in face recognition; Cai Jing et al.; Journal of Graphics; 2017-10-15 (No. 05); pp. 715-721 *
Face recognition using Gabor transform and two-directional LDA; Nie Xiangfei; Journal of Chongqing Three Gorges University; 2008-05-20 (No. 03); pp. 15-20 *

Also Published As

Publication number Publication date
CN112232298A (en) 2021-01-15

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant