CN110826534B - Face key point detection method and system based on local principal component analysis - Google Patents


Info

Publication number
CN110826534B
Authority
CN
China
Prior art keywords
face
key points
principal component
local
face image
Prior art date
Legal status
Active
Application number
CN201911208163.8A
Other languages
Chinese (zh)
Other versions
CN110826534A (en)
Inventor
戴侃侃
李云夕
熊子瑶
Current Assignee
Hangzhou Xiaoying Innovation Technology Co ltd
Original Assignee
Hangzhou Xiaoying Innovation Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Xiaoying Innovation Technology Co ltd
Priority to CN201911208163.8A
Publication of CN110826534A
Application granted
Publication of CN110826534B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses a face key point detection method and system based on local principal component analysis. The method comprises the following steps: S1, collecting a large amount of face image sample data and marking the face key points; S2, dividing the face key points into a plurality of local key-point groups and processing each group separately with principal component analysis to obtain the principal component features of each group; S3, calculating the combination coefficients of each face image's key points under the principal component features; S4, constructing a regression model and training it on the combination coefficients to generate a combination coefficient regression model; S5, inputting the face image to be detected into the combination coefficient regression model and predicting its combination coefficients; S6, restoring the face key points from the predicted combination coefficients and the principal component features. By performing principal component analysis locally and predicting local principal component coefficients, the method reduces the complexity of applying principal component analysis to all key points at once and improves the regression modeling precision.

Description

Face key point detection method and system based on local principal component analysis
Technical Field
The invention relates to the field of image processing, in particular to a face key point detection method and system based on local principal component analysis.
Background
In recent years, research on face analysis has been increasing. Face analysis uses computer vision and pattern recognition theory to identify a person's expression, position, identity and so on from the face. Face key point detection is an important basic link in face recognition tasks, and accurate detection of face key points plays a key role in many practical applications and research topics, such as face pose recognition and correction, expression recognition and mouth-shape recognition. How to obtain high-precision face key points is therefore a popular research problem in computer vision, image processing and related fields; the research also remains challenging under the influence of face pose, occlusion and other factors. Face key point detection means locating, for a given face image, the key region positions of the face, including the eyebrows, eyes, nose, mouth, face contour and so on.
Publication CN107967456A discloses a face multi-neural-network cascade recognition method based on face key points, which detects the face image with the MTCNN algorithm and then rotates, translates and scales the face by affine transformation for subsequent processing. A convolutional neural network then detects the face contour key points and the interior face key points separately, after which a principal component analysis (PCA) algorithm reduces the feature dimensionality. During dimensionality reduction, a class-based method can be adopted for different classes, alleviating the problems that the traditional PCA algorithm cannot effectively use class information and is not robust under illumination and expression changes.
However, applying PCA directly to all face key points in the above method leads to low precision, because each part of the face varies widely and the dimensionality after combining all parts is high. How to realize face key point detection with low complexity, high processing efficiency and high precision is therefore an urgent problem in the field.
Disclosure of Invention
The invention aims to provide a face key point detection method and system based on local principal component analysis to address the defects of the prior art. The method performs local principal component analysis on the key points to extract features and reduce dimensionality, thereby modeling the face key points with high precision; by predicting the local principal component coefficients instead of the key points directly, it reduces the difficulty of the model's task, reduces the model size and greatly improves prediction speed.
In order to achieve the purpose, the invention adopts the following technical scheme:
a human face key point detection method based on local principal component analysis comprises the following steps:
s1, collecting a large amount of face image sample data, and marking face key points;
s2, dividing the key points of the face into a plurality of local key points, and respectively processing each local key point by adopting principal component analysis to obtain principal component characteristics of each local key point;
s3, calculating a combination coefficient of each key point of each face image under the principal component characteristics;
s4, constructing a regression model, and training the model through the combination coefficient to generate a combination coefficient regression model;
s5, inputting the face image to be detected into a combination coefficient regression model, and predicting to obtain the combination coefficient;
and S6, restoring the key points of the human face based on the combination coefficients obtained by prediction and the principal component characteristics.
Further, the step S2 includes:
combining the local key points of the gray-scale image of each face image sample into a one-dimensional vector (by row vector or column vector), and denoting the data of the nth local key-point set of the mth face image sample as $X_{mn}$; if the total number of face image samples is M, the face image sample data set matrix is

$$X_n = \left[ X_{1n}, X_{2n}, \cdots, X_{Mn} \right]^{T}, \quad n = 1, 2, \cdots, N,$$

where N is the number of local key-point groups into which the face key points are divided, and

$$\bar{X}_n = \frac{1}{M} \sum_{m=1}^{M} X_{mn}$$

is the mean of the nth local key-point data over all face image samples;

respectively calculating the covariance matrix corresponding to each face image sample data set matrix:

$$\Sigma_n = \frac{1}{M} \sum_{m=1}^{M} \left( X_{mn} - \bar{X}_n \right) \left( X_{mn} - \bar{X}_n \right)^{T};$$

performing eigenvalue decomposition on each covariance matrix, arranging the eigenvalues from large to small, and taking the eigenvectors corresponding to the largest J eigenvalues as the principal components of the corresponding local key points.
Further, step S6 is specifically:

$$Pts_n = \bar{X}_n + \sum_{j=1}^{J} a_{nj} C_{nj},$$

where $Pts_n$ is the restored nth local key-point group, $C_{nj}$ denotes the jth principal component of the nth local key-point group, and $a_{nj}$ denotes the combination coefficient corresponding to $C_{nj}$.
Further, the plurality of local key points includes a left eyebrow, a right eyebrow, a left eye, a right eye, a nose, a mouth, and a cheek.
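For illustration only — the patent does not fix a landmark annotation scheme — the division into these seven local groups could be sketched as follows, assuming the common 68-point iBUG convention (the index ranges below are an assumption, not part of the invention):

```python
import numpy as np

# Hypothetical index ranges for the 7 local groups, assuming the
# 68-point iBUG landmark convention (the patent does not fix a scheme).
GROUPS = {
    "cheek":         slice(0, 17),   # face contour / jawline
    "left_eyebrow":  slice(17, 22),
    "right_eyebrow": slice(22, 27),
    "nose":          slice(27, 36),
    "left_eye":      slice(36, 42),
    "right_eye":     slice(42, 48),
    "mouth":         slice(48, 68),
}

def split_into_groups(landmarks):
    """Split a (68, 2) landmark array into the 7 local key-point groups,
    each flattened to a one-dimensional vector as described in step S2."""
    return {name: landmarks[idx].reshape(-1) for name, idx in GROUPS.items()}
```

Stacking one such group vector per training sample then yields the per-group data set matrices used for the local PCA.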
Further, the face image sample data is derived from the Widerface, 300W, ibug, lfpw and CelebA public data sets, and the face key points are marked manually.
The invention also provides a face key point detection system based on local principal component analysis, which comprises:
the acquisition module is used for acquiring a large amount of face image sample data and marking face key points;
the principal component analysis module is used for dividing the key points of the face into a plurality of local key points and respectively processing each local key point by adopting principal component analysis to obtain principal component characteristics of each local key point;
the combination coefficient generation module is used for calculating the combination coefficient of each key point of each human face image under the principal component characteristics;
the training module is used for constructing a regression model, training the model through the combination coefficient and generating a combination coefficient regression model;
the prediction module is used for inputting the face image to be detected into the combination coefficient regression model and predicting to obtain the combination coefficient;
and the reconstruction module is used for restoring the key points of the human face based on the combination coefficients obtained by prediction and the principal component characteristics.
Further, the principal component analysis module includes:
combining the local key points of the gray-scale image of each face image sample into a one-dimensional vector (by row vector or column vector), and denoting the data of the nth local key-point set of the mth face image sample as $X_{mn}$; if the total number of face image samples is M, the face image sample data set matrix is

$$X_n = \left[ X_{1n}, X_{2n}, \cdots, X_{Mn} \right]^{T}, \quad n = 1, 2, \cdots, N,$$

where N is the number of local key-point groups into which the face key points are divided, and

$$\bar{X}_n = \frac{1}{M} \sum_{m=1}^{M} X_{mn}$$

is the mean of the nth local key-point data over all face image samples;

respectively calculating the covariance matrix corresponding to each face image sample data set matrix:

$$\Sigma_n = \frac{1}{M} \sum_{m=1}^{M} \left( X_{mn} - \bar{X}_n \right) \left( X_{mn} - \bar{X}_n \right)^{T};$$

performing eigenvalue decomposition on each covariance matrix, arranging the eigenvalues from large to small, and taking the eigenvectors corresponding to the largest J eigenvalues as the principal components of the corresponding local key points.
Further, the restoring of the face key points is specifically:

$$Pts_n = \bar{X}_n + \sum_{j=1}^{J} a_{nj} C_{nj},$$

where $Pts_n$ is the restored nth local key-point group, $C_{nj}$ denotes the jth principal component of the nth local key-point group, and $a_{nj}$ denotes the combination coefficient corresponding to $C_{nj}$.
Further, the plurality of local key points includes a left eyebrow, a right eyebrow, a left eye, a right eye, a nose, a mouth, and a cheek.
Further, the face image sample data is derived from the Widerface, 300W, ibug, lfpw and CelebA public data sets, and the face key points are marked manually.
The invention provides a face key point detection method and system based on local principal component analysis. The face key points are divided into a plurality of local key-point groups, and each group is processed separately with principal component analysis to extract features and reduce dimensionality, yielding the principal component features of each group. This avoids the low precision caused by applying PCA directly to all face key points and achieves high-precision modeling of the face key points. In addition, by predicting local principal component coefficients, the method reduces the difficulty of predicting the key points directly, reduces the model size, greatly improves prediction speed while maintaining stability, and is particularly suitable for dense face key point detection on mobile terminals.
Drawings
Fig. 1 is a flowchart of a face key point detection method based on local principal component analysis according to an embodiment;
fig. 2 is a structural diagram of a face key point detection system based on local principal component analysis according to a second embodiment.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
The invention is further described with reference to the following drawings and specific examples, which are not intended to be limiting.
Example one
As shown in fig. 1, this embodiment provides a method for detecting a key point of a human face based on local principal component analysis, including:
s1, collecting a large amount of face image sample data, and marking face key points;
the detection of the key points of the human face comprises the detection and the positioning of the key points of the human face or the alignment of the human face, which means that given human face images, the key area positions of the human face, including eyebrows, eyes, a nose, a mouth, a face contour and the like, are positioned. The method detects key points of the human face based on local principal component analysis, and realizes the detection of the key points through the analysis of principal components and the reconstruction based on the principal components.
Specifically, the method first collects a large amount of face image sample data and marks the face key points. The face image data is derived from public data sets such as Widerface, 300W, ibug, lfpw and CelebA, and the face key points can be marked manually.
S2, dividing the key points of the face into a plurality of local key points, and respectively processing each local key point by adopting principal component analysis to obtain principal component characteristics of each local key point;
a face photo is often composed of a large number of pixels, if each pixel is taken as a 1-dimensional feature, a feature vector with very high dimensionality is obtained, and the calculation is very difficult; and there is typically a correlation between these pixels. Therefore, the invention utilizes PCA technology to reduce the dimension number and remove the correlation among the dimensions of the original features to a certain extent.
Because the number of face key points is large, processing all of them directly requires high data dimensionality and incurs high processing complexity, so the face key points are divided into a plurality of local key-point groups that are processed separately. Specifically, the invention divides the face key points into 7 groups — left eyebrow, right eyebrow, left eye, right eye, nose, mouth and cheek — and performs PCA on each local key-point group separately to obtain the principal components of each group. The method specifically comprises the following steps:
combining the local key points of the gray-scale image of each face image sample into a one-dimensional vector (by row vector or column vector), and denoting the data of the nth local key-point set of the mth face image sample as $X_{mn}$; if the total number of face image samples is M, the face image sample data set matrix is

$$X_n = \left[ X_{1n}, X_{2n}, \cdots, X_{Mn} \right]^{T}, \quad n = 1, 2, \cdots, N,$$

where N is the number of local key-point groups into which the face key points are divided; for example, when the face key points are divided into the 7 groups left eyebrow, right eyebrow, left eye, right eye, nose, mouth and cheek, N = 7.

The mean of the nth local key-point data over all face image samples is

$$\bar{X}_n = \frac{1}{M} \sum_{m=1}^{M} X_{mn}.$$

Respectively calculate the covariance matrix corresponding to each face image sample data set matrix:

$$\Sigma_n = \frac{1}{M} \sum_{m=1}^{M} \left( X_{mn} - \bar{X}_n \right) \left( X_{mn} - \bar{X}_n \right)^{T}.$$

Perform eigenvalue decomposition on each covariance matrix, arrange the eigenvalues from large to small, and take the eigenvectors corresponding to the largest J eigenvalues as the principal components of the corresponding local key points. After normalization, all the eigenvectors form an eigenvector matrix W.
For example, PCA feature extraction is performed separately on the 7 local key-point groups (left eyebrow, right eyebrow, left eye, right eye, nose, mouth and cheek), taking the first 6 principal components of each, i.e., J = 6.
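As a minimal numpy sketch of the per-group PCA described above (function and variable names are illustrative; the patent does not prescribe an implementation):

```python
import numpy as np

def local_pca(X, J=6):
    """Per-group PCA as in step S2.

    X : (M, D) matrix whose m-th row is X_mn, the flattened n-th
        local key-point vector of the m-th sample.
    Returns the group mean (D,) and the top-J principal components (D, J).
    """
    mean = X.mean(axis=0)
    centered = X - mean
    cov = centered.T @ centered / X.shape[0]   # covariance matrix of the group
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigh: ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:J]      # indices of the J largest
    components = eigvecs[:, order]             # columns = principal components
    return mean, components
```

Running this once per local key-point group yields the per-group means and principal-component matrices used in the later steps.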
S3, calculating a combination coefficient of each key point of each face image under the principal component characteristics;
after the principal components of the key points are obtained, each key point can be represented in reduced dimensionality through the principal component features and the corresponding combination coefficients. Therefore, once the principal component features of the local key points are obtained, the combination coefficients of each face image under the principal component features are calculated from the face image sample data.
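Since each group's principal components form an orthonormal basis, the combination coefficients of step S3 can be computed by projecting the centered local key-point vector onto the components. A minimal sketch under the same illustrative conventions:

```python
import numpy as np

def combination_coefficients(x, mean, components):
    """Project one local key-point vector x (D,) onto the J principal
    components (D, J): a_nj = C_nj . (x - mean)."""
    return components.T @ (x - mean)
```

These per-group coefficient vectors, stacked across the N groups, are the regression targets used to train the model in step S4.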
S4, constructing a regression model, and training the model through the combination coefficient to generate a combination coefficient regression model;
the invention uses a regression model to predict the combination coefficients of the face image to be detected. The specific regression model is not limited; taking a convolutional neural network as an example, the invention adopts ShuffleNet V2, whose input layer receives the face image to be processed and whose output layer outputs the combination coefficients.
The method trains the model based on a large amount of face image sample data and corresponding combination coefficients, calculates the loss function of the convolutional neural network, iterates and updates the convolutional neural network by using the loss function, and generates a combination coefficient regression model.
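The patent names ShuffleNet V2 as one possible backbone but does not limit the regressor. As a stand-in illustration only — a linear least-squares regressor on hypothetical per-image feature vectors, not the CNN of the invention — the training target of step S4 can be sketched as:

```python
import numpy as np

def fit_coefficient_regressor(features, coeffs):
    """Least-squares stand-in for the coefficient regression model of S4.

    features : (M, F) per-image feature vectors (in the patent, the raw
               face image fed to a CNN; here an arbitrary stand-in).
    coeffs   : (M, N*J) stacked combination coefficients per image.
    Returns a weight matrix W (F, N*J) with coeffs ~= features @ W.
    """
    W, *_ = np.linalg.lstsq(features, coeffs, rcond=None)
    return W
```

The point of the design is the same regardless of the regressor: the model predicts a short coefficient vector (N × J values) instead of every key-point coordinate, which shrinks the output space and the model size.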
S5, inputting the face image to be detected into a combination coefficient regression model, and predicting to obtain the combination coefficient;
after the combination coefficient regression model is generated through training, the face image needing face key point detection can be processed, and the PCA combination coefficient corresponding to the face image is output.
And S6, restoring the key points of the human face based on the combination coefficients obtained by prediction and the principal component characteristics.
Since the principal component features are orthogonal to each other, the reconstruction of the keypoint can be completed through the principal component and the corresponding combination coefficient. The invention divides the key points of the face image into a plurality of local key points for principal component analysis, so that the reconstruction of the key points of the face image comprises the reconstruction of each local key point, namely:
$$Pts_n = \bar{X}_n + \sum_{j=1}^{J} a_{nj} C_{nj},$$

where $Pts_n$ is the restored nth local key-point group, $C_{nj}$ denotes the jth principal component of the nth local key-point group, and $a_{nj}$ denotes the combination coefficient corresponding to $C_{nj}$.
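A minimal sketch of this reconstruction step — the group mean plus the coefficient-weighted sum of principal components (names are illustrative):

```python
import numpy as np

def reconstruct_group(mean, components, coeffs):
    """Restore one local key-point group (step S6):
    Pts_n = mean + sum_j a_nj * C_nj, computed as a matrix-vector product."""
    return mean + components @ coeffs
```

Applying this to every local group with the predicted coefficients restores the full set of face key points.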
Since the acquired image is not necessarily a face image, the method may further comprise detecting the face image with a face detector before step S5. The face detector may be any existing face detector, such as an MTCNN face detector. The MTCNN face detector is formed by cascading three convolutional neural networks, P-Net, R-Net and O-Net, and is not described further here.
Example two
As shown in fig. 2, this embodiment proposes a face keypoint detection system based on local principal component analysis, which includes:
the acquisition module is used for acquiring a large amount of face image sample data and marking face key points;
the detection of the key points of the human face comprises the detection and the positioning of the key points of the human face or the alignment of the human face, which means that given human face images, the key area positions of the human face, including eyebrows, eyes, a nose, a mouth, a face contour and the like, are positioned. The method detects key points of the human face based on local principal component analysis, and realizes the detection of the key points through the analysis of principal components and the reconstruction based on the principal components.
Specifically, the system first collects a large amount of face image sample data and marks the face key points. The face image data is derived from public data sets such as Widerface, 300W, ibug, lfpw and CelebA, and the face key points can be marked manually.
The principal component analysis module is used for dividing the key points of the face into a plurality of local key points and respectively processing each local key point by adopting principal component analysis to obtain principal component characteristics of each local key point;
a face photo is often composed of a large number of pixels; if each pixel is taken as a one-dimensional feature, the resulting feature vector has very high dimensionality and is very expensive to process, and there is typically correlation between the pixels. The invention therefore uses PCA to reduce the number of dimensions and, to a certain extent, remove the correlation among the original feature dimensions.
Because the number of face key points is large, processing all of them directly requires high data dimensionality and incurs high processing complexity, so the face key points are divided into a plurality of local key-point groups that are processed separately. Specifically, the invention divides the face key points into 7 groups — left eyebrow, right eyebrow, left eye, right eye, nose, mouth and cheek — and performs PCA on each local key-point group separately to obtain the principal components of each group. The processing specifically comprises the following steps:
combining the local key points of the gray-scale image of each face image sample into a one-dimensional vector (by row vector or column vector), and denoting the data of the nth local key-point set of the mth face image sample as $X_{mn}$; if the total number of face image samples is M, the face image sample data set matrix is

$$X_n = \left[ X_{1n}, X_{2n}, \cdots, X_{Mn} \right]^{T}, \quad n = 1, 2, \cdots, N,$$

where N is the number of local key-point groups into which the face key points are divided; for example, when the face key points are divided into the 7 groups left eyebrow, right eyebrow, left eye, right eye, nose, mouth and cheek, N = 7.

The mean of the nth local key-point data over all face image samples is

$$\bar{X}_n = \frac{1}{M} \sum_{m=1}^{M} X_{mn}.$$

Respectively calculate the covariance matrix corresponding to each face image sample data set matrix:

$$\Sigma_n = \frac{1}{M} \sum_{m=1}^{M} \left( X_{mn} - \bar{X}_n \right) \left( X_{mn} - \bar{X}_n \right)^{T}.$$

Perform eigenvalue decomposition on each covariance matrix, arrange the eigenvalues from large to small, and take the eigenvectors corresponding to the largest J eigenvalues as the principal components of the corresponding local key points. After normalization, all the eigenvectors form an eigenvector matrix W.
For example, PCA feature extraction is performed separately on the 7 local key-point groups (left eyebrow, right eyebrow, left eye, right eye, nose, mouth and cheek), taking the first 6 principal components of each, i.e., J = 6.
The combination coefficient generation module is used for calculating the combination coefficient of each key point of each human face image under the principal component characteristics;
after the principal components of the key points are obtained, each key point can be represented in reduced dimensionality through the principal component features and the corresponding combination coefficients. Therefore, once the principal component features of the local key points are obtained, the combination coefficients of each face image under the principal component features are calculated from the face image sample data.
The training module is used for constructing a regression model, training the model through the combination coefficient and generating a combination coefficient regression model;
the invention uses a regression model to predict the combination coefficients of the face image to be detected. The specific regression model is not limited; taking a convolutional neural network as an example, the invention adopts ShuffleNet V2, whose input layer receives the face image to be processed and whose output layer outputs the combination coefficients.
The method trains the model based on a large amount of face image sample data and corresponding combination coefficients, calculates the loss function of the convolutional neural network, iterates and updates the convolutional neural network by using the loss function, and generates a combination coefficient regression model.
The prediction module is used for inputting the face image to be detected into the combination coefficient regression model and predicting to obtain the combination coefficient;
after the combination coefficient regression model is generated through training, the face image needing face key point detection can be processed, and the PCA combination coefficient corresponding to the face image is output.
And the reconstruction module is used for restoring the key points of the human face based on the combination coefficients obtained by prediction and the principal component characteristics.
Since the principal component features are orthogonal to each other, the reconstruction of the keypoint can be completed through the principal component and the corresponding combination coefficient. The invention divides the key points of the face image into a plurality of local key points for principal component analysis, so that the reconstruction of the key points of the face image comprises the reconstruction of each local key point, namely:
$$Pts_n = \bar{X}_n + \sum_{j=1}^{J} a_{nj} C_{nj},$$

where $Pts_n$ is the restored nth local key-point group, $C_{nj}$ denotes the jth principal component of the nth local key-point group, and $a_{nj}$ denotes the combination coefficient corresponding to $C_{nj}$.
The acquired image is not necessarily a face image, so the face key point detection system may also comprise a face detection module that detects the face image with a face detector and, when the detected image is a face image, inputs it into the prediction module for coefficient prediction. The face detector may be any existing face detector, such as an MTCNN face detector. The MTCNN face detector is formed by cascading three convolutional neural networks, P-Net, R-Net and O-Net, and is not described further here.
In summary, the method and system for face key point detection based on local principal component analysis divide the face key points into a plurality of local key-point groups and process each group separately with principal component analysis, extracting features and reducing dimensionality to obtain the principal component features of each group. This avoids the low precision caused by applying PCA directly to all face key points and achieves high-precision modeling of the face key points. In addition, by predicting local principal component coefficients, the method reduces the difficulty of predicting the key points directly, reduces the model size, greatly improves prediction speed while maintaining stability, and is particularly suitable for dense face key point detection on mobile terminals.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (6)

1. A human face key point detection method based on local principal component analysis is characterized by comprising the following steps:
s1, collecting a large amount of face image sample data, and marking face key points;
s2, dividing the key points of the face into a plurality of local key points, and respectively processing each local key point by adopting principal component analysis to obtain principal component characteristics of each local key point;
s3, calculating a combination coefficient of each key point of each face image under the principal component characteristics;
s4, constructing a regression model, and training the model through the combination coefficient to generate a combination coefficient regression model;
s5, inputting the face image to be detected into the combination coefficient regression model, and predicting to obtain the combination coefficient;
s6, restoring key points of the human face based on the combination coefficients obtained by prediction and the principal component characteristics;
the step S2 includes:
respectively concatenating the local key points of the gray-scale image of each face image sample into a one-dimensional vector, row-wise or column-wise, and recording the data of the nth local key point group of the mth face image sample as X_{mn}; if the total number of face image samples is M, the data matrix of the nth local key point group is

X_n = [X_{1n}, X_{2n}, …, X_{Mn}]^T

where N is the number of local key point groups into which the face key points are divided, n = 1, 2, …, N;
μ_n is the mean of the nth local key point data over all face image samples:

μ_n = (1/M) · Σ_{m=1}^{M} X_{mn};
respectively calculating the covariance matrix corresponding to each local key point data matrix:

S_n = (1/M) · Σ_{m=1}^{M} (X_{mn} − μ_n)(X_{mn} − μ_n)^T;
performing eigenvalue decomposition on each covariance matrix, sorting the eigenvalues from largest to smallest, and taking the eigenvectors corresponding to the J largest eigenvalues as the principal components of the corresponding local key point group;
the step S6 specifically comprises restoring each local key point group as

Pts_n = μ_n + Σ_{j=1}^{J} a_{nj} · C_{nj}

and concatenating Pts_1, …, Pts_N into the full set of face key points Pts, wherein C_{nj} is the jth principal component of the nth local key point group and a_{nj} is the combination coefficient corresponding to C_{nj}.
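The restoration of step S6 can be sketched directly from the formula above, under the (assumed) reading that each group's key points are the group mean plus a weighted sum of its principal components; the data layout below is illustrative:

```python
import numpy as np

def restore_keypoints(means, components, coefficients):
    """Restore Pts by concatenating, for each local group n,
    mu_n + sum_j a_nj * C_nj.

    means:        list of (d_n,) group mean vectors mu_n
    components:   list of (d_n, J) matrices whose columns are C_nj
    coefficients: list of (J,) predicted coefficient vectors a_n
    """
    parts = []
    for mu, C, a in zip(means, components, coefficients):
        parts.append(mu + C @ a)   # mu_n + sum_j a_nj * C_nj
    return np.concatenate(parts)   # all local groups -> full face key points
```

Because each group is restored independently, an error in one group's predicted coefficients stays local and does not distort the other facial regions.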
2. The method of claim 1, wherein the plurality of local keypoints comprises a left eyebrow, a right eyebrow, a left eye, a right eye, a nose, a mouth, and a cheek.
3. The method according to claim 1, wherein the face image sample data is derived from the Widerface, 300W, ibug, lfpw and CelebA public data sets, and the face key points are marked manually.
4. A face key point detection system based on local principal component analysis is characterized by comprising:
the acquisition module is used for acquiring a large amount of face image sample data and marking face key points;
the principal component analysis module is used for dividing the key points of the face into a plurality of local key points and respectively processing each local key point by adopting principal component analysis to obtain principal component characteristics of each local key point;
the combination coefficient generation module is used for calculating the combination coefficient of each key point of each human face image under the principal component characteristics;
the training module is used for constructing a regression model, training the model through the combination coefficient and generating a combination coefficient regression model;
the prediction module is used for inputting the face image to be detected into the combination coefficient regression model and predicting to obtain the combination coefficient of the face image to be detected;
the reconstruction module is used for restoring the key points of the human face based on the combination coefficients obtained by prediction and the principal component characteristics;
the principal component analysis module includes:
respectively concatenating the local key points of the gray-scale image of each face image sample into a one-dimensional vector, row-wise or column-wise, and recording the data of the nth local key point group of the mth face image sample as X_{mn}; if the total number of face image samples is M, the data matrix of the nth local key point group is

X_n = [X_{1n}, X_{2n}, …, X_{Mn}]^T

where N is the number of local key point groups into which the face key points are divided, n = 1, 2, …, N;
μ_n is the mean of the nth local key point data over all face image samples:

μ_n = (1/M) · Σ_{m=1}^{M} X_{mn};
respectively calculating the covariance matrix corresponding to each local key point data matrix:

S_n = (1/M) · Σ_{m=1}^{M} (X_{mn} − μ_n)(X_{mn} − μ_n)^T;
performing eigenvalue decomposition on each covariance matrix, sorting the eigenvalues from largest to smallest, and taking the eigenvectors corresponding to the J largest eigenvalues as the principal components of the corresponding local key point group;
the restoring of the face key points specifically comprises restoring each local key point group as

Pts_n = μ_n + Σ_{j=1}^{J} a_{nj} · C_{nj}

and concatenating Pts_1, …, Pts_N into the full set of face key points Pts, wherein C_{nj} is the jth principal component of the nth local key point group and a_{nj} is the combination coefficient corresponding to C_{nj}.
5. The face keypoint detection system of claim 4, wherein said plurality of local keypoints comprises a left eyebrow, a right eyebrow, a left eye, a right eye, a nose, a mouth, and a cheek.
6. The system according to claim 4, wherein the face image sample data is derived from the Widerface, 300W, ibug, lfpw and CelebA public data sets, and the face key points are marked manually.
CN201911208163.8A 2019-11-30 2019-11-30 Face key point detection method and system based on local principal component analysis Active CN110826534B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911208163.8A CN110826534B (en) 2019-11-30 2019-11-30 Face key point detection method and system based on local principal component analysis


Publications (2)

Publication Number Publication Date
CN110826534A CN110826534A (en) 2020-02-21
CN110826534B true CN110826534B (en) 2022-04-05

Family

ID=69543621


Country Status (1)

Country Link
CN (1) CN110826534B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112633074B (en) * 2020-11-30 2024-01-30 浙江华锐捷技术有限公司 Pedestrian information detection method and device, storage medium and electronic equipment
CN115187822B (en) * 2022-07-28 2023-06-30 广州方硅信息技术有限公司 Face image dataset analysis method, live face image processing method and live face image processing device
CN115601484B (en) * 2022-11-07 2023-03-28 广州趣丸网络科技有限公司 Virtual character face driving method and device, terminal equipment and readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101763503A (en) * 2009-12-30 2010-06-30 中国科学院计算技术研究所 Face recognition method of attitude robust
CN102214299A (en) * 2011-06-21 2011-10-12 电子科技大学 Method for positioning facial features based on improved ASM (Active Shape Model) algorithm
CN103824049A (en) * 2014-02-17 2014-05-28 北京旷视科技有限公司 Cascaded neural network-based face key point detection method
CN108830237A (en) * 2018-06-21 2018-11-16 北京师范大学 A kind of recognition methods of human face expression
CN109376618A (en) * 2018-09-30 2019-02-22 北京旷视科技有限公司 Image processing method, device and electronic equipment
CN109657583A (en) * 2018-12-10 2019-04-19 腾讯科技(深圳)有限公司 Face's critical point detection method, apparatus, computer equipment and storage medium
CN109800635A (en) * 2018-12-11 2019-05-24 天津大学 A kind of limited local facial critical point detection and tracking based on optical flow method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106875422B (en) * 2017-02-06 2022-02-25 腾讯科技(上海)有限公司 Face tracking method and device



Similar Documents

Publication Publication Date Title
CN110443143B (en) Multi-branch convolutional neural network fused remote sensing image scene classification method
CN109886121B (en) Human face key point positioning method for shielding robustness
CN103136516B (en) The face identification method that visible ray and Near Infrared Information merge and system
CN111160269A (en) Face key point detection method and device
CN108537743A (en) A kind of face-image Enhancement Method based on generation confrontation network
CN105138998B (en) Pedestrian based on the adaptive sub-space learning algorithm in visual angle recognition methods and system again
CN110826534B (en) Face key point detection method and system based on local principal component analysis
CN110659589B (en) Pedestrian re-identification method, system and device based on attitude and attention mechanism
WO2016138838A1 (en) Method and device for recognizing lip-reading based on projection extreme learning machine
CN109753891A (en) Football player's orientation calibration method and system based on human body critical point detection
CN111814661A (en) Human behavior identification method based on residual error-recurrent neural network
CN110245621B (en) Face recognition device, image processing method, feature extraction model, and storage medium
Sannidhan et al. Evaluating the performance of face sketch generation using generative adversarial networks
JP2010108494A (en) Method and system for determining characteristic of face within image
CN108229432A (en) Face calibration method and device
CN111612024A (en) Feature extraction method and device, electronic equipment and computer-readable storage medium
Vadlapati et al. Facial recognition using the OpenCV Libraries of Python for the pictures of human faces wearing face masks during the COVID-19 pandemic
CN113378812A (en) Digital dial plate identification method based on Mask R-CNN and CRNN
CN113869282A (en) Face recognition method, hyper-resolution model training method and related equipment
Zhang et al. Low-rank and joint sparse representations for multi-modal recognition
CN110503090B (en) Character detection network training method based on limited attention model, character detection method and character detector
CN114492634B (en) Fine granularity equipment picture classification and identification method and system
CN116091946A (en) Yolov 5-based unmanned aerial vehicle aerial image target detection method
CN111666976A (en) Feature fusion method and device based on attribute information and storage medium
CN103745242A (en) Cross-equipment biometric feature recognition method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 22nd floor, block a, Huaxing Times Square, 478 Wensan Road, Xihu District, Hangzhou, Zhejiang 310000

Applicant after: Hangzhou Xiaoying Innovation Technology Co.,Ltd.

Address before: 16 / F, HANGGANG Metallurgical Science and technology building, 294 Tianmushan Road, Xihu District, Hangzhou City, Zhejiang Province, 310012

Applicant before: HANGZHOU QUWEI SCIENCE & TECHNOLOGY Co.,Ltd.

GR01 Patent grant