CN106951840A - A kind of facial feature points detection method - Google Patents

A kind of facial feature points detection method Download PDF

Info

Publication number
CN106951840A
CN106951840A
Authority
CN
China
Prior art keywords
feature
detection
face
image
human face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710138179.0A
Other languages
Chinese (zh)
Inventor
孙艳丰
赵爽
孔德慧
王少帆
尹宝才
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201710138179.0A priority Critical patent/CN106951840A/en
Publication of CN106951840A publication Critical patent/CN106951840A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a facial feature point detection method that takes a pose detection task as a constraint and takes a novel three-channel GEH (Gray-Edge-Hog) mode image, fused from multiple classes of feature maps, as input. Three-dimensional face pose information has a considerable influence on the detection of global facial feature points, especially when the pose deflection is large. Adding Hog feature information, which reflects the local feature representation and shape of the face image, together with edge image information extracted by the Sobel edge-detection operator, effectively reduces the complexity of contour feature point detection. The present invention therefore fuses the image gray values, edge information and Hog features into a new three-channel GEH image used as input, and uses the auxiliary task of simultaneous three-dimensional pose estimation as constraint information to perform facial feature point detection.

Description

Face characteristic point detection method
Technical Field
The invention belongs to the field of computer vision, and particularly relates to a facial feature point detection method that uses a novel image mode and three-dimensional face pose information as an auxiliary constraint. It has important applications in face recognition, face pose and expression analysis, and face synthesis.
Background
In recent years, with the development of deep learning, Convolutional Neural Networks (CNNs) have achieved a good effect in detecting facial feature points. The CNN takes a face original image as input, and features obtained by using a local receptive field strategy have better expression capability; the weight sharing structure reduces the number of weights so as to reduce the complexity of a network model; meanwhile, downsampling the feature map by using the image local correlation principle effectively reduces the data processing amount while retaining useful structural information, so the CNN is widely applied to feature extraction of face images.
In 2013, Yi Sun et al. proposed a three-level cascaded deep convolutional neural network model for facial feature point detection (Deep Convolutional Network Cascade, DCNN). The first level of the network takes three different regions of the face image (the whole face, the eye-and-nose region, and the nose-and-lip region) as input, trains three convolutional neural networks to predict feature point positions, and fuses the predictions of the three networks to obtain a more stable initial detection result. The second and third levels extract features near each feature point and train one convolutional neural network per feature point to refine the localization, realizing the detection of five feature points: the left eye center, right eye center, nose tip, left mouth corner and right mouth corner. However, the result obtained by this method only roughly calibrates the positions of the eyes, nose and mouth, cannot express facial attributes well, and the network model is overly complex.
In the same year, Erjin Zhou et al. proposed a four-level cascaded convolutional neural network model for detecting 68 facial feature points. The model observes that locating the outer face contour and the inner organ feature points differ in complexity, so the two are detected separately. Multi-point detection can express attributes such as face pose and expression in more detail. However, the operation process is complex, and 10 different networks must be trained separately.
In 2015, Zhanpeng Zhang et al. proposed the Tasks-Constrained Deep Convolutional Network (TCDCN). While detecting 5 facial feature points, the model uses 18 auxiliary tasks related to facial attributes as constraints, enhancing the network's feature extraction capability and helping to improve feature point detection accuracy. However, the method only considers face pose deflection in the horizontal dimension, while in most cases pose deflection in other dimensions also affects feature point detection accuracy.
Disclosure of Invention
The invention provides a facial feature point detection method that takes a novel three-channel GEH (Gray-Edge-Hog) mode image, fused from multiple classes of feature maps, as input and a three-dimensional pose auxiliary task as constraint information. The invention observes that when the face pose changes, the change in the image contour structure is clearly visible, so three-dimensional face pose information has a considerable influence on calibrating the global facial feature points. Meanwhile, because the outer contour feature points of the face are harder to detect than the inner organ feature points, using edge information extracted from the face as an image mode variable reduces the difficulty of detecting the outer contour points; the Hog feature map extracted from the face image, used as another image mode variable, clearly reflects the image contour structure while more effectively highlighting the regional features of each facial organ. The invention therefore provides a convolutional neural network structure model that takes a novel GEH mode image, fused from the image gray values, edge information and Hog features, as input, uses three-dimensional face pose information as an auxiliary constraint, and jointly trains facial feature points and face pose, accurately locating 68 facial feature points.
In order to achieve the purpose, the invention adopts the following technical scheme:
a face characteristic point detection method comprises the following steps:
Step 1: perform face detection, localization and cropping, and three-channel multi-feature map fusion on the original image to obtain a three-channel GEH mode image Picture_GEH;
Step 2: take the three-channel GEH mode image obtained by fusing the three feature maps as the input of a convolutional neural network and extract facial features with the network; the extracted features serve two tasks, detection of the facial feature points and of the three-dimensional pose, and a dual-task loss function is designed from the least-squares function corresponding to the linear regression problem;
Step 3: train the network on the dual-task loss function with the gradient back-propagation algorithm to learn the facial feature point detection weights and the pose detection weights; in testing, the same facial feature extraction network is used, thereby realizing facial feature point detection and three-dimensional face pose detection.
Preferably, the network feature extraction is completed by 3 alternating convolutional and pooling layers followed by 2 fully connected layers;
first, the three-channel GEH mode image Picture_GEH is taken as the input x^{l-1} of the first convolution layer; the output feature map x_j^l is computed as:
x_j^l = f(b_j + Σ_i w_ij * x_i^{l-1})
where f denotes the convolution operation, l denotes the current network layer, i indexes the input feature maps, j indexes the output feature maps, w_ij is the convolution kernel parameter to be solved and b_j is a bias parameter; w_ij and b_j are obtained by random normal initialization at the beginning of the experiment;
then, the features obtained in the convolution stage are fed into the function corresponding to the linear regression problem, and the designed dual-task loss function is:
argmin_{W_f, W_Yaw, W_Pitch, W_Roll} { Σ_{i=1}^N ||l_i^f - (W_f)^T x_i||² + λ_Yaw Σ_{i=1}^N ||l_i^Yaw - (W_Yaw)^T x_i||² + λ_Pitch Σ_{i=1}^N ||l_i^Pitch - (W_Pitch)^T x_i||² + λ_Roll Σ_{i=1}^N ||l_i^Roll - (W_Roll)^T x_i||² }
where N denotes the number of training images, l_i^f denotes the label value of the feature point detection task for the i-th image, l_i^Yaw, l_i^Pitch, l_i^Roll denote the label values corresponding to three-dimensional face pose detection, x_i denotes the features of the i-th image extracted by the convolutional neural network, W_f denotes the weights of the feature point detection task, W_yaw, W_pitch, W_roll denote the weights corresponding to the three face pose detection tasks, and λ_Yaw, λ_Pitch, λ_Roll denote the loss function weights;
the network is trained with the back-propagation algorithm to obtain the facial feature point detection weights W_f and the pose detection weights W_yaw, W_pitch, W_roll; in the testing process the same facial feature extraction network is used, finally yielding the facial feature point detection result (W_f)^T x_i and the three-dimensional pose detection results (W_Yaw)^T x_i, (W_Pitch)^T x_i, (W_Roll)^T x_i.
Preferably, in step 1, gray-scale processing is performed on the face sub-image obtained after face detection, localization and cropping, yielding a gray feature map G; Hog features are extracted from the face sub-image, yielding a Hog feature map H; finally, edge features are extracted, yielding an edge feature map E; with the RGB (Red-Green-Blue) color space as the substrate, the feature maps G (Gray), E (Edge) and H (Hog) are mapped onto the RGB rectangular-coordinate color space to generate the novel GEH mode image Picture_GEH; the generation formula is:
Picture_GEH(R, G, B) = (G, E, H)
where the gray values of the Gray feature map G are mapped to the R (red) channel of RGB, the feature values of the Edge feature map E are mapped to the G (green) channel, and the feature values of the Hog feature map H are mapped to the B (blue) channel.
Drawings
FIG. 1: the three-channel characteristic diagram generation process schematic diagram is disclosed;
FIG. 2 a: an original face image;
FIG. 2 b: the human face image is fused by multiple characteristic images;
FIG. 3: GEH-double-task convolution neural network structure model.
Detailed Description
The invention provides a facial feature point detection method that takes the pose detection task as a constraint and takes a novel three-channel GEH (Gray-Edge-Hog) mode image, fused from multiple classes of feature maps, as input. Three-dimensional face pose information has a considerable influence on the detection of global facial feature points, especially when the pose deflection is large; by adding Hog feature information, which reflects the local feature representation and shape of the face image, together with edge image information extracted by the Sobel edge-detection operator, the invention effectively reduces the complexity of contour feature point detection.
The basic data used by the invention come from the 300-W facial feature point detection competition platform, which includes four data sets: LFPW, AFW, HELEN and IBUG. The images in each data set are labeled with 68 points, covering the outer contour and the inner facial organs (eyebrows, eyes, nose, mouth). The corresponding three-dimensional pose labels are generated with the IntraFace software developed by the Human Sensing Laboratory, computed from the feature point labels provided with the 300-W images, and represent the Yaw, Pitch and Roll dimensions respectively.
The CNN structure for facial feature point detection comprises two tasks: a 68-point facial feature point detection task and a three-dimensional pose information detection task. Exploiting the influence of three-dimensional pose information on global facial feature point detection, the CNN facial feature point detection network extracts joint features that both characterize the feature point positions and reflect the face pose orientation, finally realizing facial feature point detection.
1. Image pre-processing
Proper image preprocessing removes environmental influences such as weather and illumination in the original image, makes the edge and color features of the image more prominent, and facilitates feature extraction by the convolutional neural network. The invention first performs face detection, localization and cropping on the original image, followed by normalization; the cropped image is then subjected to graying, Hog feature extraction and visualization, and Sobel edge feature extraction; the generated feature maps are fused into the novel GEH image mode, and the resulting three-channel image is used as input.
1.1 face detection positioning and clipping
This step removes the interference of image background, hair, clothing and similar information with the feature point detection task. Based on the face detection result, the detected face bounding box is expanded by a certain proportion; the expanded face region is then cropped and scale-normalized.
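The cropping step above can be sketched in a few lines of NumPy. The margin ratio and the 224 × 224 output size are illustrative assumptions: the patent only says the box is expanded "in a certain proportion", and states a 224 × 224 network input later.

```python
import numpy as np

def crop_face(image, box, margin=0.2, out_size=224):
    """Crop a face region with a proportional margin and scale-normalize.

    `margin` and `out_size` are illustrative values, not taken from the
    patent text. `box` is (x0, y0, x1, y1) from a face detector.
    """
    h, w = image.shape[:2]
    x0, y0, x1, y1 = box
    dx = int((x1 - x0) * margin)
    dy = int((y1 - y0) * margin)
    # Expand the box and clip to the image bounds.
    x0, y0 = max(0, x0 - dx), max(0, y0 - dy)
    x1, y1 = min(w, x1 + dx), min(h, y1 + dy)
    face = image[y0:y1, x0:x1]
    # Nearest-neighbour resize to the normalized input size.
    ys = (np.arange(out_size) * face.shape[0] / out_size).astype(int)
    xs = (np.arange(out_size) * face.shape[1] / out_size).astype(int)
    return face[ys][:, xs]

img = np.zeros((480, 640, 3), dtype=np.uint8)
sub = crop_face(img, (200, 100, 400, 300))
print(sub.shape)  # (224, 224, 3)
```

A real pipeline would use a detector (e.g. a cascade or CNN detector) to produce `box`; only the expand-and-crop logic is shown here.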
1.2 three-channel multiple feature map fusion
First, gray-scale processing is performed on the face sub-image obtained in step 1.1, yielding a gray feature map G. Next, Hog features are extracted from the face sub-image; because the dimensions of the resulting feature map differ from those of the face sub-image, the Inverting Visual Features method proposed by Carl Vondrick in 2012 is used for feature visualization, finally yielding a Hog feature map H of the same size as the face sub-image. Finally, edge features are extracted: a 5th-order Sobel operator performs edge extraction on the scale-normalized RGB face sub-image, generating an edge feature map E.
Because the three feature maps reflect information about different attributes of the face image and are mutually independent, the joint action of the three independent variables forms a new image mode. The invention therefore uses the RGB (Red-Green-Blue) color space as a substrate and maps the G (Gray), E (Edge) and H (Hog) feature map variables onto the RGB rectangular-coordinate color space, generating the novel GEH mode image Picture_GEH. The generation formula is:
Picture_GEH(R, G, B) = (G, E, H)
where the gray values of the Gray feature map G are mapped to the R (red) channel of RGB, the feature values of the Edge feature map E are mapped to the G (green) channel, and the feature values of the Hog feature map H are mapped to the B (blue) channel.
The GEH three channel image generation process is shown in fig. 1.
The GEH three-channel image effect is shown in fig. 2a and 2 b.
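A minimal NumPy sketch of the fusion: gray values go to the R channel, Sobel gradient magnitude to the G channel, and a Hog map to the B channel. The 3 × 3 Sobel kernel and the externally supplied `hog_map` are simplifications; the patent uses a 5th-order Sobel operator and a Hog feature map obtained through a separate visualization step.

```python
import numpy as np

def sobel_edges(gray):
    """3x3 Sobel gradient magnitude (the patent uses a 5th-order Sobel
    kernel; a 3x3 kernel keeps this sketch short)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    out = np.zeros_like(gray, dtype=float)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = gray[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = np.hypot((win * kx).sum(), (win * ky).sum())
    return out

def fuse_geh(rgb, hog_map):
    """Stack Gray -> R, Edge -> G, Hog -> B into one 3-channel image.
    `hog_map` stands in for the visualized Hog feature map, whose
    inverse-visualization step is not reproduced here."""
    gray = rgb.mean(axis=2)
    edge = sobel_edges(gray)
    return np.stack([gray, edge, hog_map], axis=2)

rgb = np.random.rand(32, 32, 3)
geh = fuse_geh(rgb, np.random.rand(32, 32))
print(geh.shape)  # (32, 32, 3)
```

Because the three channels remain independent, each feature map can be preprocessed and normalized on its own before the stack.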
2. Face characteristic point detection and three-dimensional posture detection double-task network model
The CNN network structure jointly considers a face image characteristic point detection task and a three-dimensional posture detection task, wherein the face characteristic point detection task is a main task, the three-dimensional posture detection task is an auxiliary task, and the network structure is shown in figure 3.
2.1 face feature extraction
In the network, the three-channel GEH mode image Picture_GEH fused from the three feature maps serves as the input of the convolutional neural network for facial feature extraction. Both the feature point detection task and the pose detection task use the least-squares function corresponding to the linear regression problem as the loss function; the network parameters are trained with the gradient back-propagation algorithm, finally realizing facial feature point detection and three-dimensional face pose detection.
In the invention, network feature extraction is completed by 3 alternating convolutional and pooling layers followed by 2 fully connected layers. The convolutional layers perform convolution through local receptive fields to extract visual features. To preserve the image characteristics more completely after converting the original image to the GEH mode, the invention uses an input size close to the original face sub-image: the input of the first convolution layer (i.e., Picture_GEH) is a 224 × 224 image, about 2.3 times the input size of DCNN and TCDCN, ensuring image completeness and feature effectiveness. The convolution kernel sizes are 7 × 7, 4 × 4 and 3 × 3 respectively, and the output feature map x_j^l is computed by formula (2):
x_j^l = f(b_j + Σ_i w_ij * x_i^{l-1})   (2)
where f denotes the convolution operation, l denotes the current network layer, i indexes the input feature maps, j indexes the output feature maps, w_ij is the convolution kernel parameter to be solved and b_j is a bias parameter; w_ij and b_j are obtained by random normal initialization at the beginning of the experiment.
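Formula (2) can be implemented directly. The sketch below uses a "valid" sliding window (cross-correlation, as is conventional in CNN frameworks) and tanh as a stand-in activation, since the patent does not name f explicitly; all sizes are illustrative.

```python
import numpy as np

def conv_layer(inputs, kernels, biases, act=np.tanh):
    """One layer following x_j = f(b_j + sum_i w_ij * x_i).

    inputs  : list of 2-D input feature maps x_i
    kernels : kernels[i][j] is the k x k kernel w_ij
    biases  : biases[j] is the scalar bias b_j
    act     : stand-in for f (the patent does not specify one)
    """
    n_out = len(biases)
    k = kernels[0][0].shape[0]
    h, w = inputs[0].shape
    oh, ow = h - k + 1, w - k + 1          # 'valid' output size
    outs = []
    for j in range(n_out):
        # Start from the bias, then accumulate over all input maps.
        acc = np.full((oh, ow), biases[j], dtype=float)
        for i, x in enumerate(inputs):
            for y in range(oh):
                for xx in range(ow):
                    acc[y, xx] += (x[y:y + k, xx:xx + k] * kernels[i][j]).sum()
        outs.append(act(acc))
    return outs

x = [np.random.rand(10, 10)]                               # one input map
w = [[np.random.rand(3, 3) * 0.1, np.random.rand(3, 3) * 0.1]]  # two kernels
b = [0.0, 0.1]
maps = conv_layer(x, w, b)
print(len(maps), maps[0].shape)  # 2 (8, 8)
```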
The pooling layers use max pooling. Matching the characteristics of the GEH mode input size, the pooling range of the first and third pooling layers is set to 3 × 3 with stride 3, which preserves the completeness of the extracted features while effectively reducing the feature dimensionality and the complexity of network training; the pooling range of the second layer is 2 × 2 with stride 2.
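The pooling configuration described above (a 3 × 3 range with stride 3, or 2 × 2 with stride 2) amounts to non-overlapping max pooling, sketched here:

```python
import numpy as np

def max_pool(fmap, size, stride):
    """Non-overlapping max pooling, e.g. size=3, stride=3 for the
    first and third pooling layers described above."""
    h = (fmap.shape[0] - size) // stride + 1
    w = (fmap.shape[1] - size) // stride + 1
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = fmap[y * stride:y * stride + size,
                             x * stride:x * stride + size].max()
    return out

fm = np.arange(36.0).reshape(6, 6)
print(max_pool(fm, 3, 3))  # [[14. 17.] [32. 35.]]
```

With stride equal to the window size, each input element contributes to exactly one output cell, which is what keeps the extracted features complete while reducing dimensionality.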
2.2 design of the Dual-tasking Objective function
The features obtained in the convolution stage are then fed into the function corresponding to the linear regression problem. Both the facial feature point detection problem and the three-dimensional face pose detection problem are linear regression problems; a least-squares function is adopted as the loss function, with the expression shown in formula (3):
f_loss = ||l - W^T x_i||²   (3)
where l denotes the label of the regression problem, x_i denotes the features extracted by the convolutional neural network, and W denotes the weight coefficients corresponding to the linear regression problem.
The invention takes facial feature point detection as the main task. Three-dimensional pose detection is an auxiliary task that helps the feature point detection task extract features better reflecting the three-dimensional pose and accurately locate face images with large pose variation. The three-dimensional pose coordinates are denoted Pitch, Yaw and Roll; taking a right-handed Cartesian coordinate system as an example, Pitch denotes rotation about the X axis (pitch angle), Yaw denotes rotation about the Y axis (yaw angle), and Roll denotes rotation about the Z axis (roll angle). Statistics of the experimental data show that the variation amplitude of the Yaw pose is 5-6 times that of the Pitch and Roll poses, so Yaw has a larger influence than Pitch and Roll; the influence of the three-dimensional pose on the main face detection task is controlled through the loss function weights. The dual-task loss function designed by the invention is shown in formula (4):
argmin_{W_f, W_Yaw, W_Pitch, W_Roll} { Σ_{i=1}^N ||l_i^f - (W_f)^T x_i||² + λ_Yaw Σ_{i=1}^N ||l_i^Yaw - (W_Yaw)^T x_i||² + λ_Pitch Σ_{i=1}^N ||l_i^Pitch - (W_Pitch)^T x_i||² + λ_Roll Σ_{i=1}^N ||l_i^Roll - (W_Roll)^T x_i||² }   (4)
where N denotes the number of training images, l_i^f denotes the label value of the feature point detection task for the i-th image (dimension 136), l_i^Yaw, l_i^Pitch, l_i^Roll denote the label values corresponding to three-dimensional face pose detection (each of dimension 1), x_i denotes the features of the i-th image extracted by the convolutional neural network, W_f denotes the weights of the feature point detection task, and W_yaw, W_pitch, W_roll denote the weights corresponding to the three face pose detection tasks. λ_Yaw, λ_Pitch, λ_Roll are the loss weights, set to 0.3, 0.1 and 0.1 respectively.
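Formula (4) can be written out directly in NumPy. The shapes follow the dimensions stated above (136-dimensional landmark labels for 68 points, scalar pose labels) and the λ weights 0.3, 0.1, 0.1; the variable names are illustrative.

```python
import numpy as np

def dual_task_loss(X, W_f, pose_W, labels_f, pose_labels,
                   lambdas=(0.3, 0.1, 0.1)):
    """Dual-task loss of formula (4): landmark term plus weighted
    Yaw/Pitch/Roll terms.

    X           : (N, d) features extracted by the CNN
    W_f         : (d, 136) landmark regression weights (68 points)
    pose_W      : dict name -> (d,) weight vector for Yaw/Pitch/Roll
    labels_f    : (N, 136) landmark labels
    pose_labels : dict name -> (N,) pose labels
    """
    loss = np.sum((labels_f - X @ W_f) ** 2)
    for lam, name in zip(lambdas, ("Yaw", "Pitch", "Roll")):
        loss += lam * np.sum((pose_labels[name] - X @ pose_W[name]) ** 2)
    return loss

# Sanity check: weights that fit the labels exactly give zero loss.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 5))
W_f = rng.normal(size=(5, 136))
pose_W = {n: rng.normal(size=5) for n in ("Yaw", "Pitch", "Roll")}
labels_f = X @ W_f
pose_labels = {n: X @ pose_W[n] for n in pose_W}
print(dual_task_loss(X, W_f, pose_W, labels_f, pose_labels))  # 0.0
```

In training, the gradients of this objective with respect to W_f and the pose weights are what back-propagation pushes into the shared feature extraction layers.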
2.3 network learning
The invention trains the network with the back-propagation algorithm, finally learning the facial feature point detection weights W_f and the pose detection weights W_yaw, W_pitch, W_roll. In the testing process, the same facial feature extraction network is used, finally yielding the facial feature point detection result (W_f)^T x_i and the three-dimensional pose detection results (W_Yaw)^T x_i, (W_Pitch)^T x_i, (W_Roll)^T x_i.
The method has been experimentally verified and achieves a clear effect. Performance is measured with the mean estimation error index for facial feature point detection published at CVPR 2013 by Yi Sun et al., which reflects the accuracy and reliability of a feature point localization algorithm.
The mean estimation error is computed as:
err = sqrt((x - x')² + (y - y')²) / l
where (x, y) and (x', y') denote the ground-truth and estimated coordinates of a feature point, respectively, and l denotes the estimation error normalization factor. If the estimation error exceeds 10%, the estimation is considered a failure.
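A NumPy sketch of this metric follows. The concrete choice of the normalization factor l (e.g. inter-ocular distance) is an assumption; the patent only calls it a normalization factor.

```python
import numpy as np

def mean_estimation_error(pred, truth, norm):
    """Mean estimation error: Euclidean distance between predicted and
    true points divided by the normalization factor l; errors above
    10% count as detection failures.

    pred, truth : (n_points, 2) arrays of (x, y) coordinates
    norm        : scalar normalization factor l
    """
    err = np.linalg.norm(pred - truth, axis=1) / norm
    return err.mean(), (err > 0.10).mean()   # mean error, failure rate

pred = np.array([[10.0, 10.0], [20.0, 24.0]])
truth = np.array([[10.0, 10.0], [20.0, 19.0]])
mean_err, fail = mean_estimation_error(pred, truth, norm=40.0)
print(mean_err, fail)  # 0.0625 0.5
```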
The experiments use the LFPW database from the 300-W challenge platform, a multi-pose, multi-view face database. Most facial feature point detection methods are validated on this data set because its images cover various poses, expressions, illumination conditions, etc. The data set contains 811 training images and 224 test images.
The first set of experiments compares facial feature point detection performance among a dual-task convolutional neural network taking the multi-feature fusion image as input (3feature-D-CNN), a dual-task convolutional neural network taking the original image as input (D-CNN), and a traditional convolutional neural network taking the original image as input (CNN). The three networks share the same convolutional, pooling and fully connected layer structure; they differ in the input image type, the output loss function and the output dimension. The experimental comparison results are as follows:
table 1: comparison of three convolutional neural network models
Each column of Table 1 represents the test result of a different network model on the LFPW data; lower values indicate better facial feature point detection. The mean estimation error of the proposed 3feature-D-CNN network is reduced by 14.02% and 11.6% relative to the original CNN network and the D-CNN network, respectively. This shows that the proposed facial feature point detection method, which takes the pose detection task as a constraint and the novel three-channel GEH (Gray-Edge-Hog) mode image fused from multiple classes of feature maps as input, is effective.
The second set of experiments compares the effect of the three networks on the detection of the outer contour points. The results are shown in Table 2:
table 2: comparison of detection results of external contours of human faces detected by three network models
Each column of Table 2 represents the contour point detection result of a different network model on the LFPW data; lower values indicate better facial feature point detection. In outer contour detection, the mean error of the proposed 3feature-D-CNN network is reduced by 21.52% and 4.95% relative to the original CNN network and the D-CNN network, respectively, verifying that the invention improves the detection of outer face contour points to a certain extent.

Claims (3)

1. A facial feature point detection method, characterized by comprising the following steps:
Step 1: perform face detection, localization and cropping, and three-channel multi-feature map fusion on an original face image to obtain a three-channel GEH mode image Picture_GEH;
Step 2: take the three-channel GEH mode image obtained by fusing the three feature maps as the input of a convolutional neural network and extract facial features with the network; the extracted features serve two tasks, detection of the facial feature points and of the three-dimensional pose, and a dual-task loss function is designed from the least-squares function corresponding to the linear regression problem;
Step 3: train the network on the dual-task loss function with the gradient back-propagation algorithm to learn the facial feature point detection weights and the pose detection weights; in testing, the same facial feature extraction network is used, thereby realizing facial feature point detection and three-dimensional face pose detection.
2. The method of claim 1, characterized in that the network feature extraction is completed by 3 alternating convolutional and pooling layers followed by 2 fully connected layers;
first, the three-channel GEH mode image Picture_GEH is taken as the input x^{l-1} of the first convolution layer; the output feature map x_j^l is computed as:
x_j^l = f(b_j + Σ_i w_ij * x_i^{l-1})
where f denotes the convolution operation, l denotes the current network layer, i indexes the input feature maps, j indexes the output feature maps, w_ij is the convolution kernel parameter to be solved and b_j is a bias parameter; w_ij and b_j are obtained by random normal initialization at the beginning of the experiment;
then, the features obtained in the convolution stage are fed into the function corresponding to the linear regression problem, and the designed dual-task loss function is:
argmin_{W_f, W_Yaw, W_Pitch, W_Roll} { Σ_{i=1}^N ||l_i^f - (W_f)^T x_i||² + λ_Yaw Σ_{i=1}^N ||l_i^Yaw - (W_Yaw)^T x_i||² + λ_Pitch Σ_{i=1}^N ||l_i^Pitch - (W_Pitch)^T x_i||² + λ_Roll Σ_{i=1}^N ||l_i^Roll - (W_Roll)^T x_i||² }
where N denotes the number of training images, l_i^f denotes the label value of the feature point detection task for the i-th image, l_i^Yaw, l_i^Pitch, l_i^Roll denote the label values corresponding to three-dimensional face pose detection, x_i denotes the features of the i-th image extracted by the convolutional neural network, W_f denotes the weights of the feature point detection task, W_yaw, W_pitch, W_roll denote the weights corresponding to the three face pose detection tasks, and λ_Yaw, λ_Pitch, λ_Roll denote the loss function weights;
the network is trained with the back-propagation algorithm to obtain the facial feature point detection weights W_f and the pose detection weights W_yaw, W_pitch, W_roll; in the testing process the same facial feature extraction network is used, finally yielding the facial feature point detection result (W_f)^T x_i and the three-dimensional pose detection results (W_Yaw)^T x_i, (W_Pitch)^T x_i, (W_Roll)^T x_i.
3. The method for detecting human face feature points as claimed in claim 1, wherein in step 1, the human face sub-image after human face detection positioning and clipping is processed with gray scale to obtain a gray scale feature image G; extracting the Hog features from the face subgraph to obtain a Hog feature graph H; finally, extracting edge features to obtain an edge feature graph E; respectively mapping the feature map variables of the feature map G (Gray), the feature map E (edge) and the feature map H (hog) onto the color space of an RGB rectangular coordinate system by using an RGB (Red-Green-Blue) color space as a substrate to generate a novel GEH mode image PictureGEHThe generation formula is as follows:
wherein the gray values of the gray feature map G are mapped to the R (red) channel of RGB, the feature values of the HOG feature map H are mapped to the B (blue) channel, and the feature values of the edge feature map E are mapped to the G (green) channel.
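The channel assignment in claim 3 amounts to stacking the three single-channel maps into one three-channel image. A minimal sketch (assuming the gray, edge, and HOG maps have already been computed at the same resolution and scaled to 0–255; the function name is an assumption):

```python
import numpy as np

def make_geh_image(gray, edge, hog):
    """Map G (gray) -> R channel, E (edge) -> G channel, H (HOG) -> B
    channel, producing the GEH-mode image described in claim 3."""
    assert gray.shape == edge.shape == hog.shape
    # Channel order R, G, B: red carries gray, green carries edge,
    # blue carries HOG, per the claim's mapping.
    geh = np.stack([gray, edge, hog], axis=-1)
    return geh.astype(np.uint8)
```

The resulting (H, W, 3) array can then be fed to the convolutional network in place of an ordinary RGB photograph.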
CN201710138179.0A 2017-03-09 2017-03-09 A kind of facial feature points detection method Pending CN106951840A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710138179.0A CN106951840A (en) 2017-03-09 2017-03-09 A kind of facial feature points detection method

Publications (1)

Publication Number Publication Date
CN106951840A true CN106951840A (en) 2017-07-14

Family

ID=59466826

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710138179.0A Pending CN106951840A (en) 2017-03-09 2017-03-09 A kind of facial feature points detection method

Country Status (1)

Country Link
CN (1) CN106951840A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016026063A1 (en) * 2014-08-21 2016-02-25 Xiaoou Tang A method and a system for facial landmark detection based on multi-task
CN105469041A (en) * 2015-11-19 2016-04-06 上海交通大学 Facial point detection system based on multi-task regularization and layer-by-layer supervision neural network

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
WANLI OUYANG ET AL.: "Joint Deep Learning for Pedestrian Detection", ICCV 2013 *
XIANGXIN ZHU ET AL.: "Face Detection, Pose Estimation, and Landmark Localization in the Wild", Computer Vision & Pattern Recognition *
ZHANPENG ZHANG ET AL.: "Facial Landmark Detection by Deep Multi-task Learning", European Conference on Computer Vision *
CUI SHAOQI: "Research on Face Pose Estimation Algorithms", China Master's Theses Full-text Database, Information Science & Technology *

Cited By (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107527018A (en) * 2017-07-26 2017-12-29 湖州师范学院 Momentum method for detecting human face based on BP neural network
CN107463903B (en) * 2017-08-08 2020-09-04 北京小米移动软件有限公司 Face key point positioning method and device
CN107463903A (en) * 2017-08-08 2017-12-12 北京小米移动软件有限公司 Face key independent positioning method and device
CN107704813A (en) * 2017-09-19 2018-02-16 北京飞搜科技有限公司 A kind of face vivo identification method and system
CN107704813B (en) * 2017-09-19 2020-11-17 北京一维大成科技有限公司 Face living body identification method and system
CN107808129A (en) * 2017-10-17 2018-03-16 南京理工大学 A kind of facial multi-characteristic points localization method based on single convolutional neural networks
CN107808129B (en) * 2017-10-17 2021-04-16 南京理工大学 Face multi-feature point positioning method based on single convolutional neural network
CN109934058A (en) * 2017-12-15 2019-06-25 北京市商汤科技开发有限公司 Face image processing process, device, electronic equipment, storage medium and program
CN108269250A (en) * 2017-12-27 2018-07-10 武汉烽火众智数字技术有限责任公司 Method and apparatus based on convolutional neural networks assessment quality of human face image
CN110047101A (en) * 2018-01-15 2019-07-23 北京三星通信技术研究有限公司 Gestures of object estimation method, the method for obtaining dense depth image, related device
CN110060296A (en) * 2018-01-18 2019-07-26 北京三星通信技术研究有限公司 Estimate method, electronic equipment and the method and apparatus for showing virtual objects of posture
CN108764248A (en) * 2018-04-18 2018-11-06 广州视源电子科技股份有限公司 Image feature point extraction method and device
CN108596093A (en) * 2018-04-24 2018-09-28 北京市商汤科技开发有限公司 The localization method and device of human face characteristic point
US11314965B2 (en) 2018-04-24 2022-04-26 Beijing Sensetime Technology Development Co., Ltd. Method and apparatus for positioning face feature points
WO2019205605A1 (en) * 2018-04-24 2019-10-31 北京市商汤科技开发有限公司 Facial feature point location method and device
CN108596093B (en) * 2018-04-24 2021-12-03 北京市商汤科技开发有限公司 Method and device for positioning human face characteristic points
CN108734139B (en) * 2018-05-24 2021-12-14 辽宁工程技术大学 Correlation filtering tracking method based on feature fusion and SVD self-adaptive model updating
CN108734139A (en) * 2018-05-24 2018-11-02 辽宁工程技术大学 Feature based merges and the newer correlation filtering tracking of SVD adaptive models
CN110827394B (en) * 2018-08-10 2024-04-02 宏达国际电子股份有限公司 Facial expression construction method, device and non-transitory computer readable recording medium
CN110827394A (en) * 2018-08-10 2020-02-21 宏达国际电子股份有限公司 Facial expression construction method and device and non-transitory computer readable recording medium
WO2020037898A1 (en) * 2018-08-23 2020-02-27 平安科技(深圳)有限公司 Face feature point detection method and apparatus, computer device, and storage medium
CN109359541A (en) * 2018-09-17 2019-02-19 南京邮电大学 A kind of sketch face identification method based on depth migration study
WO2020093884A1 (en) * 2018-11-08 2020-05-14 北京灵汐科技有限公司 Attribute detection method and device
CN109753910A (en) * 2018-12-27 2019-05-14 北京字节跳动网络技术有限公司 Crucial point extracting method, the training method of model, device, medium and equipment
CN109753910B (en) * 2018-12-27 2020-02-21 北京字节跳动网络技术有限公司 Key point extraction method, model training method, device, medium and equipment
CN109766866A (en) * 2019-01-22 2019-05-17 杭州美戴科技有限公司 A kind of human face characteristic point real-time detection method and detection system based on three-dimensional reconstruction
CN109766866B (en) * 2019-01-22 2020-09-18 杭州美戴科技有限公司 Face characteristic point real-time detection method and detection system based on three-dimensional reconstruction
CN110033505A (en) * 2019-04-16 2019-07-19 西安电子科技大学 A kind of human action capture based on deep learning and virtual animation producing method
CN110059707A (en) * 2019-04-25 2019-07-26 北京小米移动软件有限公司 Optimization method, device and the equipment of image characteristic point
CN111507244A (en) * 2020-04-15 2020-08-07 阳光保险集团股份有限公司 BMI detection method and device and electronic equipment
CN111507244B (en) * 2020-04-15 2023-12-08 阳光保险集团股份有限公司 BMI detection method and device and electronic equipment
CN111611917A (en) * 2020-05-20 2020-09-01 北京华捷艾米科技有限公司 Model training method, feature point detection device, feature point detection equipment and storage medium
CN112417947B (en) * 2020-09-17 2021-10-26 重庆紫光华山智安科技有限公司 Method and device for optimizing key point detection model and detecting face key points
CN112417947A (en) * 2020-09-17 2021-02-26 重庆紫光华山智安科技有限公司 Method and device for optimizing key point detection model and detecting face key points
CN112568992A (en) * 2020-12-04 2021-03-30 上海交通大学医学院附属第九人民医院 Eyelid parameter measuring method, device, equipment and medium based on 3D scanning
CN113963183A (en) * 2021-12-22 2022-01-21 北京的卢深视科技有限公司 Model training method, face recognition method, electronic device and storage medium
CN115761411A (en) * 2022-11-24 2023-03-07 北京的卢铭视科技有限公司 Model training method, living body detection method, electronic device, and storage medium
CN115761411B (en) * 2022-11-24 2023-09-01 北京的卢铭视科技有限公司 Model training method, living body detection method, electronic device, and storage medium

Similar Documents

Publication Publication Date Title
CN106951840A (en) A kind of facial feature points detection method
CN110832501B (en) System and method for pose invariant facial alignment
CN112800903B (en) Dynamic expression recognition method and system based on space-time diagram convolutional neural network
CN109859305B (en) Three-dimensional face modeling and recognizing method and device based on multi-angle two-dimensional face
CN108416266B (en) Method for rapidly identifying video behaviors by extracting moving object through optical flow
CN108062525B (en) Deep learning hand detection method based on hand region prediction
CN112270249A (en) Target pose estimation method fusing RGB-D visual features
WO2017219391A1 (en) Face recognition system based on three-dimensional data
CN101819628B (en) Method for performing face recognition by combining rarefaction of shape characteristic
CN107358648A (en) Real-time full-automatic high quality three-dimensional facial reconstruction method based on individual facial image
CN112784736B (en) Character interaction behavior recognition method based on multi-modal feature fusion
KR20210028185A (en) Human posture analysis system and method
JP2023545200A (en) Parameter estimation model training method, parameter estimation model training apparatus, device, and storage medium
CN104123749A (en) Picture processing method and system
CN101499128A (en) Three-dimensional human face action detecting and tracing method based on video stream
CN114757904B (en) Surface defect detection method based on AI deep learning algorithm
CN114663502A (en) Object posture estimation and image processing method and related equipment
US11915362B2 (en) UV mapping on 3D objects with the use of artificial intelligence
CN109948454B (en) Expression database enhancing method, expression database training method, computing device and storage medium
CN110210426A (en) Method for estimating hand posture from single color image based on attention mechanism
CN110110603A (en) A kind of multi-modal labiomaney method based on facial physiologic information
JP2011060289A (en) Face image synthesis method and system
CN115205933A (en) Facial expression recognition method, device, equipment and readable storage medium
Rizwan et al. Automated Facial Expression Recognition and Age Estimation Using Deep Learning.
CN113361378B (en) Human body posture estimation method using adaptive data enhancement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170714