CN109740426A - Face key point detection method based on sampling convolution - Google Patents

Face key point detection method based on sampling convolution

Info

Publication number
CN109740426A
CN109740426A (application CN201811410129.4A)
Authority
CN
China
Prior art keywords
face
key point
face key
training
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811410129.4A
Other languages
Chinese (zh)
Other versions
CN109740426B (en)
Inventor
黄亮
徐滢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Pinguo Technology Co Ltd
Original Assignee
Chengdu Pinguo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Pinguo Technology Co Ltd filed Critical Chengdu Pinguo Technology Co Ltd
Priority to CN201811410129.4A priority Critical patent/CN109740426B/en
Publication of CN109740426A publication Critical patent/CN109740426A/en
Application granted granted Critical
Publication of CN109740426B publication Critical patent/CN109740426B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention discloses a face key point detection method based on sampling convolution, belonging to the technical field of image detection. The method comprises the following steps: S1, acquiring a grayscale image containing a face, and obtaining the face box in the grayscale image with a face detection algorithm; S2, preparing a training set and performing Procrustes analysis on all faces in the training-set images to obtain the average face key points; S3, scaling the average face key points by the face box size obtained in step S1 to obtain the initial face key points; S4, updating the face key points with the trained network model to obtain the final face key points. By performing a single convolution in the neighbourhood of each key point and iteratively updating the result, the method further improves computation speed while maintaining accuracy.

Description

Face key point detection method based on sampling convolution
Technical field
The present invention relates to the technical field of image detection, and in particular to a face key point detection method based on sampling convolution.
Background technique
Deep learning has developed rapidly in recent years; neural networks, as its representative, have solved problems in many fields that were previously intractable. Face key point detection is the most important step before face alignment. In application fields based on face recognition technology, key point detection plays an important role in recognizing faces; likewise, the quality of the key points directly determines how efficiently a detector identifies its target.
Face key point detection methods fall roughly into three categories: conventional methods based on ASM (Active Shape Model) and AAM (Active Appearance Model), methods based on cascaded shape regression, and methods based on deep learning.
At present, deep learning gives the best results among face key point detection algorithms, and most such algorithms use convolutional neural networks. Convolution, however, is relatively time-consuming, so some researchers have begun to run the convolutional network on down-scaled images to improve computation speed, at the cost of accuracy. Therefore, no existing algorithm satisfactorily balances computation speed and accuracy.
Summary of the invention
To solve the above problems, the present invention provides a face key point detection method based on sampling convolution. By performing a single convolution in the neighbourhood of each key point and iteratively updating the result, the method further improves computation speed while maintaining accuracy. To this end, the technical solution adopted by the present invention is as follows:
A face key point detection method based on sampling convolution is provided, the method comprising the following steps:
S1, acquiring a grayscale image containing a face, and obtaining the face box in the grayscale image with a face detection algorithm;
S2, preparing a training set and performing Procrustes analysis on all faces in the training-set images to obtain the average face key points S_std;
S3, scaling S_std by the face box size obtained in step S1 to obtain the initial face key points S_0;
S4, updating the face key points S_i with the network model produced by training to obtain the final face key points; here i ∈ [1, It], i denotes the i-th iteration, It is the number of iterations with a value range of 1–10, and S_It is the final set of face key points;
The specific steps of the update are as follows (a code sketch follows the steps):
S41, performing feature extraction on the image around S_{i-1} with the sampling convolution operation to obtain the feature vector φ_i;
S42, computing the face key point offset ΔS_i;
S43, updating the face key points with this offset, i.e. S_i = S_{i-1} + ΔS_i.
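The following is a minimal sketch, in Python/NumPy, of the iterative update in steps S41–S43. It is illustrative only: the function extract_sampled_features (the feature extractor of step S41) and the containers weights_W and biases_b holding the trained per-iteration parameters W_i and b_i are assumed names, not names from the patent.

import numpy as np

def refine_keypoints(gray, s0, weights_W, biases_b, extract_sampled_features, num_iters=5):
    """Iteratively refine face key points: S_i = S_{i-1} + ΔS_i."""
    s = s0.copy()                                       # S_0, a vector of length 2N
    for i in range(1, num_iters + 1):
        phi = extract_sampled_features(gray, s, i)      # S41: feature vector φ_i
        delta = weights_W[i] @ phi + biases_b[i]        # S42: ΔS_i = W_i·φ_i + b_i
        s = s + delta                                   # S43: S_i = S_{i-1} + ΔS_i
    return s                                            # final key points S_It

Here weights_W and biases_b are assumed to be dictionaries keyed by the iteration index i.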
The face detection algorithm in step S1 may be any face detection algorithm or existing face detector commonly used in the art, such as a face detection algorithm based on histogram coarse segmentation and singular value features, or a face detection algorithm based on AdaBoost.
Further, in step S4, the training process of the network model includes:
Using the training set in step S2 as the training samples, the training data in the training set are grayscale images and the corresponding face key points S, with S_n denoting the key point information of the n-th face; the training is carried out separately for each iteration, and after the training of one iteration is completed, the training of the next iteration is carried out on the basis of its result. Each round of training proceeds as follows:
S51, generating the initial face key point data S_init on the training set using S_std;
S52, training the parameters Kernel_{i,j,k,o}, W_i and b_i of the i-th iteration in turn, end to end, i.e. solving by gradient descent argmin_{Kernel_i, W_i, b_i} Σ_{n=1}^{SampleNum} ‖ S_n^{i-1} + f_i(I_n, S_n^{i-1}) − S_n ‖², where Kernel_{i,j,k,o} denotes a convolution kernel, I_n denotes the n-th face image, f_i denotes the sampling-convolution and fully-connected computation of the i-th iteration, SampleNum denotes the number of faces in the training set, and S_n^{i-1} denotes the face key point information of the n-th face after the (i−1)-th training iteration is completed, i.e. S_n^{i-1} = S_n^{i-2} + ΔS_n^{i-1}, with S_n^{0} = S_init. A training-loop sketch in code follows.
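Under the assumptions above, the stage-wise training of step S52 could be written roughly as the following PyTorch-flavoured sketch; the module stage_model_i stands for f_i (the sampling convolution plus fully-connected layer of the i-th iteration), and dataloader is assumed to yield (I_n, S_n^{i-1}, S_n) triples. None of these names, nor the optimizer settings, come from the patent.

import torch

def train_stage(stage_model_i, dataloader, epochs=10, lr=1e-3):
    """Train iteration i by gradient descent on || S_prev + f_i(I, S_prev) - S ||^2."""
    optimizer = torch.optim.SGD(stage_model_i.parameters(), lr=lr)
    for _ in range(epochs):
        for image, s_prev, s_true in dataloader:        # I_n, S_n^{i-1}, S_n
            delta = stage_model_i(image, s_prev)        # ΔS_n^i = f_i(I_n, S_n^{i-1})
            loss = torch.mean((s_prev + delta - s_true) ** 2)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return stage_model_i

After a stage is trained, its predicted offsets are applied to the training key points to produce S_n^{i}, which becomes the input of the next stage.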
Further, the sampling convolution operation in step S41 comprises the following steps (a code sketch follows the steps):
S411, computing the maximum dilation rate of the sampling convolution, d_i = E_{i-1}·Scale_i, where E_{i-1} denotes the distance between the two eyes in S_{i-1} and Scale_i denotes a zoom scale with a value in the range 0.1^i to 0.9^i;
S412, performing a dilated convolution with m convolution kernels at the position of each face key point in S_{i-1}, and splicing the results into the one-dimensional feature vector φ_i; a convolution kernel is denoted Kernel_{i,j,k,o}, where j denotes the j-th face key point, k denotes the kernel size, whose value is an odd number greater than or equal to 3, o denotes the dilation rate, which is at most d_i, and m = 10–128.
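To make step S412 concrete, the following NumPy sketch evaluates each of the m kernels once at each key point, with the kernel taps spaced rate pixels apart (the dilated sampling); border handling by clamping, and the argument names, are assumptions rather than details given in the patent.

import numpy as np

def sampled_conv_features(gray, points, kernels, rates, k=3):
    """Return the 1-D feature vector φ_i for key points `points` of shape (N, 2)."""
    H, W = gray.shape
    half = k // 2
    feats = []
    for (x, y) in points:                               # each face key point position
        for kernel, rate in zip(kernels, rates):        # m kernels, one dilation rate each
            acc = 0.0
            for dy in range(-half, half + 1):
                for dx in range(-half, half + 1):
                    px = int(np.clip(x + dx * rate, 0, W - 1))
                    py = int(np.clip(y + dy * rate, 0, H - 1))
                    acc += kernel[dy + half, dx + half] * gray[py, px]
            feats.append(acc)
    return np.asarray(feats)                            # spliced into φ_i, length N·m

Because only one output value is produced per kernel and key point (there is no sliding window), the cost is on the order of N·m·k² multiply-accumulates per iteration, far below a full convolutional layer over the image.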
Further, in step S42, the face key point offset ΔS_i is computed with a fully-connected module by the formula:
ΔS_i = W_i·φ_i + b_i,
where W_i is a weight obtained by training the network model and b_i is a bias term obtained by training the network model.
Further, the specific steps of step S51 are as follows:
S511, obtaining the face boxes in the training-set images with a face detection algorithm;
S512, performing Procrustes analysis on all faces in the training-set images to obtain the average face key points S_std;
S513, scaling S_std by the face box size obtained in step S511 to obtain the initial face key points S_init.
The inventive principle of the method of the present invention is as follows:
A typical face key point detection method first designs features by hand, such as HOG or SIFT, and then applies machine learning to those features. The problem with hand-designed features is that, in order to gain speed, they are usually kept fairly simple and therefore often cannot describe a face well. The present invention instead adopts deep learning, and the feature extraction stage uses the sampling convolution, which requires a smaller amount of computation: a single convolution is performed in the neighbourhood of each key point, and the parameters of the sampling convolution are learned end to end. Features that describe the face well are thus obtained with simple computation, reaching a balance between computation speed and face key point detection accuracy.
The beneficial effects of adopting this technical solution are as follows:
1. In the present invention, the sampling convolution only performs a series of dilated convolutions at given points; it can extract information more directly and effectively, and its amount of computation is far smaller than that of a conventional convolutional neural network.
2. Conventional deep-learning-based algorithms usually detect on a down-scaled face image, losing the resolution information of the original image, whereas the detection algorithm of the present invention runs on the original image and effectively retains the original information.
3. The detection algorithm of the present invention automatically sets the dilation rate of the dilated convolution according to the distance between the two eyes at the current iteration, so that the performance of the detection algorithm does not change with the size of the face.
4. The update process of the face key points, combined with the setting of the maximum dilation rate of the dilated convolution, makes the whole detection process a coarse-to-fine process in which the detection accuracy improves step by step; the number of iterations can be customized arbitrarily to balance accuracy and speed.
Brief description of the drawings
Fig. 1 is a flow chart of the method of the present invention.
Specific embodiment
To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further elaborated below with reference to the drawings and specific embodiments.
In this embodiment, as shown in Fig. 1, a face key point detection method based on sampling convolution comprises the following steps:
S1, acquiring a grayscale image containing a face, and obtaining the face box in the grayscale image with a face detection algorithm;
The face box is a rectangle, expressed as (x, y, w, h), where x and y are the coordinates of the top-left corner of the rectangular region and w and h are its width and height.
S2, preparing a training set and performing Procrustes analysis on all faces in the training-set images to obtain the average face key points S_std;
S3, scaling S_std by the face box size obtained in step S1 to obtain the initial face key points S_0;
The average face key points S_std form a coordinate vector of the face key points, including the coordinates of points such as the eyes, nose and mouth. If there are N face key points, S_std is a vector of length 2N. The coordinates of each point in S_std are normalized to [0, 1] with respect to the face boxes of the training set, so S_0 = S_std × (w, h); that is, S_0 is S_std scaled up by the current face box. A code sketch of this initialization is given below.
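A minimal sketch of this initialization, under the stated normalization of S_std to [0, 1]: the scaling by (w, h) follows the text above, while adding the box corner (x, y) to place the points inside the image is an assumption made only so the example is runnable.

import numpy as np

def init_keypoints(s_std, face_box):
    """s_std: (N, 2) normalized mean shape; face_box: (x, y, w, h) from step S1."""
    x, y, w, h = face_box
    s0 = s_std * np.array([w, h])      # S_0 = S_std × (w, h)
    s0 = s0 + np.array([x, y])         # shift into the detected face box (assumed)
    return s0.reshape(-1)              # as a vector of length 2N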
S4, updating the face key points S_i with the network model produced by training to obtain the final face key points; here i = 1, 2, 3, 4, 5, and S_5 is the final set of face key points;
The specific steps of the update are as follows:
S41, performing feature extraction on the image around S_{i-1} (S_0 for the first iteration) with the sampling convolution operation to obtain the feature vector φ_i;
The sampling convolution operation in step S41 comprises the following steps:
S411, computing the maximum dilation rate of the sampling convolution, d_i = E_{i-1}·Scale_i, where E_{i-1} denotes the distance between the two eyes in S_{i-1} and Scale_i denotes the zoom scale, taken as 0.5^i (see the sketch after step S412);
S412, performing a dilated convolution with m convolution kernels at the position of each face key point in S_{i-1}, and splicing the results into the one-dimensional feature vector φ_i; a convolution kernel is denoted Kernel_{i,j,k,o}, where j denotes the j-th face key point, k denotes the kernel size, taken as 3, o denotes the dilation rate, which is at most d_i, and m = 128.
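A short sketch of step S411 with the values of this embodiment (Scale_i = 0.5^i): the maximum dilation rate d_i shrinks geometrically with the iteration and scales with the current inter-ocular distance E_{i-1}. The eye-index convention and the even spacing of the m rates below d_i are assumptions.

import numpy as np

def dilation_rates(points, i, left_eye_idx, right_eye_idx, m=128, scale_base=0.5):
    """d_i = E_{i-1}·Scale_i; the m dilation rates are spread up to d_i."""
    d_i = np.linalg.norm(points[left_eye_idx] - points[right_eye_idx]) * scale_base ** i
    return np.linspace(1.0, max(d_i, 1.0), m)   # assumed even spacing, at least 1 pixel

Shrinking d_i at each iteration is what produces the coarse-to-fine behaviour described in the advantages above.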
S42, computing the face key point offset ΔS_i with a fully-connected module by the formula:
ΔS_i = W_i·φ_i + b_i,
where W_i is a weight obtained by training the network model and b_i is a bias term obtained by training the network model;
S43, updating the face key points with this offset, i.e. S_i = S_{i-1} + ΔS_i.
In this embodiment, i = 1, 2, 3, 4, 5; that is, after five updates the final face key points S_5 are obtained.
The face detection algorithm in step S1 is the face detection algorithm based on histogram coarse segmentation and singular value features.
The training process of the network model includes:
Using the training set in step S2 as the training samples, the training data in the training set are grayscale images and the corresponding face key points S, with S_n denoting the key point information of the n-th face; the training is carried out separately for each iteration, and after the training of one iteration is completed, the training of the next iteration is carried out on the basis of its result. Each round of training proceeds as follows:
S51, generating the initial face key point data S_init on the training set using S_std;
The specific steps of step S51 are as follows:
S511, obtaining the face boxes in the training-set images with a face detection algorithm;
S512, performing Procrustes analysis on all faces in the training-set images to obtain the average face key points S_std;
S513, scaling S_std by the face box size obtained in step S511 to obtain the initial face key points S_init.
S52, training the parameters Kernel_{i,j,k,o}, W_i and b_i of the i-th iteration in turn, end to end, i.e. solving by gradient descent argmin_{Kernel_i, W_i, b_i} Σ_{n=1}^{SampleNum} ‖ S_n^{i-1} + f_i(I_n, S_n^{i-1}) − S_n ‖², where Kernel_{i,j,k,o} denotes a convolution kernel, I_n denotes the n-th face image, f_i denotes the sampling-convolution and fully-connected computation of the i-th iteration, SampleNum denotes the number of faces in the training set, and S_n^{i-1} denotes the face key point information of the n-th face after the (i−1)-th training iteration is completed, i.e. S_n^{i-1} = S_n^{i-2} + ΔS_n^{i-1}, with S_n^{0} = S_init.
The basic principles, main features and advantages of the present invention have been shown and described above. Those skilled in the art should understand that the present invention is not limited to the above embodiments; the above embodiments and description merely illustrate the principles of the invention. Various changes and improvements may be made to the invention without departing from its spirit and scope, and all such changes and improvements fall within the protection scope of the claimed invention. The protection scope of the invention is defined by the appended claims and their equivalents.

Claims (5)

1. A face key point detection method based on sampling convolution, characterized by comprising the following steps:
S1, acquiring a grayscale image containing a face, and obtaining the face box in the grayscale image with a face detection algorithm;
S2, preparing a training set and performing Procrustes analysis on all faces in the training-set images to obtain the average face key points S_std;
S3, scaling S_std by the face box size obtained in step S1 to obtain the initial face key points S_0;
S4, updating the face key points S_i with the network model produced by training to obtain the final face key points; here i ∈ [1, It], i denotes the i-th iteration, It is the number of iterations with a value range of 1–10, and S_It is the final set of face key points;
The specific steps of the update are as follows:
S41, performing feature extraction on the image around S_{i-1} with the sampling convolution operation to obtain the feature vector φ_i;
S42, computing the face key point offset ΔS_i;
S43, updating the face key points with this offset, i.e. S_i = S_{i-1} + ΔS_i.
2. The face key point detection method based on sampling convolution according to claim 1, characterized in that in step S4 the training process of the network model includes:
Using the training set in step S2 as the training samples, the training data in the training set are grayscale images and the corresponding face key points S, with S_n denoting the key point information of the n-th face; the training is carried out separately for each iteration, and after the training of one iteration is completed, the training of the next iteration is carried out on the basis of its result. Each round of training proceeds as follows:
S51, generating the initial face key point data S_init on the training set using S_std;
S52, training the parameters Kernel_{i,j,k,o}, W_i and b_i of the i-th iteration in turn, end to end, i.e. solving by gradient descent argmin_{Kernel_i, W_i, b_i} Σ_{n=1}^{SampleNum} ‖ S_n^{i-1} + f_i(I_n, S_n^{i-1}) − S_n ‖², where Kernel_{i,j,k,o} denotes a convolution kernel, I_n denotes the n-th face image, f_i denotes the sampling-convolution and fully-connected computation of the i-th iteration, SampleNum denotes the number of faces in the training set, and S_n^{i-1} denotes the face key point information of the n-th face after the (i−1)-th training iteration is completed, i.e. S_n^{i-1} = S_n^{i-2} + ΔS_n^{i-1}, with S_n^{0} = S_init.
3. The face key point detection method based on sampling convolution according to claim 1, characterized in that the sampling convolution operation in step S41 comprises the following steps:
S411, computing the maximum dilation rate of the sampling convolution, d_i = E_{i-1}·Scale_i, where E_{i-1} denotes the distance between the two eyes in S_{i-1} and Scale_i denotes a zoom scale with a value in the range 0.1^i to 0.9^i;
S412, performing a dilated convolution with m convolution kernels at the position of each face key point in S_{i-1}, and splicing the results into the one-dimensional feature vector φ_i; a convolution kernel is denoted Kernel_{i,j,k,o}, where j denotes the j-th face key point, k denotes the kernel size, whose value is an odd number greater than or equal to 3, o denotes the dilation rate, which is at most d_i, and m = 10–128.
4. The face key point detection method based on sampling convolution according to claim 1, characterized in that in step S42 the face key point offset ΔS_i is computed by the formula:
ΔS_i = W_i·φ_i + b_i,
where W_i is a weight obtained by training the network model and b_i is a bias term obtained by training the network model.
5. The face key point detection method based on sampling convolution according to claim 2, characterized in that the specific steps of step S51 are as follows:
S511, obtaining the face boxes in the training-set images with a face detection algorithm;
S512, performing Procrustes analysis on all faces in the training-set images to obtain the average face key points S_std;
S513, scaling S_std by the face box size obtained in step S511 to obtain the initial face key points S_init.
CN201811410129.4A 2018-11-23 2018-11-23 Face key point detection method based on sampling convolution Active CN109740426B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811410129.4A CN109740426B (en) 2018-11-23 2018-11-23 Face key point detection method based on sampling convolution

Publications (2)

Publication Number Publication Date
CN109740426A true CN109740426A (en) 2019-05-10
CN109740426B CN109740426B (en) 2020-11-06

Family

ID=66358228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811410129.4A Active CN109740426B (en) 2018-11-23 2018-11-23 Face key point detection method based on sampling convolution

Country Status (1)

Country Link
CN (1) CN109740426B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103824050A (en) * 2014-02-17 2014-05-28 北京旷视科技有限公司 Cascade regression-based face key point positioning method
CN106127170A (en) * 2016-07-01 2016-11-16 重庆中科云丛科技有限公司 A kind of merge the training method of key feature points, recognition methods and system
CN106778531A (en) * 2016-11-25 2017-05-31 北京小米移动软件有限公司 Face detection method and device
CN106845398A (en) * 2017-01-19 2017-06-13 北京小米移动软件有限公司 Face key independent positioning method and device
CN108229418A (en) * 2018-01-19 2018-06-29 北京市商汤科技开发有限公司 Human body critical point detection method and apparatus, electronic equipment, storage medium and program
CN108596090A (en) * 2018-04-24 2018-09-28 北京达佳互联信息技术有限公司 Facial image critical point detection method, apparatus, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TONG YANG et al.: "MULTI-LABEL DILATED RECURRENT NETWORK FOR SEQUENTIAL FACE ALIGNMENT", 2018 IEEE International Conference on Multimedia and Expo (ICME) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110175558A (en) * 2019-05-24 2019-08-27 北京达佳互联信息技术有限公司 A kind of detection method of face key point, calculates equipment and storage medium at device
CN110175558B (en) * 2019-05-24 2021-02-05 北京达佳互联信息技术有限公司 Face key point detection method and device, computing equipment and storage medium
CN111008589A (en) * 2019-12-02 2020-04-14 杭州网易云音乐科技有限公司 Face key point detection method, medium, device and computing equipment
CN111008589B (en) * 2019-12-02 2024-04-09 杭州网易云音乐科技有限公司 Face key point detection method, medium, device and computing equipment
CN114511882A (en) * 2022-01-28 2022-05-17 杭州师范大学 Auricular point positioning method

Also Published As

Publication number Publication date
CN109740426B (en) 2020-11-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant