CN109376712A - A kind of recognition methods of face forehead key point - Google Patents
- Publication number
- CN109376712A CN109376712A CN201811493855.7A CN201811493855A CN109376712A CN 109376712 A CN109376712 A CN 109376712A CN 201811493855 A CN201811493855 A CN 201811493855A CN 109376712 A CN109376712 A CN 109376712A
- Authority
- CN
- China
- Prior art keywords
- face
- key point
- network
- forehead
- point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a method for recognizing key points on the human forehead, comprising: determining the characteristic points and key points on the forehead; opening the Fiji graphics annotation tool, using Fiji to obtain the coordinates of the points of interest in each annotated picture, and labelling the collected set of frontal face images; pre-processing the labelled pictures with Matlab; downloading a ResNet network to serve as the feature extractor for facial-contour key-point recognition; and training the network with the determined forehead key-point data, where the training objective is the cross-entropy loss over the images, solved by gradient descent for the model parameters at which the loss attains a global or local minimum, yielding the trained neural network model. The method can quickly and accurately recognize the forehead in frontal face images, and provides solid technical support for other face-recognition tasks such as facial contour extraction and skin assessment.
Description
Technical field
The present invention relates to the field of pattern recognition, and more particularly to a pattern-recognition method for detecting key points on the human forehead.
Background technique
Existing face key-point recognition algorithms typically identify only the key positions of the jaw and the facial features; there is no algorithm for identifying key points on the forehead. Because the forehead is often affected by a person's hairstyle, and fringes vary widely between hairstyles, identifying forehead key points is considerably difficult. The forehead key-point recognition method proposed by the present invention employs a state-of-the-art deep neural network with strong image-understanding capability and, together with a reasonable selection of forehead key points, solves the forehead recognition problem. This forehead key-point recognition technique opens up many further face-recognition tasks; for example, combined with existing jaw and facial-feature key-point algorithms, it can delineate the entire facial contour.
Summary of the invention
To solve the above technical problems, the object of the present invention is to provide a pattern-recognition method for key points on the human forehead.
The purpose of the present invention is realized by technical solution below:
A method for recognizing key points on the human forehead, comprising:
A. determining the characteristic points and key points on the forehead;
B. opening the Fiji graphics annotation tool, using Fiji to obtain the coordinates of the points of interest in each annotated picture, and labelling the collected set of frontal face images;
C. pre-processing the labelled pictures with Matlab;
D. downloading a ResNet network to serve as the feature extractor for facial-contour key-point recognition;
E. training the network with the determined forehead key-point data, the training objective being the cross-entropy loss over the images, solved by gradient descent for the model parameters at which the loss attains a global or local minimum, yielding the trained neural network model.
Compared with the prior art, one or more embodiments of the invention offer the following advantage: the contour of the upper forehead of an arbitrary face can be extracted, providing fundamental technical support for subsequent face-recognition steps.
Detailed description of the invention
Fig. 1 is forehead key point recognition methods flow diagram;
Fig. 2 is the infrastructure diagram of ResNet network.
Specific embodiment
To make the objects, technical solutions and advantages of the present invention clearer, the invention is described in further detail below in conjunction with embodiments and drawings.
This embodiment provides a method for recognizing key points on the human forehead. The method first converts a face picture into a two-dimensional matrix, which a pre-trained ResNet network then encodes into a tensor. The first layer of the ResNet structure is a zero-padding layer, which ensures that the picture keeps its size after convolution; the remainder consists of repeated convolution layers, batch-normalization layers, pooling layers, activation layers and shortcut paths. A well pre-trained ResNet has strong image "understanding" ability and can encode the picture. The ResNet is then connected to a fully connected layer outputting a 20-dimensional vector, forming the complete network. Next, salient and non-salient key points on the forehead are chosen rationally, and every sample picture is annotated with the Fiji marking software. The annotated forehead sample data are then used to train the network into a dedicated forehead key-point recognizer. This network can quickly and accurately recognize the forehead in frontal face images, providing good technical support for other face-recognition tasks such as facial contour extraction and skin assessment.
As shown in Fig. 1, the forehead key-point recognition process comprises a forehead-coordinate determination stage, a data-annotation stage, a picture pre-processing stage, a pre-trained-network construction stage and a network training stage, specifically the following steps:
Step 10: determine the characteristic points and key points on the forehead;
Step 20: open the Fiji graphics annotation tool, use Fiji to obtain the coordinates of the points of interest in each annotated picture, and label the collected set of frontal face images;
Step 30: pre-process the labelled pictures with Matlab;
Step 40: download a ResNet network to serve as the feature extractor for facial-contour key-point recognition;
Step 50: train the network with the determined forehead key-point data, the training objective being the cross-entropy loss over the images, solved by gradient descent for the model parameters at which the loss attains a global or local minimum, yielding the trained neural network model.
Step 10 specifically comprises:
First, determine the easily recognizable characteristic points on the forehead: the forehead apex, and the intersections of the facial contour with the normal (vertical) line through the peak of each eyebrow, referred to as the brow-peak intersections.
Then, determine the less easily recognizable key points: five points taken at equal intervals between the left brow-peak intersection and the forehead apex, and five more between the right brow-peak intersection and the forehead apex, ten points in total. The easily recognizable characteristic points can be identified accurately by the deep-learning algorithm and serve to fix the left and right boundaries of the forehead contour. The less easily recognizable key points mainly serve to construct the forehead contour; the algorithm ensures that these ten points are distributed roughly evenly along the forehead contour line.
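The key-point layout above can be sketched as follows. This is a simplification: the points are spaced equidistantly along straight lines between the three anchors, whereas the patent distributes them along the forehead contour itself; all function and parameter names are illustrative:

```python
import numpy as np

def forehead_keypoints(left_ix, apex, right_ix, n_side=5):
    """Place n_side equidistant key points between the left brow-peak
    intersection and the forehead apex, and n_side more between the apex
    and the right brow-peak intersection (straight-line spacing here;
    the patent places them along the forehead contour)."""
    left_ix, apex, right_ix = map(np.asarray, (left_ix, apex, right_ix))
    # interpolation parameters exclude the anchor points themselves
    ts = np.arange(1, n_side + 1) / (n_side + 1)
    left_side = [(1 - t) * left_ix + t * apex for t in ts]
    right_side = [(1 - t) * apex + t * right_ix for t in ts]
    return np.array(left_side + right_side)  # shape (10, 2)
```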
Step 20 is carried out under a Linux/Ubuntu operating system: the Fiji graphics annotation tool is opened, the software is used to locate the points of interest in each provided picture and obtain their coordinates, the collected frontal face image data set is labelled, and the marked key points are stored in CSV format to ease subsequent data reading.
Step 30 uses a Matlab program to zero-pad the edges of each picture, keeping its aspect ratio unchanged, so that all pictures end up the same size. The procedure comprises the following steps:
(1) find the maximum width and height over all pictures:
MaxWidth = max{pic(i)(width)}, i = 1, …, n
MaxHeight = max{pic(i)(height)}, i = 1, …, n
where n is the number of samples;
(2) zero-pad each picture horizontally:
Anew = [B_{H×M}, A_{H×W}, B_{H×K}]
where H is the picture height and W its width, B denotes a zero block of the indicated size, K = MaxWidth − W − M, and [·] denotes the Gaussian bracket (floor) function;
(3) zero-pad each picture vertically:
Anew = [B_{P×MaxWidth}; A_{H×MaxWidth}; B_{Q×MaxWidth}]
where Q = MaxHeight − H − P, and [·] again denotes the Gaussian bracket (floor) function.
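A possible implementation of this padding scheme, in Python rather than Matlab, assuming the left/top padding amounts M and P (whose exact definitions do not survive in the text) are the floor of half the total padding, consistent with the bracket function mentioned above:

```python
import numpy as np

def pad_to_common_size(images):
    """Zero-pad every image to the maximum width/height in the set,
    splitting the padding as evenly as possible on each side (the floor
    corresponds to the patent's Gaussian bracket), without rescaling."""
    max_h = max(im.shape[0] for im in images)
    max_w = max(im.shape[1] for im in images)
    out = []
    for im in images:
        h, w = im.shape[:2]
        m = (max_w - w) // 2      # left pad: assumed M = floor((MaxWidth-W)/2)
        p = (max_h - h) // 2      # top pad:  assumed P = floor((MaxHeight-H)/2)
        k = max_w - w - m         # right pad, K = MaxWidth - W - M
        q = max_h - h - p         # bottom pad, Q = MaxHeight - H - P
        out.append(np.pad(im, ((p, q), (m, k))))
    return out
```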
Step 40 specifically comprises: downloading a ready-made ResNet network, pre-trained on the ImageNet data set, to serve as the feature extractor of the whole algorithm. A basic ResNet unit can be expressed by the following functions:
F = W2σ(W1x)
y = F(x, W1, W2) + x
where x and y are the input and output of the unit, σ is the ReLU activation function, and W1, W2 are the weights of the first and second layers; the entire ResNet structure is built by stacking this basic unit layer by layer. The basic structure of the ResNet is shown in Fig. 2.
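The two equations above describe a basic residual unit. A minimal numerical sketch, with plain weight matrices standing in for the convolution and batch-normalization layers of a real ResNet:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """One basic ResNet unit as in the patent's equations:
    F = W2 * sigma(W1 * x), y = F + x (identity shortcut).
    Plain matrices here; a real ResNet uses convolutions plus
    batch normalization inside the block."""
    f = w2 @ relu(w1 @ x)
    return f + x
```

The identity shortcut means the block only has to learn a residual correction F on top of its input, which is what makes very deep stacks of such units trainable.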
The content of network training in step 50 comprises:
The network is trained by optimizing a cross-entropy objective of the standard form (the formula is rendered as an image in the original):
min over the network weights of −(1/N_ap) Σ_{i=1}^{N_ap} [y_i log p_out(x_i) + (1 − y_i) log(1 − p_out(x_i))]
where p_out denotes the neural-network model, a function of the network weights, and N_ap is the number of samples. Gradient descent is then used to solve for the neural-network weights at which this quantity is minimal, yielding the trained neural network model.
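The training step can be illustrated with a toy stand-in for p_out, trained by plain gradient descent on the cross-entropy loss. A single-layer model is used here for brevity, not the patent's ResNet; the principle (follow the negative gradient of the averaged cross-entropy) is the same:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_cross_entropy(X, y, lr=0.5, steps=500):
    """Minimise the mean cross-entropy loss by plain gradient descent,
    using a single-layer model as a stand-in for p_out. The patent
    trains the full ResNet the same way, with many more weights."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        p = sigmoid(X @ w)            # p_out for each sample
        grad = X.T @ (p - y) / n      # gradient of the mean loss w.r.t. w
        w -= lr * grad
    loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    return w, loss
```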
Although embodiments are disclosed above, their content is intended only to aid the understanding and implementation of the present invention, not to limit it. Any person skilled in the art to which the invention pertains may make modifications and changes in the form and details of implementation without departing from the spirit and scope disclosed herein, but the scope of patent protection of the invention shall remain as defined by the appended claims.
Claims (6)
1. A method for recognizing key points on the human forehead, characterized in that the method comprises:
A. determining the characteristic points and key points on the forehead;
B. opening the Fiji graphics annotation tool, using Fiji to obtain the coordinates of the points of interest in each annotated picture, and labelling the collected set of frontal face images;
C. pre-processing the labelled pictures with Matlab;
D. downloading a ResNet network to serve as the feature extractor for facial-contour key-point recognition;
E. training the network with the determined forehead key-point data, the training objective being the cross-entropy loss over the images, solved by gradient descent for the model parameters at which the loss attains a global or local minimum, yielding the trained neural network model.
2. The method for recognizing key points on the human forehead of claim 1, characterized in that step A specifically comprises: first determining the easily recognizable characteristic points on the forehead, namely the forehead apex and the intersections of the facial contour with the normal line through the peak of each eyebrow, referred to as the brow-peak intersections; then determining the less easily recognizable key points, namely five points taken at equal intervals between the left brow-peak intersection and the forehead apex, and five points taken at equal intervals between the right brow-peak intersection and the forehead apex.
3. The method for recognizing key points on the human forehead of claim 1, characterized in that in step B: under a Linux/Ubuntu operating system, the Fiji graphics annotation tool is opened, the software is used to locate the points of interest in each provided picture and obtain their coordinates, the collected frontal face image data set is labelled, and the marked key points are stored in CSV format to ease subsequent data reading.
4. The method for recognizing key points on the human forehead of claim 1, characterized in that step C specifically comprises: writing a Matlab program that zero-pads the edges of each picture so that all pictures have the same size, the procedure comprising the following steps:
(1) find the maximum width and height over all pictures:
MaxWidth = max{pic(i)(width)}, i = 1, …, n
MaxHeight = max{pic(i)(height)}, i = 1, …, n
where n is the number of samples;
(2) zero-pad each picture horizontally:
Anew = [B_{H×M}, A_{H×W}, B_{H×K}]
where H is the picture height and W its width, B denotes a zero block of the indicated size, K = MaxWidth − W − M, and [·] denotes the Gaussian bracket (floor) function;
(3) zero-pad each picture vertically:
Anew = [B_{P×MaxWidth}; A_{H×MaxWidth}; B_{Q×MaxWidth}]
where Q = MaxHeight − H − P, and [·] denotes the Gaussian bracket (floor) function.
5. The method for recognizing key points on the human forehead of claim 1, characterized in that step D specifically comprises: downloading a ready-made ResNet network, pre-trained on the ImageNet data set, to serve as the feature extractor of the whole algorithm; a basic ResNet unit can be expressed by the following functions:
F = W2σ(W1x)
y = F(x, W1, W2) + x
where x and y are the input and output of the unit, σ is the ReLU activation function, and W1, W2 are the weights of the first and second layers; the entire ResNet structure is built by stacking this basic unit layer by layer.
6. The method for recognizing key points on the human forehead of claim 1, characterized in that the network training in step E comprises: training the network by optimizing a cross-entropy objective of the standard form (the formula is rendered as an image in the original):
min over the network weights of −(1/N_ap) Σ_{i=1}^{N_ap} [y_i log p_out(x_i) + (1 − y_i) log(1 − p_out(x_i))]
where p_out denotes the neural-network model, a function of the network weights, and N_ap is the number of samples; gradient descent is then used to solve for the neural-network weights at which this quantity is minimal, yielding the trained neural network model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811493855.7A CN109376712A (en) | 2018-12-07 | 2018-12-07 | A kind of recognition methods of face forehead key point |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811493855.7A CN109376712A (en) | 2018-12-07 | 2018-12-07 | A kind of recognition methods of face forehead key point |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109376712A true CN109376712A (en) | 2019-02-22 |
Family
ID=65372723
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811493855.7A Pending CN109376712A (en) | 2018-12-07 | 2018-12-07 | A kind of recognition methods of face forehead key point |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109376712A (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110059760A (en) * | 2019-04-25 | 2019-07-26 | 北京工业大学 | Geometric figure recognition methods based on topological structure and CNN |
CN110188713A (en) * | 2019-06-03 | 2019-08-30 | 北京字节跳动网络技术有限公司 | Method and apparatus for output information |
CN110987189A (en) * | 2019-11-21 | 2020-04-10 | 北京都是科技有限公司 | Method, system and device for detecting temperature of target object |
CN111126344A (en) * | 2019-12-31 | 2020-05-08 | 杭州趣维科技有限公司 | Method and system for generating key points of forehead of human face |
CN111546345A (en) * | 2020-05-26 | 2020-08-18 | 广州纳丽生物科技有限公司 | Skin material mechanical property measuring method based on contact dynamics model |
CN112016447A (en) * | 2020-08-27 | 2020-12-01 | 华南理工大学 | Intelligent forehead temperature measurement method based on Yolo neural network and application thereof |
CN112233078A (en) * | 2020-10-12 | 2021-01-15 | 广州计量检测技术研究院 | Stacked kilogram group weight identification and key part segmentation method |
CN112241700A (en) * | 2020-10-15 | 2021-01-19 | 希望银蕨智能科技有限公司 | Multi-target forehead temperature measurement method for forehead accurate positioning |
CN112613459A (en) * | 2020-12-30 | 2021-04-06 | 深圳艾摩米智能科技有限公司 | Method for detecting face sensitive area |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103824049A (en) * | 2014-02-17 | 2014-05-28 | 北京旷视科技有限公司 | Cascaded neural network-based face key point detection method |
CN106295533A (en) * | 2016-08-01 | 2017-01-04 | 厦门美图之家科技有限公司 | Optimization method, device and the camera terminal of a kind of image of autodyning |
CN107730444A (en) * | 2017-10-31 | 2018-02-23 | 广东欧珀移动通信有限公司 | Image processing method, device, readable storage medium storing program for executing and computer equipment |
US9978003B2 (en) * | 2016-01-25 | 2018-05-22 | Adobe Systems Incorporated | Utilizing deep learning for automatic digital image segmentation and stylization |
CN108629336A (en) * | 2018-06-05 | 2018-10-09 | 北京千搜科技有限公司 | Face value calculating method based on human face characteristic point identification |
-
2018
- 2018-12-07 CN CN201811493855.7A patent/CN109376712A/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103824049A (en) * | 2014-02-17 | 2014-05-28 | 北京旷视科技有限公司 | Cascaded neural network-based face key point detection method |
US9978003B2 (en) * | 2016-01-25 | 2018-05-22 | Adobe Systems Incorporated | Utilizing deep learning for automatic digital image segmentation and stylization |
CN106295533A (en) * | 2016-08-01 | 2017-01-04 | 厦门美图之家科技有限公司 | Optimization method, device and the camera terminal of a kind of image of autodyning |
CN107730444A (en) * | 2017-10-31 | 2018-02-23 | 广东欧珀移动通信有限公司 | Image processing method, device, readable storage medium storing program for executing and computer equipment |
CN108629336A (en) * | 2018-06-05 | 2018-10-09 | 北京千搜科技有限公司 | Face value calculating method based on human face characteristic point identification |
Non-Patent Citations (1)
Title |
---|
Ti Hao: "Research on Face Detection and Facial Expression Recognition Algorithms in Natural Scenes", China Master's Theses Full-text Database, Information Science and Technology Series * |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110059760A (en) * | 2019-04-25 | 2019-07-26 | 北京工业大学 | Geometric figure recognition methods based on topological structure and CNN |
CN110188713A (en) * | 2019-06-03 | 2019-08-30 | 北京字节跳动网络技术有限公司 | Method and apparatus for output information |
CN110987189A (en) * | 2019-11-21 | 2020-04-10 | 北京都是科技有限公司 | Method, system and device for detecting temperature of target object |
CN110987189B (en) * | 2019-11-21 | 2021-11-02 | 北京都是科技有限公司 | Method, system and device for detecting temperature of target object |
CN111126344A (en) * | 2019-12-31 | 2020-05-08 | 杭州趣维科技有限公司 | Method and system for generating key points of forehead of human face |
CN111126344B (en) * | 2019-12-31 | 2023-08-01 | 杭州趣维科技有限公司 | Method and system for generating key points of forehead of human face |
CN111546345A (en) * | 2020-05-26 | 2020-08-18 | 广州纳丽生物科技有限公司 | Skin material mechanical property measuring method based on contact dynamics model |
CN111546345B (en) * | 2020-05-26 | 2021-08-17 | 广州纳丽生物科技有限公司 | Skin material mechanical property measuring method based on contact dynamics model |
CN112016447A (en) * | 2020-08-27 | 2020-12-01 | 华南理工大学 | Intelligent forehead temperature measurement method based on Yolo neural network and application thereof |
CN112233078A (en) * | 2020-10-12 | 2021-01-15 | 广州计量检测技术研究院 | Stacked kilogram group weight identification and key part segmentation method |
CN112241700A (en) * | 2020-10-15 | 2021-01-19 | 希望银蕨智能科技有限公司 | Multi-target forehead temperature measurement method for forehead accurate positioning |
CN112613459A (en) * | 2020-12-30 | 2021-04-06 | 深圳艾摩米智能科技有限公司 | Method for detecting face sensitive area |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109376712A (en) | A kind of recognition methods of face forehead key point | |
CN107168527B (en) | The first visual angle gesture identification and exchange method based on region convolutional neural networks | |
CN105069746B (en) | Video real-time face replacement method and its system based on local affine invariant and color transfer technology | |
CN104978580B (en) | A kind of insulator recognition methods for unmanned plane inspection transmission line of electricity | |
CN104143079A (en) | Method and system for face attribute recognition | |
Zhi et al. | Using transfer learning with convolutional neural networks to diagnose breast cancer from histopathological images | |
WO2019090769A1 (en) | Human face shape recognition method and apparatus, and intelligent terminal | |
CN106127108B (en) | A kind of manpower image region detection method based on convolutional neural networks | |
CN104346617B (en) | A kind of cell detection method based on sliding window and depth structure extraction feature | |
CN109635727A (en) | A kind of facial expression recognizing method and device | |
CN109376636A (en) | Eye ground image classification method based on capsule network | |
CN105205449B (en) | Sign Language Recognition Method based on deep learning | |
CN109325398A (en) | A kind of face character analysis method based on transfer learning | |
CN109410219A (en) | A kind of image partition method, device and computer readable storage medium based on pyramid fusion study | |
CN108334848A (en) | A kind of small face identification method based on generation confrontation network | |
CN107665492A (en) | Colon and rectum panorama numeral pathological image tissue segmentation methods based on depth network | |
CN107133616A (en) | A kind of non-division character locating and recognition methods based on deep learning | |
CN108053398A (en) | A kind of melanoma automatic testing method of semi-supervised feature learning | |
CN108961675A (en) | Fall detection method based on convolutional neural networks | |
CN108717524A (en) | It is a kind of based on double gesture recognition systems and method for taking the photograph mobile phone and artificial intelligence system | |
CN110263768A (en) | A kind of face identification method based on depth residual error network | |
CN110110650A (en) | Face identification method in pedestrian | |
CN110516575A (en) | GAN based on residual error domain richness model generates picture detection method and system | |
CN106910188A (en) | The detection method of airfield runway in remote sensing image based on deep learning | |
CN107292314A (en) | A kind of lepidopterous insects species automatic identification method based on CNN |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190222 |