CN107679447A - Facial characteristics point detecting method, device and storage medium - Google Patents
- Publication number
- CN107679447A CN107679447A CN201710709109.6A CN201710709109A CN107679447A CN 107679447 A CN107679447 A CN 107679447A CN 201710709109 A CN201710709109 A CN 201710709109A CN 107679447 A CN107679447 A CN 107679447A
- Authority
- CN
- China
- Prior art keywords
- face
- feature point
- point
- facial
- real
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2148—Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The invention discloses a facial feature point detection method. The method includes: capturing a real-time image with a camera device and extracting a real-time face image from the real-time image using a face recognition algorithm; and inputting the real-time face image into a pre-trained facial mean model and using the facial mean model to identify t facial feature points in the real-time face image. The invention can identify multiple feature points, including the position feature points of the eyeballs, from the real-time face image; because the identified feature points are more comprehensive, face recognition and facial micro-expression judgments become more accurate. The invention also discloses an electronic device and a computer-readable storage medium.
Description
Technical field
The present invention relates to the field of computer vision processing, and in particular to a facial feature point detection method, a device, and a computer-readable storage medium.
Background technology
Face recognition is a biometric technology that identifies users based on their facial feature information. Its applications are now very widespread: it plays an important role in fields such as access control, attendance, and identity verification, bringing convenience to daily life. A common approach in commercial products is to use deep learning: a facial feature point recognition model is trained with deep learning methods and then used to identify facial features.
Face recognition also encompasses facial micro-expression recognition, which is widely applied in psychology, advertising-effect evaluation, ergonomics, human-computer interaction, and other fields, so accurately recognizing facial micro-expressions is of great importance.
However, current industry methods detect either 5 or 68 feature points. The 5-point detection covers the two eyeballs, the nose, and the two mouth corners; the 68-point detection does not include the eyeballs at all. In short, the feature points identified by these methods are insufficient for facial micro-expression recognition.
Summary of the invention
The present invention provides a facial feature point detection method, a device, and a computer-readable storage medium. Its main purpose is to identify a more comprehensive set of feature points, so that face recognition and facial micro-expression judgments become more accurate.
To achieve the above object, the present invention provides an electronic device comprising a memory, a processor, and a camera device. The memory stores a facial feature point detection program which, when executed by the processor, implements the following steps:
Real-time face image acquisition step: capture a real-time image with the camera device, and extract a real-time face image from the real-time image using a face recognition algorithm.
Feature point recognition step: input the real-time face image into a pre-trained facial mean model, and use the facial mean model to identify t facial feature points in the real-time face image.
Optionally, 4 position feature points are marked for each eyeball.
Optionally, the training step of the facial mean model includes:
establishing a sample library of n face sample images and marking t facial feature points in each face sample image, where the t facial feature points include position feature points of the eyes, eyebrows, nose, mouth, and facial outer contour, and the position feature points of the eyes include the position feature points of the eyeballs; and
training a facial feature recognition model with the face sample images marked with the t facial feature points, to obtain the facial mean model of the facial feature points.
Optionally, the feature point recognition step further includes:
aligning the real-time face image with the facial mean model, and using a feature extraction algorithm to search the real-time face image for the t facial feature points matching the t facial feature points of the facial mean model.
In addition, to achieve the above object, the present invention also provides a facial feature point detection method, which includes:
Real-time face image acquisition step: capture a real-time image with a camera device, and extract a real-time face image from the real-time image using a face recognition algorithm.
Feature point recognition step: input the real-time face image into a pre-trained facial mean model, and use the facial mean model to identify t facial feature points in the real-time face image.
Optionally, 4 position feature points are marked for each eyeball.
Optionally, the training step of the facial mean model includes:
establishing a sample library of n face sample images and marking t facial feature points in each face sample image, where the t facial feature points include position feature points of the eyes, eyebrows, nose, mouth, and facial outer contour, and the position feature points of the eyes include the position feature points of the eyeballs; and
training a facial feature recognition model with the face sample images marked with the t facial feature points, to obtain the facial mean model of the facial feature points, wherein the facial feature recognition model is the ERT algorithm, whose formula is:
S^(t+1) = S^(t) + τ_t(I, S^(t))
where t denotes the cascade index and τ_t(·,·) denotes the regressor of the current stage. Each regressor consists of many regression trees, S^(t) is the shape estimate of the current model, and each regressor τ_t(·,·) predicts an increment from the current input image I and S^(t). During model training, a subset of the t feature points of each of the n sample pictures is used to train the first regression tree; the residual between the predictions of the first regression tree and the actual values of that subset of feature points is used to train the second tree, and so on, until the predictions of the Nth tree are close to the actual values of the subset of feature points (residual close to 0). All the regression trees of the ERT algorithm are thus obtained, and from these regression trees the facial mean model of the facial feature points is derived.
Optionally, the feature point recognition step further includes:
aligning the real-time face image with the facial mean model, and using a feature extraction algorithm to search the real-time face image for the t facial feature points matching the t facial feature points of the facial mean model.
Optionally, the feature extraction algorithm includes the SIFT algorithm, the SURF algorithm, the LBP algorithm, and the HOG algorithm.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium that stores a facial feature point detection program; when the facial feature point detection program is executed by a processor, it implements any of the steps of the facial feature point detection method described above.
The facial feature point detection method, device, and computer-readable storage medium proposed by the present invention identify multiple feature points, including the position feature points of the eyeballs, from a real-time face image. Because the identified feature points are more comprehensive, face recognition and facial micro-expression judgments become more accurate.
Brief description of the drawings
Fig. 1 is a schematic diagram of the operating environment of a preferred embodiment of the facial feature point detection method of the present invention;
Fig. 2 is a functional block diagram of the facial feature point detection program in Fig. 1;
Fig. 3 is a flowchart of a preferred embodiment of the facial feature point detection method of the present invention.
The realization, functional features, and advantages of the invention will be further described with reference to the accompanying drawings and in conjunction with the embodiments.
Embodiment
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
The present invention provides a facial feature point detection method. Referring to Fig. 1, a schematic diagram of the operating environment of a preferred embodiment of the facial feature point detection method of the present invention is shown.
In this embodiment, the facial feature point detection method is applied to an electronic device 1, which may be a server, a smartphone, a tablet computer, a portable computer, a desktop computer, or any other terminal device with computing capability.
The electronic device 1 includes a processor 12, a memory 11, a camera device 13, a network interface 14, and a communication bus 15. The camera device 13 is installed in a particular place, such as an office or a monitored area, captures real-time images of targets entering that place, and transmits the captured real-time images to the processor 12 over a network. The network interface 14 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface). The communication bus 15 implements the connections and communication between these components.
The memory 11 includes at least one type of readable storage medium, which may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, or a card-type memory. In some embodiments, the readable storage medium may be an internal storage unit of the electronic device 1, such as its hard disk. In other embodiments, the readable storage medium may be an external memory of the electronic device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the electronic device 1.
In this embodiment, the readable storage medium of the memory 11 is generally used to store the facial feature point detection program 10 installed on the electronic device 1, the face image sample library, and the constructed and trained facial mean model. The memory 11 may also be used to temporarily store data that has been output or is to be output.
In some embodiments, the processor 12 may be a central processing unit (CPU), a microprocessor, or another data processing chip, used to run the program code or process the data stored in the memory 11, for example to execute the facial feature point detection program 10.
Fig. 1 shows only the electronic device 1 with the components 11-15 and the facial feature point detection program 10, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead.
Optionally, the electronic device 1 may also include a user interface. The user interface may include an input unit such as a keyboard, a voice input device with speech recognition capability such as a microphone, and a voice output device such as a speaker or headphones. Optionally, the user interface may also include a standard wired interface and a wireless interface.
Optionally, the electronic device 1 may also include a display, which may also be called a display screen or display unit. In some embodiments it may be an LED display, a liquid crystal display, a touch liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display, or the like. The display is used to show the information processed in the electronic device 1 and to present a visual user interface.
Optionally, the electronic device 1 also includes a touch sensor. The region provided by the touch sensor for the user's touch operations is called the touch area. The touch sensor described here may be a resistive touch sensor, a capacitive touch sensor, or the like, and includes not only contact touch sensors but also proximity touch sensors. Moreover, the touch sensor may be a single sensor or multiple sensors arranged, for example, in an array.
The area of the display of the electronic device 1 may be the same as or different from the area of the touch sensor. Optionally, the display and the touch sensor are stacked to form a touch display screen, on which the device detects the touch operations triggered by the user.
Optionally, the electronic device 1 may also include an RF (Radio Frequency) circuit, sensors, an audio circuit, and so on, which are not described here.
In the device embodiment shown in Fig. 1, the memory 11, which is a kind of computer storage medium, may include an operating system and the facial feature point detection program 10. When the processor 12 executes the facial feature point detection program 10 stored in the memory 11, the following steps are implemented:
the real-time image captured by the camera device 13 is acquired; the processor 12 extracts a real-time face image from the real-time image using a face recognition algorithm, calls the model file of the facial mean model from the memory 11, inputs the real-time face image into the facial mean model, and identifies in the real-time face image the position feature points representing the positions of the eyes, eyebrows, nose, mouth, and facial outer contour.
In other embodiments, the facial feature point detection program 10 may also be divided into one or more modules, which are stored in the memory 11 and executed by the processor 12 to realize the present invention. A module as referred to in the present invention is a series of computer program instruction segments that perform a specific function.
Referring to Fig. 2, a functional module diagram of the facial feature point detection program 10 in Fig. 1 is shown. The facial feature point detection program 10 may be divided into an acquisition module 110, an identification module 120, and a calculation module 130.
The acquisition module 110 is used to acquire the real-time image captured by the camera device 13 and to extract a real-time face image from the real-time image using a face recognition algorithm. When the camera device 13 captures a real-time image, it sends the image to the processor 12. After the processor 12 receives the real-time image, the acquisition module 110 first obtains the size of the picture and creates a grayscale image of the same size; it converts the acquired color image into the grayscale image while creating a memory space; it equalizes the grayscale image histogram, which reduces the amount of grayscale image information and speeds up detection; it then loads Intel's training library (in practice, the cascade classifiers shipped with OpenCV), detects the face in the picture, returns an object containing the face information, obtains the data on the face position, and records the count; finally it obtains the face region and saves it. This completes one round of real-time face image extraction.
Specifically, the face recognition algorithm used to extract the real-time face image from the real-time image may also be a geometric-feature-based method, a local feature analysis method, the eigenface method, an elastic-model-based method, a neural network method, and so on.
The identification module 120 is used to input the real-time face image into the facial mean model and to identify t facial feature points in the real-time face image using the facial mean model.
In this embodiment, t = 76: 76 facial feature points are marked in each face sample image in the sample library of the facial mean model, so the facial mean model likewise has 76 facial feature points. After calling the trained facial mean model from the memory 11, the identification module 120 aligns the real-time face image with the facial mean model and then uses a feature extraction algorithm to search the real-time face image for the 76 facial feature points matching the 76 facial feature points of the facial mean model. The facial mean model is constructed and trained in advance; its embodiment will be explained in the facial feature point detection method described below.
The 76 facial feature points that the identification module 120 identifies in the real-time face image are denoted P1~P76, with coordinates (x1, y1), (x2, y2), (x3, y3), ..., (x76, y76).
The facial outer contour has 17 feature points (P1~P17, evenly distributed along the outer contour of the face); the left and right eyebrows each have 5 feature points (denoted P18~P22 and P23~P27 respectively, evenly distributed along the upper edge of the eyebrow); the nose has 9 feature points (P28~P36); the left and right eye sockets each have 6 feature points (denoted P37~P42 and P43~P48 respectively); the left and right eyeballs each have 4 feature points (denoted P49~P52 and P53~P56 respectively); and the lips have 20 feature points (P57~P76). Of the lip points, the upper and lower lips each have 8 feature points (denoted P57~P64 and P65~P72 respectively), and the left and right lip corners each have 2 feature points (denoted P73~P74 and P75~P76 respectively). Of the 8 upper-lip feature points, 5 lie on the outer contour of the upper lip (P57~P61) and 3 on its inner contour (P62~P64, P63 being the central feature point of the inner upper lip); of the 8 lower-lip feature points, 5 lie on the outer contour of the lower lip (P65~P69) and 3 on its inner contour (P70~P72, P71 being the central feature point of the inner lower lip). Of the 2 feature points at each lip corner, 1 lies on the outer lip contour (P74 and P76, which may be called the outer lip-corner feature points) and 1 on the inner lip contour (P73 and P75, which may be called the inner lip-corner feature points).
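The 76-point layout above can be captured as a simple index table. The grouping below is a direct transcription of the ranges given in the text (the region names are illustrative labels, not terms from the patent):

```python
# Index ranges (1-based P-numbers) for the 76-point layout described above.
FACE_76_LANDMARKS = {
    "face_outline":     range(1, 18),   # P1-P17
    "left_eyebrow":     range(18, 23),  # P18-P22
    "right_eyebrow":    range(23, 28),  # P23-P27
    "nose":             range(28, 37),  # P28-P36
    "left_eye_socket":  range(37, 43),  # P37-P42
    "right_eye_socket": range(43, 49),  # P43-P48
    "left_eyeball":     range(49, 53),  # P49-P52
    "right_eyeball":    range(53, 57),  # P53-P56
    "upper_lip":        range(57, 65),  # P57-P64
    "lower_lip":        range(65, 73),  # P65-P72
    "left_lip_corner":  range(73, 75),  # P73-P74
    "right_lip_corner": range(75, 77),  # P75-P76
}

def region_of(p):
    """Return the region name for a 1-based landmark index P1..P76."""
    for name, rng in FACE_76_LANDMARKS.items():
        if p in rng:
            return name
    raise ValueError(f"landmark index out of range: {p}")

# The ranges partition all 76 points exactly once.
assert sum(len(r) for r in FACE_76_LANDMARKS.values()) == 76
```

Note that the eyeball ranges (P49~P56) are exactly the points absent from the common 68-point scheme criticized in the background section.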
In this embodiment, the feature extraction algorithm is the SIFT (scale-invariant feature transform) algorithm. The SIFT algorithm extracts the local feature of each facial feature point of the facial mean model, selects one facial feature point as a reference feature point, and searches the real-time face image for feature points whose local features are identical or similar to that of the reference feature point (for example, feature points whose local-feature difference is within a preset range). Following this principle, all the facial feature points are found in the real-time face image. In other embodiments, the feature extraction algorithm may also be the SURF (Speeded-Up Robust Features) algorithm, the LBP (Local Binary Patterns) algorithm, the HOG (Histogram of Oriented Gradients) algorithm, and so on.
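The matching rule just described (pair each model point with the candidate whose local descriptor is closest, subject to a preset threshold) can be sketched as follows. The descriptors here are synthetic numpy vectors standing in for real SIFT descriptors, which in practice would come from a library such as OpenCV; the threshold value is an arbitrary illustration of the "preset range" mentioned in the text:

```python
import numpy as np

def match_descriptors(model_desc, candidate_desc, max_dist=0.5):
    """For each model-point descriptor, return the index of the closest
    candidate descriptor (Euclidean distance), or -1 if none lies within
    max_dist -- the 'preset range' of the text."""
    matches = []
    for d in model_desc:
        dists = np.linalg.norm(candidate_desc - d, axis=1)
        best = int(np.argmin(dists))
        matches.append(best if dists[best] <= max_dist else -1)
    return matches
```

Repeating this for every model point yields, point by point, the 76 matched feature points in the real-time face image.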
The electronic device 1 proposed in this embodiment extracts a real-time face image from a real-time image and uses the facial mean model to identify the facial feature points in the real-time face image. Because the identified feature points are more comprehensive, face recognition and facial micro-expression judgments become more accurate.
In addition, the present invention also provides a facial feature point detection method. Referring to Fig. 3, a flowchart of a preferred embodiment of the facial feature point detection method of the present invention is shown. The method may be performed by a device, and the device may be realized by software and/or hardware.
In this embodiment, the facial feature point detection method includes:
Step S10: capture a real-time image with a camera device, and extract a real-time face image from the real-time image using a face recognition algorithm. When the camera device captures a real-time image, it sends the image to the processor. After the processor receives the real-time image, it first obtains the size of the picture and creates a grayscale image of the same size; it converts the acquired color image into the grayscale image while creating a memory space; it equalizes the grayscale image histogram, which reduces the amount of grayscale image information and speeds up detection; it then loads Intel's training library (in practice, the cascade classifiers shipped with OpenCV), detects the face in the picture, returns an object containing the face information, obtains the data on the face position, and records the count; finally it obtains the face region and saves it. This completes one round of real-time face image extraction.
Specifically, the face recognition algorithm used to extract the real-time face image from the real-time image may also be a geometric-feature-based method, a local feature analysis method, the eigenface method, an elastic-model-based method, a neural network method, and so on.
Step S20: input the real-time face image into a pre-trained facial mean model, and identify t facial feature points in the real-time face image using the facial mean model.
A sample library of n face sample images is established, and t facial feature points are marked in each face sample image; the t facial feature points include position feature points of the eyes, eyebrows, nose, mouth, and facial outer contour, and the position feature points of the eyes include the position feature points of the eyeballs. That is, a sample library of n face images is established, and t facial feature points are marked by hand in each face image; the position feature points of the eyes include the position feature points of the eye sockets and the position feature points of the eyeballs.
A facial feature recognition model is trained with the face sample images marked with the t facial feature points, yielding the facial mean model of the facial feature points. The facial feature recognition model is the Ensemble of Regression Trees (ERT) algorithm, formulated as:
S^(t+1) = S^(t) + τ_t(I, S^(t))
where t denotes the cascade index and τ_t(·,·) denotes the regressor of the current stage. Each regressor consists of many regression trees, and the purpose of training is precisely to obtain these regression trees.
Here S^(t) is the shape estimate of the current model. Each regressor τ_t(·,·) predicts an increment ΔS from the input image I and S^(t); this increment is added to the current shape estimate to improve the current model. Each stage regressor makes its prediction from the feature points. The training data set is (I1, S1), ..., (In, Sn), where each I is an input sample image and each S is the shape feature vector formed by the feature points in the sample image.
During model training in this embodiment, each sample picture has 76 facial feature points. A subset of the feature points of all the sample images (for example, 70 feature points taken at random from the 76 feature points of each sample image) is used to train the first regression tree; the residual between the predictions of the first regression tree and the actual values of that subset of feature points (the weighted mean of the 70 feature points taken from each sample picture) is used to train the second tree, and so on, until the predictions of the Nth tree are close to the actual values of the subset of feature points (residual close to 0). All the regression trees of the ERT algorithm are thus obtained, the mean model of the facial landmarks is derived from these regression trees, and the model file and the sample library are saved to the memory.
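The residual-boosting loop described above can be sketched with a deliberately simplified stand-in: each "regressor" below is just the mean residual over the training set rather than a tree, which is enough to show how every stage is fit to the residual left by the previous stages. A real ERT implementation (e.g. dlib's shape-predictor trainer) instead fits gradient-boosted regression trees on pixel-difference features, so this is a sketch of the training schedule, not of the trees themselves:

```python
import numpy as np

def train_mean_cascade(shapes_true, shape_init, n_stages=10):
    """Toy stand-in for ERT training: each stage's 'regressor' is the mean
    residual between the true shapes and the current estimates, so each
    stage is fit to what the previous stages failed to explain."""
    n = len(shapes_true)
    estimates = np.tile(shape_init, (n, 1))            # S^(0) for every sample
    stages = []
    for _ in range(n_stages):
        residual = (shapes_true - estimates).mean(axis=0)  # fit to residuals
        stages.append(residual)
        estimates = estimates + residual               # S^(t+1) = S^(t) + increment
    return stages

def predict(stages, shape_init):
    """Run the cascade: start from the initial shape, add each increment."""
    s = shape_init.copy()
    for inc in stages:
        s = s + inc
    return s
```

With this degenerate regressor the cascade converges to the mean shape of the training set after one stage (all later residuals are zero), which is one way to see why the trained artifact is called a facial *mean* model.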
In this embodiment, because 76 facial feature points are marked in each face sample image in the sample library, the facial mean model likewise has 76 facial feature points. After the trained facial mean model is called from the memory, the real-time face image is aligned with the facial mean model, and a feature extraction algorithm is then used to search the real-time face image for the 76 facial feature points matching the 76 facial feature points of the facial mean model. The 76 identified facial feature points are still denoted P1~P76, with coordinates (x1, y1), (x2, y2), (x3, y3), ..., (x76, y76).
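The alignment step — bringing the detected shape and the mean model into a common frame before matching — can be sketched as a similarity normalization (translate to the centroid, scale to unit size). The patent does not specify its alignment procedure, so this is an illustrative stand-in for whatever normalization an implementation would use:

```python
import numpy as np

def normalize_shape(points):
    """Translate a (k, 2) landmark array to its centroid and scale it to
    unit root-mean-square radius, so that two shapes differing only by
    position and size become directly comparable."""
    centered = points - points.mean(axis=0)
    scale = np.sqrt((centered ** 2).sum(axis=1).mean())
    return centered / scale
```

After both the real-time shape estimate and the mean model pass through `normalize_shape`, point-by-point comparison (and the descriptor search above it) no longer depends on where the face sits in the frame or how large it is.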
The facial outer contour has 17 feature points (P1~P17, evenly distributed along the outer contour of the face); the left and right eyebrows each have 5 feature points (denoted P18~P22 and P23~P27 respectively, evenly distributed along the upper edge of the eyebrow); the nose has 9 feature points (P28~P36); the left and right eye sockets each have 6 feature points (denoted P37~P42 and P43~P48 respectively); the left and right eyeballs each have 4 feature points (denoted P49~P52 and P53~P56 respectively); and the lips have 20 feature points (P57~P76). Of the lip points, the upper and lower lips each have 8 feature points (denoted P57~P64 and P65~P72 respectively), and the left and right lip corners each have 2 feature points (denoted P73~P74 and P75~P76 respectively). Of the 8 upper-lip feature points, 5 lie on the outer contour of the upper lip (P57~P61) and 3 on its inner contour (P62~P64, P63 being the central feature point of the inner upper lip); of the 8 lower-lip feature points, 5 lie on the outer contour of the lower lip (P65~P69) and 3 on its inner contour (P70~P72, P71 being the central feature point of the inner lower lip). Of the 2 feature points at each lip corner, 1 lies on the outer lip contour (P74 and P76, which may be called the outer lip-corner feature points) and 1 on the inner lip contour (P73 and P75, which may be called the inner lip-corner feature points).
Specifically, the feature extraction algorithm may also be the SIFT algorithm, the SURF algorithm, the LBP algorithm, the HOG algorithm, and so on.
The facial feature point detection method proposed in this embodiment extracts a real-time face image from a real-time image and uses the facial mean model to identify the facial feature points in the real-time face image. Because the identified feature points are more comprehensive, face recognition and facial micro-expression judgments become more accurate.
In addition, an embodiment of the present invention also proposes a computer-readable storage medium that stores a facial feature point detection program; when the facial feature point detection program is executed by a processor, the following operations are realized:
Real-time face image acquisition step: capture a real-time image with a camera device, and extract a real-time face image from the real-time image using a face recognition algorithm; and
Feature point recognition step: input the real-time face image into a pre-trained facial mean model, and identify t facial feature points in the real-time face image using the facial mean model.
Optionally, the training step of the face average model includes:
building a sample library of n face sample images, and marking t facial feature points in each face sample image, the t facial feature points including position feature points of the eyes, eyebrows, nose, mouth, and facial contour, where the position feature points of the eyes include position feature points of the eyeball; and
training a facial feature recognition model with the face sample images in which the t facial feature points have been marked, to obtain the face average model for facial feature points, where the facial feature recognition model is the ERT algorithm, whose formula is as follows:
Ŝ^(t+1) = Ŝ^t + τ_t(I, Ŝ^t)
where t is the cascade index and τ_t(·) is the regressor at the current stage. Each regressor consists of many regression trees, Ŝ^t is the shape estimate of the current model, and each regressor τ_t(·) predicts an increment ΔŜ from the input current image I and Ŝ^t. During model training, a subset of the t feature points of each of the n sample images is taken to train the first regression tree; the residual between the predicted values of the first tree and the actual values of that subset of feature points is used to train the second tree, and so on, until the predicted values of the Nth tree are close to the actual values of that subset (residual close to 0). All regression trees of the ERT algorithm are thereby obtained, and from these regression trees the face average model for facial feature points is obtained.
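The additive update Ŝ^(t+1) = Ŝ^t + τ_t(I, Ŝ^t) described above can be illustrated with a toy sketch. Note the heavy simplification: a real ERT stage is an ensemble of regression trees driven by image features, whereas the stand-in regressor here just fits the mean residual (a constant step), so only the cascade mechanics are shown; all function names are hypothetical.

```python
import numpy as np

def train_cascade(shapes_true, shape_init, n_stages=10):
    """Toy cascade in the spirit of ERT.

    Each stage fits the residual between the ground-truth shapes and the
    current estimates, then updates additively: S^(t+1) = S^t + tau_t.
    """
    n = len(shapes_true)
    estimates = np.tile(shape_init, (n, 1)).astype(float)
    stages = []
    for _ in range(n_stages):
        residual = shapes_true - estimates  # what is left to explain
        tau = residual.mean(axis=0)         # degenerate "tree": one constant step
        stages.append(tau)
        estimates += tau                    # S^(t+1) = S^t + tau_t
    return stages, estimates

def apply_cascade(stages, shape_init):
    """Run a new shape estimate through every trained stage."""
    s = shape_init.astype(float).copy()
    for tau in stages:
        s = s + tau
    return s
```

With the constant regressor the cascade converges to the mean training shape, which is consistent with the "face average model" terminology; real regression trees would instead condition each step on features of the input image I.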
The specific embodiments of the computer-readable storage medium of the present invention are substantially the same as those of the facial feature point detection method described above and will not be repeated here.
It should be noted that, as used herein, the terms "comprising" and "including", and any other variants thereof, are intended to cover non-exclusive inclusion, so that a process, device, article, or method that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, device, article, or method. In the absence of further limitation, an element qualified by the phrase "including a ..." does not exclude the presence of other identical elements in the process, device, article, or method that includes it.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments. Through the description of the above embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, or the part of it that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium as described above (such as ROM/RAM, a magnetic disk, or an optical disk) and includes instructions that cause a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit its scope. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, whether applied directly or indirectly in other related technical fields, falls within the scope of protection of the present invention.
Claims (10)
1. An electronic device, characterized in that the device comprises a memory, a processor, and a camera device, the memory comprising a facial feature point detection program which, when executed by the processor, implements the following steps:
a real-time face image acquisition step: capturing a real-time image with the camera device, and extracting a real-time face image from the real-time image using a face recognition algorithm;
a feature point recognition step: inputting the real-time face image into a pre-trained face average model, and identifying t facial feature points from the real-time face image using the face average model.
2. The electronic device according to claim 1, characterized in that 4 position feature points are marked for each eyeball.
3. The electronic device according to claim 1, characterized in that the training step of the face average model comprises:
building a sample library of n face sample images, and marking t facial feature points in each face sample image, the t facial feature points comprising position feature points of the eyes, eyebrows, nose, mouth, and facial contour, wherein the position feature points of the eyes include position feature points of the eyeball; and
training a facial feature recognition model with the face sample images in which the t facial feature points have been marked, to obtain the face average model for facial feature points.
4. The electronic device according to claim 1, characterized in that the feature point recognition step further comprises:
aligning the real-time face image with the face average model, and searching the real-time face image, using a feature extraction algorithm, for t facial feature points matching the t facial feature points of the face average model.
5. A facial feature point detection method, characterized in that the method comprises:
a real-time face image acquisition step: capturing a real-time image with a camera device, and extracting a real-time face image from the real-time image using a face recognition algorithm;
a feature point recognition step: inputting the real-time face image into a pre-trained face average model, and identifying t facial feature points from the real-time face image using the face average model.
6. The facial feature point detection method according to claim 5, characterized in that 4 position feature points are marked for each eyeball.
7. The facial feature point detection method according to claim 5, characterized in that the training step of the face average model comprises:
building a sample library of n face sample images, and marking t facial feature points in each face sample image, the t facial feature points comprising position feature points of the eyes, eyebrows, nose, mouth, and facial contour, wherein the position feature points of the eyes include position feature points of the eyeball; and
training a facial feature recognition model with the face sample images in which the t facial feature points have been marked, to obtain the face average model for facial feature points, wherein the facial feature recognition model is the ERT algorithm, whose formula is as follows:
Ŝ^(t+1) = Ŝ^t + τ_t(I, Ŝ^t)
where t is the cascade index and τ_t(·) is the regressor at the current stage. Each regressor consists of many regression trees, Ŝ^t is the shape estimate of the current model, and each regressor τ_t(·) predicts an increment ΔŜ from the input current image I and Ŝ^t. During model training, a subset of the t feature points of each of the n sample images is taken to train the first regression tree; the residual between the predicted values of the first tree and the actual values of that subset of feature points is used to train the second tree, and so on, until the predicted values of the Nth tree are close to the actual values of that subset (residual close to 0). All regression trees of the ERT algorithm are thereby obtained, and from these regression trees the face average model for facial feature points is obtained.
8. The facial feature point detection method according to claim 5, characterized in that the feature point recognition step further comprises:
aligning the real-time face image with the face average model, and searching the real-time face image, using a feature extraction algorithm, for t facial feature points matching the t facial feature points of the face average model.
9. The facial feature point detection method according to claim 8, characterized in that the feature extraction algorithm comprises: the SIFT algorithm, the SURF algorithm, the LBP algorithm, and the HOG algorithm.
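Of the feature extraction algorithms named in the claim above, LBP (Local Binary Patterns) is simple enough to sketch directly. The following is an assumed minimal illustration of the basic 8-neighbour LBP code over a grayscale array, not the patent's implementation; the function name and bit ordering are arbitrary choices.

```python
import numpy as np

def lbp_codes(gray):
    """Basic 8-neighbour Local Binary Pattern codes for the interior
    pixels of a 2-D grayscale array. Each neighbour whose value is >=
    the centre pixel contributes one bit, yielding a code in 0..255.
    """
    g = gray.astype(int)
    h, w = g.shape
    c = g[1:-1, 1:-1]  # centres: interior pixels only
    # neighbours clockwise from top-left; bit weights 1, 2, 4, ..., 128
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]  # shifted neighbour view
        codes |= (nb >= c).astype(int) << bit
    return codes
```

A histogram of such codes over a face patch is what an LBP-based matcher would typically compare; SIFT, SURF, and HOG serve the same descriptive role with different invariance properties.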
10. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a facial feature point detection program which, when executed by a processor, implements the steps of the facial feature point detection method according to any one of claims 5 to 9.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710709109.6A CN107679447A (en) | 2017-08-17 | 2017-08-17 | Facial characteristics point detecting method, device and storage medium |
PCT/CN2017/108750 WO2019033571A1 (en) | 2017-08-17 | 2017-10-31 | Facial feature point detection method, apparatus and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710709109.6A CN107679447A (en) | 2017-08-17 | 2017-08-17 | Facial characteristics point detecting method, device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107679447A true CN107679447A (en) | 2018-02-09 |
Family
ID=61136036
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710709109.6A Pending CN107679447A (en) | 2017-08-17 | 2017-08-17 | Facial characteristics point detecting method, device and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107679447A (en) |
WO (1) | WO2019033571A1 (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108564531A (en) * | 2018-05-08 | 2018-09-21 | 麒麟合盛网络技术股份有限公司 | A kind of image processing method and device |
CN108597074A (en) * | 2018-04-12 | 2018-09-28 | 广东汇泰龙科技有限公司 | A kind of door opening method and system based on face registration Algorithm and face lock |
CN108629278A (en) * | 2018-03-26 | 2018-10-09 | 深圳奥比中光科技有限公司 | The system and method that information security is shown is realized based on depth camera |
CN109117716A (en) * | 2018-06-28 | 2019-01-01 | 众安信息技术服务有限公司 | A kind of makings similarity acquisition methods and device |
CN109255327A (en) * | 2018-09-07 | 2019-01-22 | 北京相貌空间科技有限公司 | Acquisition methods, face's plastic operation evaluation method and the device of face characteristic information |
CN109308584A (en) * | 2018-09-27 | 2019-02-05 | 深圳市乔安科技有限公司 | A kind of noninductive attendance system and method |
CN109376621A (en) * | 2018-09-30 | 2019-02-22 | 北京七鑫易维信息技术有限公司 | A kind of sample data generation method, device and robot |
CN109389069A (en) * | 2018-09-28 | 2019-02-26 | 北京市商汤科技开发有限公司 | Blinkpunkt judgment method and device, electronic equipment and computer storage medium |
CN109657550A (en) * | 2018-11-15 | 2019-04-19 | 中科院微电子研究所昆山分所 | A kind of fatigue strength detection method and device |
CN109886213A (en) * | 2019-02-25 | 2019-06-14 | 湖北亿咖通科技有限公司 | Fatigue state judgment method, electronic equipment and computer readable storage medium |
WO2019223102A1 (en) * | 2018-05-22 | 2019-11-28 | 平安科技(深圳)有限公司 | Method and apparatus for checking validity of identity, terminal device and medium |
CN110610131A (en) * | 2019-08-06 | 2019-12-24 | 平安科技(深圳)有限公司 | Method and device for detecting face motion unit, electronic equipment and storage medium |
CN111839519A (en) * | 2020-05-26 | 2020-10-30 | 合肥工业大学 | Non-contact respiratory frequency monitoring method and system |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109901716B (en) * | 2019-03-04 | 2022-08-26 | 厦门美图之家科技有限公司 | Sight point prediction model establishing method and device and sight point prediction method |
CN111860047A (en) * | 2019-04-26 | 2020-10-30 | 美澳视界(厦门)智能科技有限公司 | Face rapid identification method based on deep learning |
CN112102146B (en) * | 2019-06-18 | 2023-11-03 | 北京陌陌信息技术有限公司 | Face image processing method, device, equipment and computer storage medium |
CN110334643B (en) * | 2019-06-28 | 2023-05-23 | 知鱼智联科技股份有限公司 | Feature evaluation method and device based on face recognition |
CN110516626A (en) * | 2019-08-29 | 2019-11-29 | 上海交通大学 | A kind of Facial symmetry appraisal procedure based on face recognition technology |
CN111191571A (en) * | 2019-12-26 | 2020-05-22 | 新绎健康科技有限公司 | Traditional Chinese medicine facial diagnosis face partitioning method and system based on face feature point detection |
CN112052730B (en) * | 2020-07-30 | 2024-03-29 | 广州市标准化研究院 | 3D dynamic portrait identification monitoring equipment and method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105426867A (en) * | 2015-12-11 | 2016-03-23 | 小米科技有限责任公司 | Face identification verification method and apparatus |
CN105512627A (en) * | 2015-12-03 | 2016-04-20 | 腾讯科技(深圳)有限公司 | Key point positioning method and terminal |
CN106295566A (en) * | 2016-08-10 | 2017-01-04 | 北京小米移动软件有限公司 | Facial expression recognizing method and device |
CN106295602A (en) * | 2016-08-18 | 2017-01-04 | 无锡天脉聚源传媒科技有限公司 | A kind of face identification method and device |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106845327B (en) * | 2015-12-07 | 2019-07-02 | 展讯通信(天津)有限公司 | Training method, face alignment method and the device of face alignment model |
CN106650682B (en) * | 2016-12-29 | 2020-05-01 | Tcl集团股份有限公司 | Face tracking method and device |
2017
- 2017-08-17 CN CN201710709109.6A patent/CN107679447A/en active Pending
- 2017-10-31 WO PCT/CN2017/108750 patent/WO2019033571A1/en active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105512627A (en) * | 2015-12-03 | 2016-04-20 | 腾讯科技(深圳)有限公司 | Key point positioning method and terminal |
CN105426867A (en) * | 2015-12-11 | 2016-03-23 | 小米科技有限责任公司 | Face identification verification method and apparatus |
CN106295566A (en) * | 2016-08-10 | 2017-01-04 | 北京小米移动软件有限公司 | Facial expression recognizing method and device |
CN106295602A (en) * | 2016-08-18 | 2017-01-04 | 无锡天脉聚源传媒科技有限公司 | A kind of face identification method and device |
Non-Patent Citations (6)
Title |
---|
MATTHIAS DANTONE等: "Real-time Facial Feature Detection using Conditional Regression Forests", 《PROCEEDINGS OF THE 2012 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 * |
SHIZHAN ZHU等: "Face Alignment by Coarse-to-Fine Shape Searching", 《 2014 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 * |
VAHID KAZEMI等: "One Millisecond Face Alignment with an Ensemble of Regression Trees", 《PROCEEDINGS OF THE 2014 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 * |
FAN Yuhua (范玉华): "Research on Key Facial Feature Point Localization Algorithms Based on ASM", 《China Excellent Master's Theses Full-text Database, Information Science and Technology Series》 * |
XIE Zhengnan (谢郑楠): "Facial Feature Point Detection Based on Multi-task Feature Selection and Adaptive Models", 《China Excellent Master's Theses Full-text Database, Information Science and Technology Series》 * |
CHEN Liangren (陈良仁): "Research on Facial Attractiveness Computation Based on Deep Convolutional Neural Networks", 《China Excellent Master's Theses Full-text Database, Information Science and Technology Series》 * |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108629278A (en) * | 2018-03-26 | 2018-10-09 | 深圳奥比中光科技有限公司 | The system and method that information security is shown is realized based on depth camera |
CN108597074A (en) * | 2018-04-12 | 2018-09-28 | 广东汇泰龙科技有限公司 | A kind of door opening method and system based on face registration Algorithm and face lock |
CN108564531A (en) * | 2018-05-08 | 2018-09-21 | 麒麟合盛网络技术股份有限公司 | A kind of image processing method and device |
CN108564531B (en) * | 2018-05-08 | 2022-07-08 | 麒麟合盛网络技术股份有限公司 | Image processing method and device |
WO2019223102A1 (en) * | 2018-05-22 | 2019-11-28 | 平安科技(深圳)有限公司 | Method and apparatus for checking validity of identity, terminal device and medium |
CN109117716A (en) * | 2018-06-28 | 2019-01-01 | 众安信息技术服务有限公司 | A kind of makings similarity acquisition methods and device |
CN109255327A (en) * | 2018-09-07 | 2019-01-22 | 北京相貌空间科技有限公司 | Acquisition methods, face's plastic operation evaluation method and the device of face characteristic information |
CN109308584A (en) * | 2018-09-27 | 2019-02-05 | 深圳市乔安科技有限公司 | A kind of noninductive attendance system and method |
CN109389069B (en) * | 2018-09-28 | 2021-01-05 | 北京市商汤科技开发有限公司 | Gaze point determination method and apparatus, electronic device, and computer storage medium |
CN109389069A (en) * | 2018-09-28 | 2019-02-26 | 北京市商汤科技开发有限公司 | Blinkpunkt judgment method and device, electronic equipment and computer storage medium |
US11295474B2 (en) | 2018-09-28 | 2022-04-05 | Beijing Sensetime Technology Development Co., Ltd. | Gaze point determination method and apparatus, electronic device, and computer storage medium |
CN109376621A (en) * | 2018-09-30 | 2019-02-22 | 北京七鑫易维信息技术有限公司 | A kind of sample data generation method, device and robot |
CN109657550A (en) * | 2018-11-15 | 2019-04-19 | 中科院微电子研究所昆山分所 | A kind of fatigue strength detection method and device |
CN109886213B (en) * | 2019-02-25 | 2021-01-08 | 湖北亿咖通科技有限公司 | Fatigue state determination method, electronic device, and computer-readable storage medium |
CN109886213A (en) * | 2019-02-25 | 2019-06-14 | 湖北亿咖通科技有限公司 | Fatigue state judgment method, electronic equipment and computer readable storage medium |
CN110610131A (en) * | 2019-08-06 | 2019-12-24 | 平安科技(深圳)有限公司 | Method and device for detecting face motion unit, electronic equipment and storage medium |
CN110610131B (en) * | 2019-08-06 | 2024-04-09 | 平安科技(深圳)有限公司 | Face movement unit detection method and device, electronic equipment and storage medium |
CN111839519A (en) * | 2020-05-26 | 2020-10-30 | 合肥工业大学 | Non-contact respiratory frequency monitoring method and system |
CN111839519B (en) * | 2020-05-26 | 2021-05-18 | 合肥工业大学 | Non-contact respiratory frequency monitoring method and system |
Also Published As
Publication number | Publication date |
---|---|
WO2019033571A1 (en) | 2019-02-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107679447A (en) | Facial characteristics point detecting method, device and storage medium | |
CN107633204B (en) | Face occlusion detection method, apparatus and storage medium | |
CN107633207B (en) | AU characteristic recognition methods, device and storage medium | |
CN107679448B (en) | Eyeball action-analysing method, device and storage medium | |
Jiang et al. | DeepFood: food image analysis and dietary assessment via deep model | |
CN108460338B (en) | Human body posture estimation method and apparatus, electronic device, storage medium, and program | |
Zeng et al. | MobileDeepPill: A small-footprint mobile deep learning system for recognizing unconstrained pill images | |
CN110738101B (en) | Behavior recognition method, behavior recognition device and computer-readable storage medium | |
CN109388807A (en) | The method, apparatus and storage medium of electronic health record name Entity recognition | |
CN107862292A (en) | Personage's mood analysis method, device and storage medium | |
CN107958230B (en) | Facial expression recognition method and device | |
CN107977633A (en) | Age recognition methods, device and the storage medium of facial image | |
WO2019033573A1 (en) | Facial emotion identification method, apparatus and storage medium | |
CN107633205B (en) | lip motion analysis method, device and storage medium | |
US10489636B2 (en) | Lip movement capturing method and device, and storage medium | |
CN109637664A (en) | A kind of BMI evaluating method, device and computer readable storage medium | |
CN106295591A (en) | Gender identification method based on facial image and device | |
CN108205684A (en) | Image disambiguation method, device, storage medium and electronic equipment | |
CN111401192B (en) | Model training method and related device based on artificial intelligence | |
WO2019033567A1 (en) | Method for capturing eyeball movement, device and storage medium | |
CN111709398A (en) | Image recognition method, and training method and device of image recognition model | |
CN112419326B (en) | Image segmentation data processing method, device, equipment and storage medium | |
CN112489129A (en) | Pose recognition model training method and device, pose recognition method and terminal equipment | |
CN111553838A (en) | Model parameter updating method, device, equipment and storage medium | |
CN108664909A (en) | A kind of auth method and terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
REG | Reference to a national code |
Ref country code: HK Ref legal event code: DE Ref document number: 1246922 Country of ref document: HK |
|
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20180209 |