CN109086711A - Facial Feature Analysis method, apparatus, computer equipment and storage medium - Google Patents
- Publication number
- CN109086711A (application number CN201810844936.0A)
- Authority
- CN
- China
- Prior art keywords
- face
- point
- feature
- region
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to a facial feature analysis method, apparatus, computer equipment and storage medium. The method includes: performing whole-face facial-part feature point detection on a target face image to determine a coarse position region for each facial part in the target face image; performing the corresponding facial-part feature point detection separately on the facial-part image corresponding to each coarse position region to determine a precise position region for each facial part; and performing facial feature analysis according to each precise position region to obtain facial feature information and/or statistical analysis results. This method improves the accuracy of the feature analysis results.
Description
Technical field
The present invention relates to the field of image analysis technology, and in particular to a facial feature analysis method, apparatus, computer equipment and storage medium.
Background art

Facial feature analysis is one of the most critical steps in face recognition: before a target face image can be identified, the facial features in the image must first be analyzed. The quality of the facial feature analysis directly determines the effectiveness of the recognition.

However, in current facial feature analysis approaches, the face alignment algorithm detects facial feature points at the scale of the whole face. This approach is easily disturbed by irrelevant regions, which affects the accuracy of the feature analysis results.
Summary of the invention
In view of the above technical problems, it is necessary to provide a facial feature analysis method, apparatus, computer equipment and storage medium capable of improving the accuracy of feature analysis results.

A facial feature analysis method, the method comprising: performing whole-face facial-part feature point detection on a target face image to determine a coarse position region for each facial part in the target face image; performing the corresponding facial-part feature point detection separately on the facial-part image corresponding to each coarse position region, to determine a precise position region for each facial part; and performing facial feature analysis according to each precise position region, to obtain facial feature information and/or statistical analysis results.
A facial feature analysis apparatus, the apparatus comprising:

a first region detection module, configured to determine a coarse position region for each facial part in a target face image by performing whole-face facial-part feature point detection on the target face image;

a second region detection module, configured to determine a precise position region for each facial part by performing the corresponding facial-part feature point detection separately on the facial-part image corresponding to each coarse position region; and

a feature analysis module, configured to perform facial feature analysis according to each precise position region, to obtain facial feature information and/or statistical analysis results.
A computer device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the following steps: performing whole-face facial-part feature point detection on a target face image to determine a coarse position region for each facial part in the target face image; performing the corresponding facial-part feature point detection separately on the facial-part image corresponding to each coarse position region, to determine a precise position region for each facial part; and performing facial feature analysis according to each precise position region, to obtain facial feature information and/or statistical analysis results.
A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the following steps: performing whole-face facial-part feature point detection on a target face image to determine a coarse position region for each facial part in the target face image; performing the corresponding facial-part feature point detection separately on the facial-part image corresponding to each coarse position region, to determine a precise position region for each facial part; and performing facial feature analysis according to each precise position region, to obtain facial feature information and/or statistical analysis results.
With the above facial feature analysis method, apparatus, computer equipment and storage medium, whole-face facial-part feature point detection is performed on a target face image to determine a coarse position region for each facial part in the target face image; the corresponding facial-part feature point detection is then performed separately on the facial-part image corresponding to each coarse position region to determine a precise position region for each facial part; and facial feature analysis is performed according to each precise position region to obtain facial feature information and/or statistical analysis results. In this scheme, after the approximate location of each facial part (i.e., its coarse position region) has been located, fine-grained feature point detection at the facial-part scale is performed on the facial-part image corresponding to each coarse position region, so that the facial-part region (the precise position region) can be confirmed more precisely. This reduces the interference of irrelevant regions or noise on the subsequent feature extraction from each facial-part image, and improves the accuracy of the feature analysis results.
Brief description of the drawings
Fig. 1 is a schematic diagram of the internal structure of a terminal in one embodiment;

Fig. 2 is a flow diagram of a facial feature analysis method in one embodiment;

Fig. 3 is a flow diagram of a facial feature analysis method in another embodiment;

Fig. 4 is a flow diagram of the step of performing facial feature analysis according to each precise position region in one embodiment;

Fig. 5 is a schematic diagram of the structure and principle of the facial feature analysis apparatus in one embodiment;

Fig. 6 is a schematic diagram of the structure and principle of the facial feature point detector in one embodiment;

Fig. 7 is a flow diagram of determining the initial shape with a heuristic feature point initialization method in one embodiment;

Fig. 8 is a schematic diagram of the structure and principle of the eye feature point detector and the mouth feature point detector in one embodiment;

Fig. 9 is a schematic diagram of the training and testing flow of the eyebrow, nose and ear feature point detectors in one embodiment;

Fig. 10 illustrates the principle of omnidirectional image feature extraction using gray-level co-occurrence matrices in one embodiment;

Fig. 11 is a structural block diagram of the facial feature analysis apparatus in one embodiment;

Fig. 12 is an internal structure diagram of the computer device in one embodiment.
Detailed description of the embodiments

To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are intended only to explain the present invention, not to limit it.

It should be noted that the terms "first", "second" and "third" in the specification, claims and drawings of the present invention are used to distinguish similar objects, not to describe a specific order or precedence. It should be understood that data so described are interchangeable where appropriate, so that the embodiments of the present invention described herein can be implemented in orders other than those illustrated or described herein. In addition, the term "or/and" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A or/and B" can mean that A exists alone, that A and B exist simultaneously, or that B exists alone. The character "/" herein generally indicates an "or" relationship between the preceding and following objects.
The facial feature analysis method provided by the present invention can be applied to a terminal as shown in Fig. 1. The terminal includes a processor, a non-volatile storage medium, a network interface, an internal memory and an input device connected through a system bus. The non-volatile storage medium of the terminal stores an operating system and a facial feature analysis apparatus, which is used to implement a facial feature analysis method. The processor provides computing and control capability and supports the operation of the whole terminal. The internal memory of the terminal provides an environment for the operation of the apparatus stored in the non-volatile storage medium, and the network interface is used to communicate with a server or other terminals; for example, when the terminal responds to a click operation, it can generate a control command and send it to the server or other terminals. Specifically, the facial feature analysis apparatus of the terminal can perform whole-face facial-part feature point detection on a target face image to determine the coarse position region of each facial part in the target face image; perform the corresponding facial-part feature point detection separately on the facial-part image corresponding to each coarse position region to determine the precise position region of each facial part; and perform facial feature analysis according to each precise position region to obtain facial feature information and/or statistical analysis results. The terminal can be, but is not limited to, a personal computer, a laptop, a smart phone, a tablet computer or a portable wearable device. It should be noted that Fig. 1 only provides one application example of the facial feature analysis method of the present invention; the method can also be applied to a server, which can be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in Fig. 2, a facial feature analysis method is provided. Taking its application to the terminal of Fig. 1 as an example, the method includes the following steps:

Step S201: determining the coarse position region of each facial part in a target face image by performing whole-face facial-part feature point detection on the target face image.

Here, the target face image is an image that includes a face. The facial-part feature points may include any one or a combination of eye feature points, eyebrow feature points, mouth feature points, nose feature points and ear feature points, and may also include face contour feature points.

This step performs facial-part feature point detection at the scale of the whole face, roughly determining the position of each facial part; it is coarse-grained facial-part feature point detection carried out at the face scale.
Step S202: determining the precise position region of each facial part by performing the corresponding facial-part feature point detection separately on the facial-part image corresponding to each coarse position region.

For example, eye feature point detection is performed on the facial-part image corresponding to the coarse position region of the eyes; ear feature point detection on the facial-part image corresponding to the coarse position region of the ears; eyebrow feature point detection on the facial-part image corresponding to the coarse position region of the eyebrows; mouth feature point detection on the facial-part image corresponding to the coarse position region of the mouth; and nose feature point detection on the facial-part image corresponding to the coarse position region of the nose.

This step performs key feature point detection at the scale of the individual facial part; it is fine-grained facial-part feature point detection.
Step S203: performing facial feature analysis according to each precise position region, to obtain facial feature information and/or statistical analysis results.

Specifically, a face region can be determined according to the precise position region of each facial part, and facial feature statistics can be determined according to the pixel information of that face region.
In the above facial feature analysis method, whole-face facial-part feature point detection is performed on the target face image to determine the coarse position region of each facial part in the target face image; the corresponding facial-part feature point detection is performed separately on the facial-part image corresponding to each coarse position region to determine the precise position region of each facial part; and facial feature analysis is performed according to each precise position region to obtain facial feature information and/or statistical analysis results. In this scheme, after the approximate location of each facial part (i.e., its coarse position region) has been located, fine-grained feature point detection at the facial-part scale is performed on the facial-part image corresponding to each coarse position region, so that the facial-part region (the precise position region) can be confirmed more precisely. This reduces the interference of irrelevant regions or noise on the subsequent feature extraction from each facial-part image and improves the accuracy of the feature analysis results. It also enables comprehensive extraction and analysis of multi-scale facial features at the face scale, at the facial-part scale, and even for any local facial region.

It should be noted that "coarse" and "precise" in this embodiment are used only to distinguish the relative precision of the corresponding position regions, and do not limit the precision to specific values.
In one of the embodiments, performing the corresponding facial-part feature point detection separately on the facial-part image corresponding to each coarse position region to determine the precise position region of each facial part may include: inputting the facial-part image corresponding to each coarse position region, according to the class of the corresponding facial part, into a different facial-part feature point detector for the corresponding feature point detection, and obtaining the precise position region of the corresponding facial part output by each detector.

In this embodiment, the facial-part image corresponding to each coarse position region is input to a different facial-part feature point detector, so each detector can be configured separately to meet different detection requirements, improving flexibility.
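The routing of coarse regions to per-part detectors described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the region format `(x, y, w, h)` and the detector callables are placeholders for the trained models.

```python
import numpy as np

def crop_region(image, region):
    """Crop a coarse position region (x, y, w, h) from the image,
    clipped to the image bounds."""
    x, y, w, h = region
    H, W = image.shape[:2]
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(W, x + w), min(H, y + h)
    return image[y0:y1, x0:x1]

def detect_precise_regions(image, coarse_regions, detectors):
    """Dispatch each facial-part image to the detector registered for its
    part class; each detector returns fine-grained feature points in the
    crop's coordinates, which are mapped back to the full image to give
    the precise position region (bounding box of the points)."""
    precise = {}
    for part, region in coarse_regions.items():
        part_image = crop_region(image, region)
        landmarks = detectors[part](part_image)   # fine-grained points (N, 2)
        xs, ys = landmarks[:, 0], landmarks[:, 1]
        x, y = region[0], region[1]               # offset back to full image
        precise[part] = (x + xs.min(), y + ys.min(),
                         xs.max() - xs.min(), ys.max() - ys.min())
    return precise
```

Each entry of `detectors` would be, for example, the CNN+DBN model for the eyes and mouth or the AAM-based model for the eyebrows, nose and ears; any callable returning an `(N, 2)` point array fits this interface.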
In one of the embodiments, considering that different facial parts differ in how much their pose and expression vary, different types of facial-part feature point detectors can be used according to these variations. For example, for facial parts with rich pose variation, such as the eyes and mouth, a facial-part feature point detector matched to such variation can be chosen; for facial parts with relatively fixed shape, such as the eyebrows, nose and ears, a detector matched to relatively fixed shapes can be chosen.

Specifically, the facial-part feature point detectors may include an eye feature point detector, an eyebrow feature point detector, a mouth feature point detector, a nose feature point detector and an ear feature point detector. The eye feature point detector and the mouth feature point detector use a preset facial-part feature point detection model of a first type, while the eyebrow, nose and ear feature point detectors use a preset facial-part feature point detection model of a second type.
The facial-part feature point detection model of the first type can be a two-stage neural network feature point detection model combining a convolutional neural network (CNN) and a deep belief network (DBN), so as to better model rich feature point shape variation. The facial-part feature point detection model of the second type can be an active appearance model (AAM) based on shape and texture information, which models the feature point positions so as to detect key feature points efficiently.
In one of the embodiments, the two-stage neural network detection model includes a first-level convolutional neural network and a second-level deep belief network. The first-level convolutional neural network learns the mapping from the original face picture to an initial shape. The second level provides, for each feature point, a deep belief network that fits the correction from the feature point's initial position to its final position. The first-level convolutional neural network includes two convolutional layers, two max-pooling layers and one fully connected layer, with ReLU as the activation function. Each feature point performs position correction using a deep belief network with three hidden layers; in each deep belief network only the last layer is fully connected, and the hidden layers before it are restricted Boltzmann machines. Each restricted Boltzmann machine is pre-trained layer by layer with an unsupervised maximum likelihood method, and the result of the layer-by-layer pre-training is fine-tuned through the final fully connected layer.

With the scheme of this embodiment, the restricted Boltzmann machine layers corresponding to all feature points can be trained in parallel, which improves training efficiency. The neural networks used in the first-type facial-part feature point detection model can be built and trained with Keras.
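The layer-by-layer unsupervised pre-training of the restricted Boltzmann machines can be sketched in plain NumPy. The standard practical approximation to the maximum-likelihood gradient is one-step contrastive divergence (CD-1); the dimensions and hyperparameters below are illustrative assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli-Bernoulli restricted Boltzmann machine trained with CD-1."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        """One contrastive-divergence update on a batch of visible vectors."""
        h0 = self.hidden_probs(v0)
        h_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h_sample)        # one-step reconstruction
        h1 = self.hidden_probs(v1)
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)

def pretrain_stack(data, layer_sizes, epochs=10):
    """Greedy layer-by-layer pre-training: each RBM is trained on the
    hidden activations of the previous one."""
    rbms, x = [], data
    for n_hidden in layer_sizes:
        rbm = RBM(x.shape[1], n_hidden)
        for _ in range(epochs):
            rbm.cd1_step(x)
        rbms.append(rbm)
        x = rbm.hidden_probs(x)
    return rbms
```

Because each feature point owns an independent stack like this, all stacks can be pre-trained in parallel; only the final fully connected layer is then fine-tuned with supervision.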
In one embodiment, as shown in Fig. 3, a facial feature analysis method is provided. Taking its application to the computer device of Fig. 1 as an example, the method includes the following steps:

Step S301: performing face angle detection on the target face image to obtain a face angle value.

Step S302: selecting a facial feature detector matched to the face angle value.

Specifically, multiple facial feature detectors, each trained to detect face images of a different angle range, can be preset, with each detector associated with a different face angle range. For example, (-15°, 15°) is associated with a first facial feature detector, [-60°, -15°] and [15°, 60°] with a second facial feature detector, and (60°, 90°] and [-90°, -60°) with a third facial feature detector. Here "( )" denotes an open interval, "[ ]" a closed interval, and "( ]" and "[ )" half-open intervals. The way the face angle ranges are divided and the number of divisions are not limited to this example.

After the face angle value is obtained, the face angle range to which it belongs is determined, and the facial feature detector associated with that range is looked up.
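The interval lookup described above can be sketched as follows. The ranges mirror the example in the text, including its open/closed endpoints; the detector names are placeholders for the trained detectors.

```python
def select_detector(angle):
    """Map a face angle value (degrees) to the associated facial feature
    detector, following the example ranges:
      (-15, 15)              -> first detector
      [-60, -15] or [15, 60] -> second detector
      (60, 90] or [-90, -60) -> third detector
    """
    if -15 < angle < 15:
        return "frontal_detector"
    if -60 <= angle <= -15 or 15 <= angle <= 60:
        return "half_profile_detector"
    if 60 < angle <= 90 or -90 <= angle < -60:
        return "profile_detector"
    raise ValueError(f"angle {angle} outside supported range [-90, 90]")
```

Note that the boundary values ±15° and ±60° fall in the closed intervals of the second detector, so every angle in [-90°, 90°] maps to exactly one detector.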
Step S303: inputting the target face image into the selected facial feature detector, and obtaining the coarse position region of each facial part in the target face image output by the selected detector.

Step S304: determining the precise position region of each facial part by performing the corresponding facial-part feature point detection separately on the facial-part image corresponding to each coarse position region.

Step S305: performing facial feature analysis according to each precise position region, to obtain facial feature information and/or statistical analysis results.
In this embodiment, before facial feature detection is performed, face angle detection is first carried out on the target face image to obtain a face angle value, a facial feature detector matched to the face angle value is selected, and the target face image is input into the selected detector. Since different facial feature detectors are provided for different face angle ranges, each detector can be made more targeted, further improving the accuracy of the detection results.

In one of the embodiments, performing face angle detection on the target face image to obtain a face angle value may include: inputting the target face image into a preset multi-pose face detection model for face angle detection, and obtaining the face angle value output by the model. The multi-pose face detection model includes multiple face classifiers for different face angles.
In one of the embodiments, the facial feature detector is generally a cascaded shape regressor, which is an ensemble of two levels. The cascaded shape regressor takes an initial shape as input and completes the correction of the initial shape through a cascade of multiple weak regressors, obtaining the final feature point shape. The local texture features of the feature points are fitted using multiple random forests; the local texture feature vector of each feature point corresponds to one random forest, and each value in the feature vector is computed as the mode of one random tree. The occlusion state of the feature points is learned through shallow-model logistic regression, and the occlusion states of all feature points are described by a unified binary feature vector.

If the occlusion state vector indicates that the current feature point is occluded, the local texture features of that feature point are not used to fit the correction of the feature point positions. The facial feature point detector used in this embodiment fits the local variation of the feature point position correction through an xgboost regression forest, using the texture information and occlusion states of all feature points. The xgboost regression forest uses a loss function with multiple regularization terms, which balances regression tree structural complexity against regression accuracy, controls the amplitude and precision of the feature point position correction, and improves the convergence speed of the feature point detection algorithm. The machine learning regressors used in the cascaded shape regressor are implemented with the sklearn standard machine learning library.
Since a cascaded shape regressor is used, an initial shape (i.e., an initial sequence of feature point positions) must first be supplied. The traditional approach is to randomly generate an initial shape within the face region as input, but research shows that this easily makes the final feature point detection results unstable: the quality of the detection result often depends on the quality of the initial shape. For this reason, this embodiment proposes a method for determining the initial shape input to the cascaded shape regressor. The determination process includes:

First, loading the average shape information obtained from the training samples, and determining an initial feature point and its position according to the average shape information.

Here, the average shape information includes the normalized relative positions of the first annotated feature point with respect to the four endpoints of the face region, and the relative position information between the first feature point and every other feature point (described by a triple of normalized relative distance, relative angle and pixel difference).

Second, using the position of the initial feature point and the relative information between the initial feature point and each other feature point, determining the positions of the other feature points one by one. Here, the other feature points are the feature points except the first one.

Finally, determining the feature point position sequence according to the position of the initial feature point and the positions of the other feature points, and taking this sequence as the initial shape.

With the scheme of this embodiment, a more reasonable initial shape can be generated, improving the stability of the detection results.
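The heuristic initialization above can be sketched as follows: the first point is placed from its normalized offsets within the face box, and every other point is placed from its stored (distance, angle) pair relative to the first point. The exact encoding of the average shape information is an assumption; the pixel-difference term of the patent's triple is omitted here since it does not affect the geometry.

```python
import math

def initial_shape(face_box, avg_shape):
    """face_box: (x, y, w, h) of the detected face region.
    avg_shape: {'first': (rx, ry) normalized position of the first point
               inside the box, 'rel': [(dist, angle), ...] for the other
               points, with dist normalized by the box width and angle
               in radians relative to the first point}."""
    x, y, w, h = face_box
    rx, ry = avg_shape["first"]
    x0, y0 = x + rx * w, y + ry * h           # first (initial) feature point
    points = [(x0, y0)]
    for dist, angle in avg_shape["rel"]:      # place the remaining points
        points.append((x0 + dist * w * math.cos(angle),
                       y0 + dist * w * math.sin(angle)))
    return points                             # the initial shape sequence
```

Because the offsets are averages over the training samples, the generated shape starts near the true configuration, which is what stabilizes the subsequent cascaded regression.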
In one of the embodiments, as shown in Fig. 4, performing facial feature analysis according to each precise position region to obtain facial feature information and/or statistical analysis results may include:
Step S401: determining a face region according to each precise position region, the face region including the precise position regions of all facial parts or of part of the facial parts.

Specifically, a face region is determined according to the precise position regions; this face region may contain the information of the whole face, or only part of it (for example, only the eye region, or the nose region and mouth region together).
Step S402: determining a gray-level co-occurrence matrix according to the pixel information of the face region.

Further, feature matrix templates of different angles can be used in the calculation, to increase the diversity of the features and improve the matching ability of the matrix template.
Step S403: determining the statistics of the gray-level co-occurrence matrix according to the matrix and multiple preset operators.

The types of these statistics can be chosen according to actual needs. In one of the embodiments, ten statistics are used, including contrast, energy, entropy, inverse variance, correlation, homogeneity, dissimilarity, mean and variance; more or fewer types may be used as needed.
Step S404: extracting image features of preset types according to the statistics, converting the extracted image features into image feature vectors, and saving the image feature vectors in the form of tag files, or performing statistical analysis on the image feature vectors according to rules to obtain statistical results and visualizing the results.

Comprehensive feature extraction can be performed on the face or the facial parts according to the ten types of statistics above, and the extraction based on the statistics can be carried out in any feasible way. For example, gray-level contrast can describe the depth of texture grooves, the correlation of region pixels can describe the local variation of facial color, and gray-level homogeneity can describe the smoothness of the cheeks; these are not repeated one by one here.
In addition, in order to display the image features of the target face image intuitively, the extracted image features can also be shown graphically after statistical analysis. The chart type can be, but is not limited to, a bar chart, pie chart, line chart, network diagram or histogram.

In this embodiment, the comprehensive feature extraction and analysis of the face or facial parts is based on gray-level co-occurrence matrices, which overcomes the previous difficulty of having to design a unique feature extraction algorithm specifically for each kind of image feature.
To facilitate the understanding of the present solution, the solution of the present invention is explained in detail below with a preferred embodiment. In this embodiment, the description takes as an example a facial feature analysis apparatus implemented according to the facial feature analysis method of the present invention.
The facial feature analysis apparatus in this embodiment can perform multi-scale facial feature point detection on faces at diverse angles, and automatically carry out comprehensive facial feature extraction and analysis at multiple levels. The apparatus first detects faces at different angles in a picture (equivalent to the above-mentioned target face image) to determine the face regions, then locates and extracts the feature points of the facial parts at multiple scales according to the face angle in each region, and finally, using the precise region information determined by the facial feature points, extracts multiple types of image features with a unified feature matrix template, thereby realizing an all-around analysis of human facial features.
The facial feature analysis apparatus in this embodiment mainly completes the comprehensive analysis of facial features in three stages. The core functions of the first two stages are realized by reusable models trained with machine learning or deep learning, while the last stage realizes the comprehensive extraction and analysis of image features through traditional image processing strategies. The apparatus first needs to detect faces in the picture; at this stage, multiple face classifiers, each trained specifically to detect faces at a different angle, are used to find faces of different angles and sizes in the image. In the facial feature point detection stage, these faces are sent to different facial feature point detectors according to their own characteristics for the extraction of key facial feature points. The positions of the facial parts within the face are determined using these coarse-grained feature points, and the facial part regions, normalized by resolution level, are then sent to the corresponding facial feature point detectors for the extraction of fine-grained key feature points. With feature points at different scales, feature extraction and analysis of the whole face region or of different local regions can be realized, and independent analysis of the characteristics of different facial parts can also be realized at the fine-grained scale. Different feature point detectors use detection models specially trained with different algorithms according to their detection scales. In order to extract and analyze different types of image features, for all face regions or facial part regions determined by the feature points, different statistics of the gray-level co-occurrence matrix are used to comprehensively calculate different types of image feature information, realizing the comprehensive analysis and statistics of facial features at different levels.
Most traditional image feature extraction algorithms lack a fully automatic process as support. To complete the feature extraction of certain objects in an image, the image must first undergo a series of preprocessing steps including filtering and de-centering, and the specific locations of the corresponding objects in the image must also be detected. The separation between the image feature extraction operation and the image processing operations that support it brings great inconvenience to users, who sometimes even need to spend a great deal of time designing a reasonable combination of operations to complete efficient object feature extraction. By integrating the operations from face image preprocessing to the comprehensive extraction of facial features, this embodiment proposes an effective, fully automatic facial feature analysis scheme, which simplifies the overall flow of face image feature extraction, provides users with a convenient, unified interface, and reduces the development burden of related programs.
To complete the automatic extraction and analysis of facial features, face detection is first performed on the image. The scope of application of existing face detection algorithms is relatively narrow: some are only applicable to frontal face detection, others only to pure profile detection, and classifiers for multi-angle faces perform poorly. Therefore, the face detection stage of this embodiment combines multiple face classifiers for different angles; by integrating the advantages of the different angle-specific classifiers, a unified multi-angle face detector group is constructed, which easily detects multi-angle faces in an image and broadens the applicable scope of existing face detection algorithms.
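One way such a detector group can be organized is sketched below. This is a plain-Python illustration under stated assumptions: the stub detectors stand in for trained angle-specific classifiers, and the overlap-merging rule is an assumed strategy, not one the patent specifies.

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def detector_group(detectors):
    """Combine several angle-specific face detectors into one multi-angle detector.
    Each detector is a callable: image -> list of (x, y, w, h) boxes."""
    def detect(image):
        boxes = []
        for detect_one in detectors:
            for box in detect_one(image):
                # keep a box only if it does not heavily overlap one already kept,
                # so duplicate hits from different angle classifiers are merged
                if all(iou(box, kept) < 0.5 for kept in boxes):
                    boxes.append(box)
        return boxes
    return detect

# stub angle-specific detectors standing in for trained classifiers
frontal = lambda img: [(10, 10, 20, 20)]
profile = lambda img: [(11, 11, 20, 20), (50, 50, 20, 20)]  # first box duplicates frontal's
multi_angle = detector_group([frontal, profile])
boxes = multi_angle(None)  # two boxes survive: the duplicate is merged away
```

The point of the grouping is that each classifier contributes only the faces it is good at, while the merge step prevents the same face from being reported twice by classifiers with overlapping angular ranges.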
After the specific positions of faces are determined by face detection, this embodiment performs multi-scale key facial feature point detection in each face region. Existing facial feature point detection algorithms only consider coarse-grained feature point detection at the face scale, where the feature information of the entire face is used for calculation; as a result, the number of feature points in a local facial region is too small, which hampers the subsequent feature extraction for a specific region. Therefore, on the basis of facial feature point detection, this embodiment first determines the approximate region of each facial part using the detected coarse-grained feature points, then performs fine-grained feature point detection at the facial part scale within these regions to further confirm the precise region of each facial part. This reduces the interference of irrelevant regions or noise with the subsequent image feature extraction for each facial part, and realizes the comprehensive extraction and analysis of multi-scale facial features at the face scale, the facial part scale, and even for any local face region.
This embodiment calculates different types of image features with a unified feature matrix template, namely the gray-level co-occurrence matrix, overcoming the previous difficulty of having to design a dedicated feature extraction algorithm for each kind of image feature. Unifying the calculation of multiple image features on the same feature matrix template also greatly reduces the overhead of repeatedly computing pixel information: when different types of image features are calculated for the same picture, the calculation does not need to start again from the underlying pixel information. The underlying pixel information of a picture only needs to be computed into one gray-level co-occurrence matrix, and the extraction and analysis of multiple types of image features can then be realized by calculating different statistics of that matrix. Moreover, since the gray-level co-occurrence matrix has a large number of different statistics, the facial image feature extraction method of the present invention is broader and more comprehensive than designing a separate extraction method for each kind of image feature, and provides more comprehensive feature information for the analysis and statistics of image features.
The facial feature analysis apparatus in this embodiment is described in detail below. Fig. 5 shows the composition and schematic principle of the facial feature analysis apparatus in the embodiment.
As shown in Fig. 5, the facial feature analysis performed by the apparatus consists of three main flows: a multi-angle face detection stage, a multi-scale facial feature point detection stage, and a comprehensive image feature extraction and analysis stage. In the multi-angle face detection stage, an effective multi-angle face detector group is constructed by training multiple face classifiers, each for detecting faces at a different angle, realizing the accurate recognition of most faces.
In the multi-scale facial feature point detection stage, key facial feature points of different granularities are detected at two scale levels. First, the faces detected in the previous stage are grouped into three face data sets according to their angular ranges, and sent to facial feature point detectors for the three angles for key feature point detection. Each angle-specific facial feature point detector is an improved cascaded shape regressor trained with a different face data set; before training, all faces in the data set are translated and aligned to the face orientation of the first face photo, which reduces interference with the initialization of feature point positions during training. By comprehensively considering the local texture information of the feature points, their occlusion states, and the pixel difference information between feature points, the facial feature point detector proposed by the present invention determines the positions of key feature points in the face more accurately than traditional detectors based purely on shape information. The coarse-grained feature point detection at the face scale roughly determines the position and region of each facial part on the face; after alignment, these facial part regions are sent to the corresponding fine-grained feature point detectors, which obtain more accurate feature point detection results and provide face information at different scales for the subsequent feature extraction operations. Different facial parts use different feature point detectors according to how their posture and expression vary. For example, the eyes and mouth, whose posture and shape change richly, use a two-stage neural network feature point detector combining a convolutional neural network (CNN) and a deep belief network (DBN), to better model the rich shape variations of the feature points; the eyebrows, nose, and ears, whose shapes are relatively fixed, use a traditional active appearance model (AAM) based on shape and texture information to model the feature point positions and detect the key feature points efficiently.
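The routing of facial parts to detector families described above amounts to a simple dispatch table. The sketch below is illustrative only; the part names and detector labels are placeholders, not identifiers from the patent.

```python
# Map each facial part to the detector family the text describes:
# parts with rich pose/expression variation get the two-stage CNN+DBN detector,
# parts with relatively fixed shape get the active appearance model (AAM).
DETECTOR_FOR_PART = {
    "eye":     "cnn+dbn",
    "mouth":   "cnn+dbn",
    "eyebrow": "aam",
    "nose":    "aam",
    "ear":     "aam",
}

def pick_detector(part):
    """Return the detector family for a given facial part."""
    return DETECTOR_FOR_PART[part]
```

In a full system each label would resolve to a trained detector object; the table makes the selection rule explicit and easy to extend with new parts.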
Using the obtained feature point information at different scales, the present invention finally performs comprehensive feature extraction and analysis of the face or facial parts through a unified multi-angle gray-level co-occurrence matrix. For any precise face region determined by the feature points, a gray-level co-occurrence matrix is first calculated for the region, and different types of facial features are then calculated through the different statistics of the matrix, completing the comprehensive facial feature extraction. The obtained feature information of the face or facial parts can be visualized for different kinds of features using the feature visualization interface of OpenCV, statistically analyzed using the data analysis interface provided by matplotlib, or written to disk as a standard feature vector file for the entire face data set, for use by other modules.
Fig. 6 shows the composition and schematic principle of the facial feature point detector. In this embodiment, an improved cascaded shape regressor based on local binary features is used for key facial feature point detection. The regressor is an ensemble at two levels: it takes an initial shape as input, then completes the shape correction through a cascade of multiple weak regressors, and finally obtains accurate feature point positions. Each weak regressor learns only part of the corrective change from the initial feature point positions to the final ones; through the repeated corrections of multiple weak regressors, a relatively reasonable final feature point shape is obtained. A weak regressor models the increment of the feature point position correction by constructing local texture features and occlusion state information for each feature point. The local texture features of the feature points are fitted using a series of random forests: the local texture feature vector of each feature point corresponds to one random forest, and the value of each element of the feature vector is calculated by one random tree. Since texture features are usually discrete information, tree models can fit such features well. In addition, the occlusion state information of the feature points is learned by logistic regression, a relatively simple shallow model, and the occlusion states of all feature points are described by a unified binary feature vector; if the occlusion state vector indicates that the current feature point is occluded, the local texture features of that point are not used for fitting the position correction. In this embodiment, the facial feature point detector fits the local corrective changes of the feature point positions with an xgboost regression forest over the texture information and occlusion states of all feature points. The xgboost regression forest uses a loss function with multiple regularization terms added, which balances the structural complexity and accuracy of the regression trees well, controls the amplitude and precision of the feature point position corrections, and improves the convergence speed of the feature point detection algorithm. The machine learning regressors used in the cascaded shape regressor are implemented through the sklearn standard machine learning library.
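The additive structure of the cascade described above can be sketched in a few lines. This is a toy illustration under stated assumptions: the "weak regressors" here are hand-made functions that each move the shape a fixed fraction of the remaining way toward a known target, standing in for the trained tree-based regressors of the patent.

```python
import numpy as np

def cascade_regress(initial_shape, weak_regressors, image_features):
    """Apply weak regressors in sequence; each contributes a partial correction
    that is added to the current shape estimate."""
    shape = initial_shape.copy()
    for regress in weak_regressors:
        shape = shape + regress(shape, image_features)  # additive shape update
    return shape

# Toy weak regressor: moves the shape 40% of the remaining way to a target,
# mimicking how every stage learns only part of the total correction.
target = np.array([[30.0, 40.0], [60.0, 45.0]])

def make_weak(step=0.4):
    return lambda shape, feats: step * (target - shape)

init = np.zeros((2, 2))  # two feature points, both starting at the origin
final = cascade_regress(init, [make_weak() for _ in range(10)], None)
```

After ten stages the residual error shrinks geometrically (by a factor of 0.6 per stage in this toy), which is the intuition behind obtaining a "relatively reasonable final feature point shape" from many small corrections.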
Before the cascaded shape regressor is used for facial feature point detection, an initial shape (i.e., an initial sequence of feature point positions) must first be input. In the past, an initial shape was usually generated at random within the face region, but studies show that this method easily makes the final feature point detection results unstable, and the quality of the results often depends on the quality of the initial shape. Therefore, in this embodiment, a heuristic feature point initialization method is proposed, which uses prior knowledge of the feature points in the training set to generate a more reasonable initial shape as guidance, so as to improve the stability of the feature point detection results. The process of this method is shown in Fig. 7. Before executing this method, average shape information closely related to the feature point positions must first be calculated in the training stage; the average shape information includes the normalized relative positions of the first marked feature point and the four endpoints of the face region, and the relative position information between the first feature point and each other feature point (described by a triple of normalized relative distance, relative angle, and pixel difference). Using the average shape information, this method first determines the position of the initial feature point (the first feature point) on the face region, then determines the initial positions of all the other feature points one by one according to the position information of the initial feature point relative to them, and inputs the obtained initial feature point position sequence into the facial feature point detector as the initial shape for shape correction.
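The initialization steps above can be sketched as follows. This is a minimal sketch under stated assumptions: the pixel-difference component of the patent's (distance, angle, pixel difference) triple is omitted, distances are assumed to be normalized by face width, and all names are illustrative.

```python
import numpy as np

def initial_shape(face_box, first_rel, offsets):
    """Heuristic feature point initialization.
    face_box: (x, y, w, h) of the detected face region;
    first_rel: average position of the first marked feature point,
               normalized to the face box;
    offsets: per-point (normalized distance, angle) relative to the first point."""
    x, y, w, h = face_box
    # step 1: place the first feature point from its normalized prior position
    first = np.array([x + first_rel[0] * w, y + first_rel[1] * h])
    points = [first]
    # step 2: place the remaining points one by one from their relative offsets
    for dist, angle in offsets:
        d = dist * w
        points.append(first + d * np.array([np.cos(angle), np.sin(angle)]))
    return np.array(points)

shape = initial_shape((100, 100, 50, 50), (0.3, 0.4),
                      [(0.2, 0.0), (0.2, np.pi / 2)])
```

The returned position sequence would then be fed into the cascaded shape regressor as its initial shape, replacing the unstable random initialization.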
Figs. 8 and 9 are schematic diagrams of the fine-grained facial feature point detectors, where Fig. 8 represents the structure of the eye feature point detector and the mouth feature point detector, and Fig. 9 represents the training and detection process of the eyebrow, nose, and ear feature point detectors.
Since the eyes and mouth have richer posture and feature point shape variations with expression, their feature point detectors use deep neural networks with stronger generality to fit the initial shape and the correction process of the feature points, respectively. The eye and mouth feature point detectors each consist of neural networks at two levels: the first level is a convolutional neural network, and the second level is a deep belief network. The convolutional neural network of the first level mainly learns the mapping from the original face picture to the initial shape, while the second level provides a DBN for each feature point to fit the corrective change from its initial position to its final position. Since a facial part region often contains little pixel information, the structure of the first-level convolutional neural network in this embodiment is relatively simple and shallow, containing only two convolutional layers, two max pooling layers, and one fully connected layer. To mitigate the vanishing gradient problem and accelerate convergence, the rectified linear unit (ReLU) is used uniformly as the activation function, and before each activation a local response normalization is applied that shifts the distribution of the activation inputs toward a normal distribution; the final fully connected layer outputs the position coordinates of each initial feature point. In addition, because the corrective change of each feature point is only related to the pixel information near that point and its positions relative to the other feature points, each feature point uses a deep belief network containing only three hidden layers for its position correction. In all the deep belief networks, only the last layer is a fully connected layer, and all preceding hidden layers are restricted Boltzmann machines; these Boltzmann machines can be pre-trained layer by layer with the maximum likelihood method of unsupervised learning, the final result only needs to be fine-tuned through the last fully connected layer, and the Boltzmann machine layers corresponding to all feature points can be trained in parallel to improve training efficiency. The neural networks used in the eye and mouth feature point detectors are likewise built and trained with keras, where keras is a high-level neural network API (Application Programming Interface).
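The activation scheme mentioned above (a local response normalization applied before each ReLU) can be sketched in numpy. This is only an illustration: the patent's networks are built with keras, and the AlexNet-style cross-channel normalization and its constants used here are assumptions, not parameters the patent specifies.

```python
import numpy as np

def local_response_norm(x, k=2.0, n=5, alpha=1e-4, beta=0.75):
    """Local response normalization across channels (axis 0): each channel is
    divided by a term that grows with the squared activity of its neighbors."""
    c = x.shape[0]
    out = np.empty_like(x)
    for i in range(c):
        lo, hi = max(0, i - n // 2), min(c, i + n // 2 + 1)
        out[i] = x[i] / (k + alpha * np.sum(x[lo:hi] ** 2, axis=0)) ** beta
    return out

def relu(x):
    """Rectified linear unit: max(0, x)."""
    return np.maximum(0.0, x)

feat = np.array([[-1.0, 2.0],
                 [3.0, -4.0]])  # (channels, positions)
activated = relu(local_response_norm(feat))
```

Because the normalization rescales but never changes sign, ReLU still zeroes exactly the negative entries while the positive ones survive with adjusted magnitude.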
Since the shapes of the eyebrows, nose, and ears vary only slightly with expression and posture, good results can also be obtained by detecting their feature points with an active appearance model based on appearance information. The active appearance model first calculates the initial shape of the facial feature points from a trained average face model, and then corrects the initial shape with a feature point correction model to obtain the final feature point positions. As shown in Fig. 9, the training process of the active appearance model is mainly divided into two stages. In the first stage, an average face model of the facial feature points is trained from the annotations in the training set: the directions of all face images in the training set are first aligned to the first face image by relative position, then the average face model is initialized and all face models in the training set are described using the average face model (because every face model in the training set can be obtained by applying an affine transformation to the average face model); finally, the average face model is recomputed by iterating through all face models, and all face models are updated with the newly computed average face model, until the average face model converges. In the second stage, the feature point correction model is trained with the initial shape calculated from the average face model as input and the annotated feature point positions as the optimization target. The learning process of the initial shape correction is as follows: a series of candidate points is first chosen near each feature point; the gradient features of all candidate points (pixel differences of nearby pixels calculated along the normal direction of the line connecting adjacent feature points) and their local texture features (described with local binary patterns) are then constructed; the distance between the feature information of each candidate point and that of the annotated feature point is calculated with the Mahalanobis distance to assess how close the candidate point is to the target feature point; and the candidate point closest to the annotated feature point is selected to update the feature point, the process continuing until the feature points converge. With each round of initial shape correction learning, the parameters of the feature point correction model are updated, and the optimization of the model continues until a termination condition (a parameter threshold set in advance) is met.
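The candidate selection step above reduces to a Mahalanobis nearest-neighbor search over candidate feature vectors. A minimal sketch, assuming toy two-dimensional feature vectors and a known covariance matrix (in practice the covariance would be estimated from the training set):

```python
import numpy as np

def best_candidate(candidates, target_feat, cov):
    """Pick the index of the candidate whose feature vector is closest to the
    annotated feature point's, under the Mahalanobis distance."""
    inv_cov = np.linalg.inv(cov)

    def mahalanobis_sq(f):
        d = f - target_feat
        return float(d @ inv_cov @ d)  # squared Mahalanobis distance

    return min(range(len(candidates)), key=lambda i: mahalanobis_sq(candidates[i]))

# three candidate feature vectors near one feature point
feats = [np.array([5.0, 5.0]), np.array([1.2, 0.9]), np.array([3.0, -2.0])]
idx = best_candidate(feats, np.array([1.0, 1.0]), np.eye(2))  # second is closest
```

The selected candidate replaces the current feature point estimate, and the search repeats in the next round until the shape converges.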
Fig. 10 illustrates the principle of comprehensive image feature extraction using the gray-level co-occurrence matrix based on the key feature points of the face or facial parts. In this embodiment, after the key feature point detection is completed, a face region is first determined using these feature points; this face region may contain the information of the entire face, or only part of it (for example: only the eyes, or the nose and mouth together). A gray-level co-occurrence matrix is then calculated from the pixel information of the face region; it can be calculated with feature matrix templates of different angles, to increase the diversity of the features and improve the feature matching capability of the matrix template. Next, the various statistics of the gray-level co-occurrence matrix are calculated with different operators, and finally these statistics are used to extract different types of image features. The gray-level co-occurrence matrix has many statistics available for image feature extraction; in this embodiment, 10 of them (contrast, energy, entropy, inverse variance, correlation, uniformity, dissimilarity, and mean and variance) are chosen for the comprehensive feature extraction of the face or facial parts, for example: using gray-scale contrast to describe the depth of texture grooves, using the correlation of regional pixels to describe local variations of face color, and using gray-scale uniformity to describe the smoothness of the cheeks. The gray-level co-occurrence matrix is calculated with the function package provided by OpenCV, some of the statistics are calculated with the operation interface provided by matlab, and the remaining statistics are implemented independently. The resulting feature information can be used for statistical analysis, or a feature vector file can be generated for use by other modules.
It should be understood that although the steps in the flow charts of Figs. 2-4 are shown in sequence as indicated by the arrows, these steps are not necessarily executed in that order. Unless expressly stated otherwise herein, there is no strict order restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in Figs. 2-4 may include multiple sub-steps or multiple stages; these sub-steps or stages are not necessarily completed at the same moment, but may be executed at different times, and their execution order is not necessarily sequential either, but may alternate or interleave with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in Fig. 11, a facial feature analysis apparatus is provided, comprising: a first region detection module 1101, a second region detection module 1102, and a feature analysis module 1103, wherein:
the first region detection module 1101 is configured to determine the rough position region of each facial part in a target face image by performing whole-face facial feature point detection on the target face image;
the second region detection module 1102 is configured to determine the precise position region of each facial part by performing corresponding facial feature point detection on the facial part image corresponding to each rough position region; and
the feature analysis module 1103 is configured to perform facial feature analysis according to each precise position region to obtain facial feature information and/or a statistical analysis result.
In one embodiment, the second region detection module 1102 may input the facial part image corresponding to each rough position region into a different facial feature point detector according to the category of the corresponding facial part for corresponding facial feature point detection, and obtain the precise position region of the corresponding facial part output by each facial feature point detector.
In one of the embodiments, the above facial feature detector is a cascaded shape regressor, and the process of determining the initial shape input to the regressor includes: loading the average shape information obtained from the training samples, and determining the initial feature point and its position according to the average shape information; determining the positions of the other feature points one by one using the position of the initial feature point and the relative information between the initial feature point and each of the other feature points; and determining a feature point position sequence from the position of the initial feature point and the positions of the other feature points, and taking the feature point position sequence as the initial shape.
For specific limitations of the facial feature analysis apparatus, reference may be made to the limitations of the facial feature analysis method above, which are not repeated here. Each module in the above facial feature analysis apparatus may be realized in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in or independent of a processor in a computer device in hardware form, or stored in a memory in the computer device in software form, so that the processor can invoke and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided. The computer device may be a terminal, and its internal structure diagram is shown in Fig. 12. The computer device includes a processor, a memory, a network interface, a display screen, and an input unit connected through a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, realizes a facial feature analysis method. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input unit of the computer device may be a touch layer covering the display screen, a key, a trackball, or a trackpad arranged on the housing of the computer device, or an external keyboard, trackpad, or mouse.
Those skilled in the art will understand that the structure shown in Fig. 12 is only a block diagram of part of the structure relevant to the present solution, and does not constitute a limitation on the computer device to which the present solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor. When executing the computer program, the processor performs the following steps: determining the rough position region of each facial part in a target face image by performing whole-face facial feature point detection on the target face image; determining the precise position region of each facial part by performing corresponding facial feature point detection on the facial part image corresponding to each rough position region; and performing facial feature analysis according to each precise position region to obtain facial feature information and/or a statistical analysis result.
In one of the embodiments, when the processor executes the computer program to realize the step of determining the precise position region of each facial part by performing corresponding facial feature point detection on the facial part image corresponding to each rough position region, the following steps are implemented: inputting the facial part image corresponding to each rough position region into a different facial feature point detector according to the category of the corresponding facial part for corresponding facial feature point detection, and obtaining the precise position region of the corresponding facial part output by each facial feature point detector.
In one of the embodiments, when executing the computer program, the processor also performs the following steps: performing facial angle detection on the target face image to obtain a facial angle value. The determining of the rough position region of each facial part in the target face image by performing whole-face facial feature point detection on the target face image then includes: selecting a facial feature detector matching the facial angle value, inputting the target face image into the selected facial feature detector, and obtaining the rough position region of each facial part in the target face image output by the selected facial feature detector.
In one embodiment, when the processor executes the computer program to implement the step of performing facial feature analysis according to each exact position region to obtain face feature information or a statistical analysis result, the following steps are specifically implemented: determining a face region according to each exact position region, the face region including the exact position regions of all face parts or the exact position regions of some face parts; determining a gray-level co-occurrence matrix according to the pixel information of the face region; determining statistics of the gray-level co-occurrence matrix according to the gray-level co-occurrence matrix and a plurality of preset operators; and extracting image features of preset types according to the statistics, the extracted image features serving as the face feature information.
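By way of illustration and not limitation, the texture analysis step above can be sketched as follows. The offset of (1, 0) and the particular statistics (contrast, energy, homogeneity) are assumed choices for the "preset operators"; the disclosure does not fix them.

```python
# Sketch: build a normalized gray-level co-occurrence matrix (GLCM) from the
# face region's pixels, then derive common texture statistics from it.

def glcm(pixels, levels, dx=1, dy=0):
    """Normalized co-occurrence matrix for pixel pairs at offset (dx, dy)."""
    mat = [[0.0] * levels for _ in range(levels)]
    h, w = len(pixels), len(pixels[0])
    total = 0
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                mat[pixels[y][x]][pixels[ny][nx]] += 1
                total += 1
    return [[v / total for v in row] for row in mat]

def glcm_stats(mat):
    """Contrast, energy, and homogeneity computed from a normalized GLCM."""
    contrast = sum(p * (i - j) ** 2 for i, row in enumerate(mat)
                   for j, p in enumerate(row))
    energy = sum(p * p for row in mat for p in row)
    homogeneity = sum(p / (1.0 + abs(i - j)) for i, row in enumerate(mat)
                      for j, p in enumerate(row))
    return {"contrast": contrast, "energy": energy, "homogeneity": homogeneity}
```

In practice the pixel intensities would first be quantized to a small number of gray levels, and several offsets (operators) would each yield one matrix and one set of statistics.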
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. When executed by a processor, the computer program implements the following steps: determining the rough position region of each face part in a target face image by performing whole-face facial feature point detection on the target face image; determining the exact position region of each face part by performing corresponding facial feature point detection on the face part image corresponding to each rough position region; and performing facial feature analysis according to each exact position region to obtain face feature information and/or a statistical analysis result.
In one of the embodiments, when the computer program is executed by the processor to implement the step of performing corresponding facial feature point detection on the face part image corresponding to each rough position region to determine the exact position region of each face part, the following step is specifically implemented: inputting the face part image corresponding to each rough position region into a different facial feature point detector according to the category of the corresponding face part to perform the corresponding facial feature point detection, and obtaining the exact position region of the corresponding face part output by each facial feature point detector.
In one of the embodiments, the following steps are also implemented when the computer program is executed by the processor: performing face angle detection on the target face image to obtain a face angle value. Accordingly, determining the rough position region of each face part in the target face image by performing whole-face facial feature point detection on the target face image includes: selecting a facial feature detector matching the face angle value, inputting the target face image into the selected facial feature detector, and obtaining the rough position region of each face part in the target face image output by the selected facial feature detector.
In one embodiment, when the computer program is executed by the processor to implement the step of performing facial feature analysis according to each exact position region to obtain face feature information or a statistical analysis result, the following steps are specifically implemented: determining a face region according to each exact position region, the face region including the exact position regions of all face parts or the exact position regions of some face parts; determining a gray-level co-occurrence matrix according to the pixel information of the face region; determining statistics of the gray-level co-occurrence matrix according to the gray-level co-occurrence matrix and a plurality of preset operators; and extracting image features of preset types according to the statistics, the extracted image features serving as the face feature information.
Those of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by instructing relevant hardware through a computer program. The computer program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided by the present invention may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as the combination of these technical features contains no contradiction, it shall be considered to be within the scope of this specification.
The above embodiments express only several implementations of the present invention, and their descriptions are specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the present invention, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. A facial feature analysis method, characterized in that the method comprises:
determining the rough position region of each face part in a target face image by performing whole-face facial feature point detection on the target face image;
determining the exact position region of each face part by performing corresponding facial feature point detection on the face part image corresponding to each rough position region; and
performing facial feature analysis according to each exact position region to obtain face feature information or a statistical analysis result.
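By way of illustration and not limitation, the three claimed steps compose into a coarse-to-fine pipeline such as the following sketch. All the callables are stand-ins: a real implementation would supply trained detectors and a concrete analysis routine.

```python
# Hypothetical sketch of the claimed coarse-to-fine analysis pipeline.

def analyze_face(image, whole_face_detector, part_detectors, analyzer):
    # Step 1: whole-face feature point detection -> rough region per face part.
    rough = whole_face_detector(image)
    # Step 2: per-part feature point detection refines each rough region.
    exact = {part: part_detectors[part](image, region)
             for part, region in rough.items()}
    # Step 3: facial feature analysis over the exact position regions.
    return analyzer(exact)
```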
2. The facial feature analysis method according to claim 1, characterized in that the determining the exact position region of each face part by performing corresponding facial feature point detection on the face part image corresponding to each rough position region comprises:
inputting the face part image corresponding to each rough position region into a different facial feature point detector according to the category of the corresponding face part to perform the corresponding facial feature point detection, and obtaining the exact position region of the corresponding face part output by each facial feature point detector;
wherein the facial feature point detectors comprise an eye feature point detector, an eyebrow feature point detector, a mouth feature point detector, a nose feature point detector, and an ear feature point detector; the eye feature point detector and the mouth feature point detector use a preset feature point detection model of a first type, and the eyebrow feature point detector, the nose feature point detector, and the ear feature point detector use a preset feature point detection model of a second type.
3. The facial feature analysis method according to claim 2, characterized in that the feature point detection model of the first type is a two-stage neural network detection model combining a convolutional neural network and a deep belief network, and the feature point detection model of the second type is an active appearance model based on shape and texture information.
4. The facial feature analysis method according to claim 3, characterized in that the two-stage neural network detection model comprises a convolutional neural network of a first level and a deep belief network of a second level;
the convolutional neural network of the first level is used to learn the mapping from the original face picture to an initial shape;
the deep belief network of the second level provides, for each feature point, a deep belief network that fits the corrective change of that feature point from its initial position to its final position;
wherein the convolutional neural network of the first level comprises two convolutional layers, two max-pooling layers, and one fully connected layer, with the ReLU function as the activation function; each feature point is position-corrected using a deep belief network comprising three hidden layers, only the last layer of each deep belief network being a fully connected layer and the hidden layers before the fully connected layer being restricted Boltzmann machines; each restricted Boltzmann machine is pre-trained layer by layer using the maximum likelihood method of unsupervised learning, and the final result of the layer-by-layer pre-training is fine-tuned through the last fully connected layer.
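The first-level network's shape arithmetic can be sketched as below: two convolutions, two max-poolings, and one fully connected layer, with ReLU activations, mapping a face image to an initial shape vector. The layer sizes are invented for illustration; the claim does not specify them, and a practical model would be trained rather than given fixed weights.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv2d(img, kernel):
    """'Valid' 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def maxpool2(img):
    """2x2 max pooling (trailing odd row/column dropped)."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def first_level_forward(img, k1, k2, fc_weights):
    """conv -> pool -> conv -> pool -> fully connected initial-shape output."""
    x = maxpool2(relu(conv2d(img, k1)))
    x = maxpool2(relu(conv2d(x, k2)))
    return fc_weights @ x.ravel()  # e.g. (x, y) coordinates of the initial shape
```

With a 20x20 input, a 5x5 and a 3x3 kernel, the feature map shrinks to 3x3, and a (10, 9) fully connected matrix yields a 10-dimensional initial-shape vector (for example, five landmark coordinates).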
5. The facial feature analysis method according to claim 4, characterized in that the performing facial feature analysis according to each exact position region to obtain face feature information and/or a statistical analysis result comprises:
determining a face region according to each exact position region, the face region comprising the exact position regions of all face parts or the exact position regions of some face parts;
determining a gray-level co-occurrence matrix according to the pixel information of the face region;
determining statistics of the gray-level co-occurrence matrix according to the gray-level co-occurrence matrix and a plurality of preset operators; and
extracting image features of preset types according to the statistics, converting the extracted image features into image feature vectors, and saving the image feature vectors in the form of tag files, or performing statistical analysis on the regularity of the image feature vectors to obtain a statistical result and subjecting the statistical result to visualization processing.
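The feature-vector and tag-file step of this claim can be sketched as follows. The JSON layout and the three statistic names are assumptions made for the sketch; the claim fixes neither a tag-file format nor the preset feature types.

```python
import json

# Assumed preset feature types ordered into a fixed-length vector.
FEATURE_KEYS = ("contrast", "energy", "homogeneity")

def to_feature_vector(stats):
    """Order the named statistics into an image feature vector."""
    return [float(stats[k]) for k in FEATURE_KEYS]

def save_tag_file(path, vector):
    """Persist the feature vector as a (hypothetical) JSON tag file."""
    with open(path, "w") as f:
        json.dump({"keys": list(FEATURE_KEYS), "vector": vector}, f)
```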
6. The facial feature analysis method according to claim 5, characterized in that the facial feature detector is a cascaded shape regressor;
the cascaded shape regressor comprises an ensemble of two levels: the cascaded shape regressor takes an initial shape as input and completes the correction of the initial shape through a cascade of multiple weak regressors to obtain a final feature point shape; wherein the local texture features of the feature points are fitted using multiple random forests, the local texture feature vector of each feature point corresponds to one random forest, and each value in the feature vector is computed as the mode of one random tree; the occlusion state information of the feature points is learned by shallow-model logistic regression, and the occlusion state information of each feature point is described using a unified binary feature vector.
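By way of illustration and not limitation, the cascade itself reduces to accumulating per-point corrections from a sequence of weak regressors; the plain callables below stand in for the random-forest stages the claim describes.

```python
# Hypothetical sketch of cascaded shape regression: each weak regressor
# returns a per-point correction that is added to the current shape.

def cascade_regress(image, initial_shape, weak_regressors):
    shape = list(initial_shape)
    for regressor in weak_regressors:
        delta = regressor(image, shape)  # one (dx, dy) per feature point
        shape = [(x + dx, y + dy)
                 for (x, y), (dx, dy) in zip(shape, delta)]
    return shape
```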
7. The facial feature analysis method according to claim 6, characterized in that the process of determining the initial shape input to the cascaded shape regressor comprises:
loading average shape information obtained from training samples, and determining an initial feature point and the position of the initial feature point according to the average shape information;
determining the positions of the other feature points one by one using the position of the initial feature point and the relative information between the initial feature point and each of the other feature points; and
determining a feature point position sequence according to the position of the initial feature point and the positions of the other feature points, and taking the feature point position sequence as the initial shape.
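A minimal sketch of this construction, assuming the average shape information is stored as one anchor position plus per-point offsets relative to that anchor (the data layout is an assumption for illustration):

```python
# Hypothetical sketch: build the initial shape from the loaded average
# shape's anchor point and the stored relative offsets of the other points.

def build_initial_shape(anchor_position, relative_offsets):
    """anchor_position: (x, y); relative_offsets: (dx, dy) per other point."""
    ax, ay = anchor_position
    shape = [anchor_position]
    for dx, dy in relative_offsets:  # place the other points one by one
        shape.append((ax + dx, ay + dy))
    return shape  # the feature point position sequence = initial shape
```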
8. A facial feature analysis apparatus, characterized in that the apparatus comprises:
a first region detection module, configured to determine the rough position region of each face part in a target face image by performing whole-face facial feature point detection on the target face image;
a second region detection module, configured to determine the exact position region of each face part by performing corresponding facial feature point detection on the face part image corresponding to each rough position region; and
a feature analysis module, configured to perform facial feature analysis according to each exact position region to obtain face feature information and/or a statistical analysis result.
9. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program implements the steps of the method according to any one of claims 1 to 7 when executed by a processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810844936.0A CN109086711B (en) | 2018-07-27 | 2018-07-27 | Face feature analysis method and device, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109086711A true CN109086711A (en) | 2018-12-25 |
CN109086711B CN109086711B (en) | 2021-11-16 |
Family
ID=64831240
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810844936.0A Active CN109086711B (en) | 2018-07-27 | 2018-07-27 | Face feature analysis method and device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109086711B (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109902635A (en) * | 2019-03-04 | 2019-06-18 | 司法鉴定科学研究院 | A kind of portrait signature identification method based on example graph |
CN109919081A (en) * | 2019-03-04 | 2019-06-21 | 司法鉴定科学研究院 | A kind of automation auxiliary portrait signature identification method |
CN109994202A (en) * | 2019-03-22 | 2019-07-09 | 华南理工大学 | A method of the face based on deep learning generates prescriptions of traditional Chinese medicine |
CN110222651A (en) * | 2019-06-10 | 2019-09-10 | Oppo广东移动通信有限公司 | A kind of human face posture detection method, device, terminal device and readable storage medium storing program for executing |
CN110298225A (en) * | 2019-03-28 | 2019-10-01 | 电子科技大学 | A method of blocking the human face five-sense-organ positioning under environment |
CN111444887A (en) * | 2020-04-30 | 2020-07-24 | 北京每日优鲜电子商务有限公司 | Mask wearing detection method and device, storage medium and electronic equipment |
CN112000538A (en) * | 2019-05-10 | 2020-11-27 | 百度在线网络技术(北京)有限公司 | Page content display monitoring method, device and equipment and readable storage medium |
CN112200005A (en) * | 2020-09-15 | 2021-01-08 | 青岛邃智信息科技有限公司 | Pedestrian gender identification method based on wearing characteristics and human body characteristics under community monitoring scene |
CN112700427A (en) * | 2021-01-07 | 2021-04-23 | 哈尔滨晓芯科技有限公司 | Automatic hip joint X-ray evaluation method |
CN113233266A (en) * | 2021-06-03 | 2021-08-10 | 昆山杜克大学 | Non-contact elevator interaction system and method thereof |
CN113553963A (en) * | 2021-07-27 | 2021-10-26 | 广联达科技股份有限公司 | Detection method and device of safety helmet, electronic equipment and readable storage medium |
CN113963428A (en) * | 2021-12-23 | 2022-01-21 | 北京的卢深视科技有限公司 | Model training method, occlusion detection method, system, electronic device, and medium |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102495005A (en) * | 2011-11-17 | 2012-06-13 | 江苏大学 | Method for diagnosing crop water deficit through hyperspectral image technology |
CN103593654A (en) * | 2013-11-13 | 2014-02-19 | 智慧城市系统服务(中国)有限公司 | Method and device for face location |
US20150347822A1 (en) * | 2014-05-29 | 2015-12-03 | Beijing Kuangshi Technology Co., Ltd. | Facial Landmark Localization Using Coarse-to-Fine Cascaded Neural Networks |
CN105787448A (en) * | 2016-02-28 | 2016-07-20 | 南京信息工程大学 | Facial shape tracking method based on space-time cascade shape regression |
CN106446766A (en) * | 2016-07-25 | 2017-02-22 | 浙江工业大学 | Stable detection method for human face feature points in video |
CN107146196A (en) * | 2017-03-20 | 2017-09-08 | 深圳市金立通信设备有限公司 | A kind of U.S. face method of image and terminal |
CN107679497A (en) * | 2017-10-11 | 2018-02-09 | 齐鲁工业大学 | Video face textures effect processing method and generation system |
CN108108808A (en) * | 2018-01-08 | 2018-06-01 | 北京邮电大学 | A kind of position predicting method and device based on depth belief network |
CN108229291A (en) * | 2017-07-28 | 2018-06-29 | 北京市商汤科技开发有限公司 | Characteristic point detection, network training method, device, electronic equipment and storage medium |
2018-07-27: CN CN201810844936.0A patent/CN109086711B/en active Active
Non-Patent Citations (3)
Title |
---|
Xavier P. Burgos-Artizzu et al.: "Robust Face Landmark Estimation Under Occlusion", IEEE * 
Wang Dandan: "Research on Face Detection Algorithms Based on a Deep Learning Hybrid Model", China Master's Theses Full-text Database, Information Science and Technology * 
Zhao Lei et al.: "Head Pose Estimation of Train Drivers Based on ASM Local Localization and Feature Triangles", Journal of the China Railway Society * 
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||