The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are merely some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of a method for acquiring face 3D feature information provided by an embodiment of the present invention. The method of this embodiment includes the following steps:
S101: Acquire an RGBD face image.
Specifically, the RGBD face image includes color information (RGB) and depth information (Depth) of the face, and can be captured by a Kinect sensor. The RGBD face image here is in fact an image set, for example comprising multiple RGBD images of the same person taken from multiple angles.
S102: Acquire feature points of the face from the RGBD face image.
In step S102, after the RGBD face image is obtained, feature points are acquired on the RGBD face image by collecting facial elements, where the facial elements include one or more of the eyebrows, eyes, nose, mouth, cheeks and chin.

The feature points can be acquired in various ways. For example, features such as the eyes, nose, cheeks, jaw and their edges can be marked manually, or the feature points can be determined with a face landmark labeling method compatible with RGB (2D) images. One such method for locating the key facial feature points selects nine feature points whose distribution is invariant to rotation: the two eyeball centers, the four eye corners, the midpoint between the two nostrils and the two mouth corners. On this basis, other feature positions of each facial organ, as well as extended feature points, can easily be obtained and identified for use in further recognition algorithms.
When extracting facial features, traditional edge-detection operators cannot reliably extract features such as the contours of the eyes or mouth, because local edge information cannot be effectively organized. Starting from the human visual system, however, locating the key facial feature points by fully exploiting both edge and corner features greatly improves reliability.

Here the SUSAN (Smallest Univalue Segment Assimilating Nucleus) operator is chosen to extract the edges and corners of local regions. By its nature, the SUSAN operator can detect both edges and corners. Compared with edge-detection operators such as Sobel and Canny, it is therefore better suited to extracting features of the eyes and mouth, and in particular to automatically locating the eye-corner and mouth-corner points.
The SUSAN operator works as follows. The image is traversed with a circular template. If the difference between the gray value of any other pixel in the template and that of the template's center pixel (the nucleus) is below a certain threshold, that pixel is regarded as having the same (or a similar) gray value as the nucleus. The region formed by the pixels satisfying this condition is called the Univalue Segment Assimilating Nucleus (USAN). Associating each pixel of the image with a local region of similar gray values is the basis of the SUSAN criterion.

In the actual detection, the whole image is scanned with the circular template; the gray value of each pixel in the template is compared against that of the center pixel, and a given threshold decides whether the pixel belongs to the USAN region:

c(r, r0) = 1 if |I(r) − I(r0)| ≤ t, and c(r, r0) = 0 otherwise.
In the formula, c(r, r0) is the discriminant function indicating whether a pixel in the template belongs to the USAN region, I(r0) is the gray value of the template's center pixel (the nucleus), I(r) is the gray value of any other pixel in the template, and t is the gray-difference threshold. It affects the number of detected corners: as t decreases, finer variations in the image are detected, giving a relatively larger number of detections, so t must be determined according to the contrast, noise and other characteristics of the image. The USAN area of a point in the image can then be expressed as

n(r0) = Σ_r c(r, r0)

and the initial corner response as

R(r0) = g − n(r0) if n(r0) < g, and R(r0) = 0 otherwise,

where g is the geometric threshold, which affects the shape of the detected corners: the smaller g, the sharper the detected corners.

(1) Determining t and g. The threshold g determines the maximum USAN area for which a corner is output: as long as a pixel in the image has a USAN region smaller than g, that pixel is judged to be a corner. The size of g thus determines not only the number of corners that can be extracted from the image but also, as noted above, the sharpness of the detected corners. Once the required corner quality (sharpness) is decided, g can be fixed at a constant value. The threshold t denotes the minimum contrast at which a corner can be detected and the maximum noise that can be ignored; it essentially dictates the number of features that can be extracted. The smaller t is, the more features can be extracted, including from images of lower contrast, so different values of t should be used for images of different contrast and noise levels.
One outstanding advantage of the SUSAN operator is its insensitivity to local noise and its strong noise resistance. This is because it does not depend on the result of any earlier image segmentation and avoids gradient computation; moreover, the USAN region is accumulated from the pixels in the template whose gray values are similar to that of the template center, which is effectively an integration process and has a good suppressing effect on Gaussian noise.

The final stage of SUSAN two-dimensional feature detection is to find the local maxima of the initial corner response, that is, non-maximum suppression, which yields the final corner positions. As its name suggests, non-maximum suppression keeps the value of the center pixel within a local range only if its initial response is the maximum of that region, and deletes it otherwise, thereby obtaining the local maxima.
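The SUSAN procedure described above (circular template, USAN counting against the gray threshold t, initial response against the geometric threshold g, then non-maximum suppression) can be sketched as follows. This is a minimal illustrative implementation rather than the embodiment's code; the template radius and the default choice of g as half the mask area are assumptions.

```python
import numpy as np

def susan_corners(img, t=27, g=None, radius=3):
    """Toy SUSAN corner detector on a 2-D grayscale array."""
    # circular template offsets around the nucleus
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    mask = ys ** 2 + xs ** 2 <= radius ** 2
    offs = list(zip(ys[mask], xs[mask]))
    if g is None:
        g = len(offs) // 2          # geometric threshold: half the mask area
    h, w = img.shape
    resp = np.zeros((h, w))
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            nucleus = int(img[y, x])
            # USAN area: pixels whose gray value is close to the nucleus
            n = sum(1 for dy, dx in offs
                    if abs(int(img[y + dy, x + dx]) - nucleus) < t)
            # initial corner response: g - n where the USAN is small
            resp[y, x] = g - n if n < g else 0
    corners = []
    # non-maximum suppression over a 3x3 neighbourhood
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if resp[y, x] > 0 and resp[y, x] == resp[y - 1:y + 2, x - 1:x + 2].max():
                corners.append((y, x))
    return corners
```

On a synthetic image containing one bright square, only the square's inner corner has a small enough USAN to survive both thresholds, while edge pixels (whose USAN is about half the template) are rejected.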
(1) Automatic positioning of the eyeballs and eye corners. During the automatic positioning of the eyeballs and eye corners, the face is first coarsely located by normalized template matching, determining the approximate face region within the whole face image. Common eye-localization algorithms rely on the valley-point property of the eyes; here, valley-point search and directional projection are combined with the symmetry of the eyeballs, and the correlation between the two eyes is used to improve localization accuracy. Integral projections of the gradient map are computed for the upper-left and upper-right portions of the face region, and the projection histograms are normalized. The approximate y position of the eyes is first determined from the valley of the horizontal projection; x is then varied over a larger range to find the valley within this region, and the two detected points are taken as the eyeball centers.

On the basis of the two eyeball positions, the eye regions are processed: a threshold is first determined by adaptive binarization to obtain a binary image of the eye region, and then, in combination with the SUSAN operator, an edge- and corner-detection algorithm precisely locates the inner and outer eye-corner points within the eye region. Corner extraction on the boundary curves of the eye-region edge image obtained by the above algorithm yields the accurate positions of the four inner and outer eye corners.
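The projection-based search just described can be illustrated with a toy sketch that finds approximate eye positions from integral projections of a grayscale image. It is a simplified stand-in, not the embodiment's algorithm: it projects raw intensity instead of a gradient map, assumes the eyes lie in the upper half, and uses a fixed band height.

```python
import numpy as np

def integral_projection_eyes(gray):
    """Locate approximate eye centres via integral projection: dark eye
    pixels accumulate as valleys in the projection histograms."""
    # horizontal (row-wise) projection, normalised to [0, 1]
    rows = gray.sum(axis=1).astype(float)
    rows = (rows - rows.min()) / (np.ptp(rows) or 1.0)
    eye_y = int(np.argmin(rows[: gray.shape[0] // 2]))  # eyes in upper half
    # column-wise projection restricted to a band around eye_y
    band = gray[max(0, eye_y - 2): eye_y + 3, :].sum(axis=0).astype(float)
    half = gray.shape[1] // 2
    left_x = int(np.argmin(band[:half]))                # one valley per side,
    right_x = int(np.argmin(band[half:])) + half        # exploiting eye symmetry
    return eye_y, left_x, right_x
```

Searching each half of the band separately mirrors the use of the correlation between the two eyes: one valley is expected on each side of the face midline.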
(2) Automatic positioning of the nose-region feature point. The key feature point of the nasal area is defined as the midpoint of the line connecting the two nostril centers, i.e. the nose-base center point. The position of this point is relatively stable, and it can also serve as a datum point when the face image is normalized during preprocessing.

Starting from the two eyeball positions already found, the nostril positions are determined by regional gray-level integral projection. First, a strip region as wide as the distance between the two pupils is intercepted and its integral projection in the Y direction is computed; the projection curve is then analyzed by searching downward from the Y coordinate of the eyeballs for the position of the first valley point (by choosing a suitable peak-valley threshold Δ, intermediate burrs caused by facial scars, glasses and similar factors can be ignored), and this valley point is taken as the Y-coordinate datum of the nostrils. Second, taking the two eyeball X coordinates as the width, a region extending δ pixels below the nostril Y coordinate (for example, choosing δ = [nostril Y coordinate − eyeball Y coordinate] × 0.06) is projected in the X direction; the projection curve is then analyzed by searching to the left and to the right from the X coordinate of the midpoint between the two pupils, and the first valley point found on each side gives the X coordinate of the left and right nostril centers. The midpoint of the two nostrils is then computed as the nose-base center point, its accurate position is obtained, and the nasal region is delimited.
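The peak-valley threshold Δ mentioned above can be realized as a simple robust valley search along a projection curve: a dip only counts as a valley once the curve climbs at least Δ back out of it, so shallow burrs are passed over. A minimal sketch under that assumption:

```python
def first_valley(curve, start, delta):
    """Return the index of the first valley at or after `start` whose wall
    rises at least `delta` above the valley floor; shallower dips (burrs
    from scars, glasses or beard) are ignored."""
    best_i, best_v = start, curve[start]
    for i in range(start, len(curve)):
        v = curve[i]
        if v < best_v:                 # still descending: track the floor
            best_i, best_v = i, v
        elif v - best_v >= delta:      # climbed back out: valley confirmed
            return best_i
    return None
```

Run downward on the Y projection from the eyeball row, this would give the nostril Y datum; run leftward and rightward from the pupil midpoint on the X projection, it would give the two nostril centers.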
(3) Automatic positioning of the mouth corners. Differences in facial expression can cause large variations in mouth shape, and the mouth region is easily disturbed by beards and other factors, so the accuracy of mouth feature-point extraction strongly affects recognition. Since the positions of the mouth-corner points vary relatively little with expression and corner points can be located precisely, the two mouth-corner points are taken as the key feature points of the mouth region.

On the basis of the eye regions and the nose-region feature point already determined, regional gray-level integral projection is first used to find the first valley of the Y-direction projection curve below the nostrils (as before, a suitable peak-valley threshold Δ eliminates burrs caused by beards, moles and the like), which gives the Y position of the mouth. The mouth region is then selected, and the region image is processed with the SUSAN operator to obtain the mouth edge map; finally, corner extraction yields the two mouth corners.
S103: Establish a face color 3D mesh from the feature points.
S104: Measure the feature values of the feature points and calculate the connection relationships between the feature points according to the face color 3D mesh.
Specifically, the feature values associated with the facial feature points can be measured through the color information. A feature value is one or more measurements of a facial feature in the 2D plane, including position, distance, shape, size, angle, arc and curvature, and may further include measurements of color, brightness, texture and so on. For example, extending outward from the central pixel of the iris yields all the pixel positions of the eye, the shape of the eye, the slope of the eye corner, the eye color, and so on.

Combining the color information with the depth information, the connection relationships between the feature points can then be calculated. A connection relationship may be the topological connection relation and spatial geometric distance between feature points, or it may be dynamic connection-relation information of various combinations of feature points.

From the measurements and calculations on the face color 3D mesh, one obtains local information, comprising the planar information of each facial element itself and the spatial positions of the feature points on each element, as well as global information, comprising the spatial positional relationships among the elements. The local and global information respectively reflect, locally and as a whole, the information contained in the face RGBD images.
S105: Analyze the feature values and connection relationships to obtain the 3D spatial distribution feature information of the feature points.

In step S105, analyzing the feature values and connection relationships yields the three-dimensional face shape information, i.e. the 3D spatial distribution feature information of each facial feature point, so that when face recognition is later performed, the face can be identified through this information.
Unlike the prior art, the present invention establishes a face color 3D mesh from the feature points acquired on the face RGBD image set, and obtains the feature values and connection relationships of the feature points through the mesh, thereby obtaining the 3D spatial distribution feature information of the facial feature points for use in face recognition. Because this information includes both color information and depth information, the face information is more comprehensive; moreover, a facial skeleton can be established from the 3D spatial distribution feature information and recognition performed on the skeleton, so that non-geometric appearance changes such as pose, expression, illumination and makeup, as well as changes such as the face gaining or losing weight, do not affect recognition, making the identification of the face more accurate.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of a method for acquiring face 3D feature information provided by another embodiment of the present invention.

S201: Acquire an RGBD face image.
S202: Acquire feature points of the face from the RGBD face image.
S203: Establish a face color 3D mesh from the feature points.
S204: Measure the feature values of the feature points and calculate the topological connection relations and spatial geometric distances between the feature points according to the face color 3D mesh.
S205: Analyze the feature values and the topological connection relations and spatial geometric distances between the feature points by the finite element method to obtain the 3D spatial distribution feature information of the feature points.
Specifically, finite element analysis can be used to deform the surface of the face color 3D mesh. Finite element analysis (FEA, Finite Element Analysis) simulates a real physical system (its geometry and load conditions) by mathematical approximation: using simple interacting elements (units), a real system with infinitely many unknowns can be approximated with a finite number of unknowns.

For example, after a strain-energy analysis of each line unit of the face color 3D mesh, the element stiffness equations of the line units can be established. Constraint elements are then introduced, such as point, line, tangent-vector and normal-vector constraint element types. During design, the shape, position and size of curves and surfaces, and their continuity with adjacent surfaces, must be checked against requirements, all of which are expressed as constraints. This embodiment handles these constraints by the penalty function method, finally obtaining the stiffness matrices and equivalent loads of the constraint elements, which extend the data structure of the deformable curves and surfaces. The data structure of a deformable curve or surface thus contains not only geometric parameters such as the order, the control vertices and the knot vectors, but also parameters expressing physical characteristics and external loads. A deformable curve or surface can therefore represent a fairly complex body as a whole, greatly simplifying the geometric model of the face, and the physical parameters and constraint parameters in the data structure uniquely determine the geometric configuration of the face.

The deformable curves and surfaces are solved by finite elements programmatically: for each type of constraint element, an element entry routine computes the element stiffness matrix and element load array of that constraint. Exploiting the symmetry, bandedness and sparsity of the global stiffness matrix, the global stiffness matrix is stored as a variable-bandwidth one-dimensional array. During assembly, the stiffness matrices not only of the line and surface units but also of the constraint elements are added into the global stiffness matrix by index mapping, while the equivalent load arrays of the constraint elements are added into the global load array; the resulting linear algebraic system is finally solved by Gaussian elimination.
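The assembly-and-solve loop just described can be illustrated with a deliberately tiny example: element stiffness matrices are added into a global matrix by mapping local degrees of freedom to global ones, element loads are accumulated, and the linear system is solved. The sketch below uses two 1-D spring elements as stand-ins for line units, dense storage instead of a variable-bandwidth array, and elimination instead of the penalty method for the constraint, so it shows only the bookkeeping, not the embodiment's solver.

```python
import numpy as np

def assemble_global(n_dof, elements):
    """Assemble a global stiffness matrix and load vector by index mapping:
    `elements` is a list of (dof_indices, k_local, f_local) triples for
    line/surface/constraint units."""
    K = np.zeros((n_dof, n_dof))
    F = np.zeros(n_dof)
    for dofs, k, f in elements:
        for a, ga in enumerate(dofs):
            F[ga] += f[a]
            for b, gb in enumerate(dofs):
                K[ga, gb] += k[a][b]   # each entry seated at its own place
    return K, F

# two unit-stiffness 1-D spring elements in series (toy line units)
k = [[1.0, -1.0], [-1.0, 1.0]]
K, F = assemble_global(3, [
    ([0, 1], k, [0.0, 0.0]),
    ([1, 2], k, [0.0, 1.0]),   # unit load at the free end
])
# fix dof 0 (here a point constraint handled by elimination, not penalty)
u = np.zeros(3)
u[1:] = np.linalg.solve(K[1:, 1:], F[1:])
```

For two unit springs in series with a unit tip load, the displacements are 1 and 2, which checks the assembly against hand calculation.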
For example, the shaping of face curves and surfaces can be described by a mathematical model: the required deformation curve or surface is the solution of an extreme-value problem that minimizes an energy functional subject to constraints. The energy functional of the curve or surface reflects, to a certain extent, its deformation characteristics and endows the curve or surface with physical properties. In the constraints, f1, f2, f3 and f4 are functions of the variables concerned, Γ′ is a curve in the surface parameter domain, and (μ0, v0) is a parameter value in the parameter domain of the boundary; condition (1) is the boundary interpolation constraint, condition (2) is the boundary continuity constraint, condition (3) is the constraint of a characteristic curve lying in the surface, and condition (4) is a point constraint on the curve or surface. In this application, the energy functional takes a form in which α, β and γ respectively denote the stretching, bending and twisting coefficients of a curve, and αij and βij respectively denote the stretching and bending coefficients of a surface at (μ, v) along the μ and v directions.
From the mathematical model it can be seen that the deformable curve and surface modeling methods are uniform and handle all kinds of constraints in the same framework, satisfying local control while ensuring overall fairness. Using the variational principle, solving the above extreme-value problem can be converted into solving an equation in which the first variation δ vanishes. Formula (5) is a differential equation; since the equation is rather complicated, it is difficult to find an exact analytical solution, so it is solved numerically, for example by the finite element method. The finite element method can be viewed as first selecting suitable interpolation functions as needed and then solving for the combination parameters, so the solution obtained is in continuous form; moreover, the mesh generated in preprocessing also lays the foundation for the finite element analysis.
In the recognition stage, the similarity measure between an unknown face image and a known face template is given by an energy function in which Ci and Xj are respectively the features of the face to be recognized and of a face in the database, and i1, i2, j1, j2, k1, k2 index the 3D mesh vertex features. The first term of the formula measures the degree of similarity of corresponding local features Xj and Ci in the two vector fields, while the second term accounts for the local positional relations and the matching order; the best match is attained at the minimum of the energy function.

Deforming the surface of the face color 3D mesh by the above finite element method brings each point of the mesh ever closer to the feature points of the real face, thereby obtaining the three-dimensional face shape information and, in turn, the 3D spatial distribution feature information of the facial feature points.
Referring to Fig. 3, Fig. 3 is a schematic flowchart of a method for acquiring face 3D feature information provided by a further embodiment of the present invention.

S301: Acquire an RGBD face image.
S302: Acquire feature points of the face from the RGBD face image.

In this embodiment, the discrete wavelet transform is used to locate facial parts and thereby acquire the feature points of the face. The face region is located first: since the shape of the face approximates an ellipse, an ellipse-detection algorithm can determine the face region and obtain the rotation angle of the face. Ellipse detection yields parameters including the center coordinates, the lengths of the major and minor axes, and the rotation angle of the ellipse, from which the rotation angle of the face can be determined.
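One simple way to obtain the ellipse parameters mentioned above (center and rotation angle) is from the second-order central moments of a binary face-region mask. This moments-based estimate is a hypothetical stand-in for whatever ellipse-detection algorithm the embodiment intends.

```python
import numpy as np

def ellipse_params(mask):
    """Estimate the centre and rotation angle (radians) of a roughly
    elliptical binary region from its second-order central moments."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    # principal-axis orientation of the region
    angle = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    return (cy, cx), angle
```

The axis lengths could likewise be derived from the eigenvalues of the moment matrix; for an upright ellipse the cross moment vanishes and the angle is zero.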
Next the facial parts are located. The eyes, eyebrows, nose and mouth appear as horizontal features, i.e. low-frequency signals in the x direction and high-frequency signals in the y direction, so the eyes and mouth are located in the LH component. The iris is one of the most important facial features and carries a great deal of information, so the original image is used to locate it.

(1) Locating the eyes and irises. The eyes are combined with the eyebrows and located together as the eye feature. If the resulting eye region completely contains the iris, the localization is correct. Once the eye region has been located, the iris is positioned: the iris is a standard circle in shape, but owing to the structure of the eye it is always partially occluded, so it is located by the Hough transform, which is highly resistant to such interference.

(2) Locating the mouth and nose. The mouth and nose appear as horizontal features: in the wavelet LH component they are line segments or arcs parallel to the minor axis of the ellipse. The nostril wings are vertical features with relatively stable characteristics, appearing in the wavelet HL component as line segments parallel to the major axis. Using the Hough transform together with the geometric relations among the facial features, the line segments of the mouth, nose and nostril wings are detected in the image.
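The robustness of the Hough transform to partial occlusion, which motivates its use for the iris above, can be seen in a minimal circle-voting sketch. The known radius and the coarse angular sampling are simplifying assumptions.

```python
import numpy as np

def hough_circle_centre(edge_points, radius, shape):
    """Each edge point votes for every candidate centre lying `radius`
    away; the accumulator peak is the circle centre, even when part of
    the circle (e.g. an iris under the eyelid) is missing."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
    for y, x in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(acc.argmax(), acc.shape)
```

Because every surviving edge point still votes for the true center, removing an arc of the circle lowers the peak but does not move it, which is exactly the property needed for an occluded iris.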
S303: Establish a face color 3D mesh from the feature points.
S304: Measure the feature values of the feature points and calculate the dynamic connection-relation information between the feature points according to the face color 3D mesh.
S305: Analyze the feature values and the dynamic connection relations between the feature points by the wavelet-transform texture-analysis method to obtain the 3D spatial distribution feature information of the feature points.
Specifically, the dynamic connection relations are the dynamic connection relations of various combinations of the feature points. The wavelet transform is a localized transform in time and frequency; it has the property of multiresolution analysis and can characterize the local features of a signal in both the time domain and the frequency domain. In this embodiment, wavelet-transform texture analysis, i.e. the extraction, classification and analysis of texture features, is combined with the facial feature values and the dynamic connection-relation information, which specifically include the color information and the depth information, to finally obtain the three-dimensional face shape information. From this shape information, the face shape information that is invariant under slight changes of expression is then extracted and encoded as face-shape model parameters, which can serve as geometric features of the face, thereby yielding the 3D spatial distribution feature information of the facial feature points.
The methods of acquiring face 3D feature information provided by some other embodiments are also compatible with the acquisition of face 2D feature information, which may use any method conventional in this field. In those embodiments, face 2D feature information can be obtained at the same time as the face 3D feature information, so that 3D and 2D recognition are performed on the face simultaneously, further improving the accuracy of face recognition.
For example, the basis of the three-dimensional wavelet transform is as follows: A_j1 is the projection operator of the function f(x, y, z) onto the space V³_j1, and Q_n is a combination of Hx, Hy, Hz and Gx, Gy, Gz. With the matrices H = (h_{m,k}) and G = (g_{m,k}), Hx, Hy, Hz respectively denote H applied to the three-dimensional signal along the x, y and z directions, and Gx, Gy, Gz respectively denote G applied to the three-dimensional signal along the x, y and z directions.
In the recognition stage, after the wavelet transform of the unknown face image, its low-frequency low-resolution subimage is mapped into the face space to obtain a coefficient vector. The Euclidean distance between the coefficient vector to be classified and each person's coefficient vector can then be used, in combination with the PCA algorithm, according to the nearest-neighbor formula. In the formula, K is the person best matching the unknown face, N is the number of people in the database, Y is the m-dimensional vector obtained by mapping the unknown face onto the subspace spanned by the eigenfaces, and Yk is the m-dimensional vector obtained by mapping the k-th known face in the database onto that subspace.
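The eigenface projection and Euclidean-distance matching described above can be sketched as follows. The SVD route to the principal directions and the tiny synthetic gallery in the usage check are illustrative assumptions, not the embodiment's data.

```python
import numpy as np

def eigenface_basis(train, m):
    """PCA 'eigenface' basis: rows of `train` are flattened face images.
    Returns the mean face and the m principal directions."""
    mean = train.mean(axis=0)
    centred = train - mean
    # SVD of the centred data yields the eigenvectors of the covariance matrix
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:m]

def match(face, mean, basis, gallery_coeffs):
    """Project an unknown face onto the subspace and return the index K of
    the gallery face with the smallest Euclidean distance in coefficients."""
    y = basis @ (face - mean)
    d = np.linalg.norm(gallery_coeffs - y, axis=1)
    return int(d.argmin())
```

A slightly perturbed copy of a gallery face should still map closest to that face's coefficient vector, which is the nearest-neighbor rule in the face space.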
It is to be appreciated that in another embodiment, a 3D face recognition method based on 2D wavelet features can also be used for identification. Two-dimensional wavelet features must first be extracted: the 2D wavelet basis function g(x, y) is suitably dilated and rotated to give the self-similar family of filter functions g_mn(x, y) = a^(−mn) g(x′, y′), a > 1, m, n ∈ Z, where σ is the size of the Gaussian window. Based on these functions, the wavelet features of an image I(x, y) can be defined by filtering I with the family g_mn.
The 2D wavelet feature extraction algorithm for face images is implemented as follows:
(1) Obtain the wavelet characterization of the face by wavelet analysis, converting the relevant features of the original image I(x, y) into a wavelet feature vector F (F ∈ R^m).
(2) Use the fractional power polynomial (FPP) kernel model k(x, y) = (x·y)^d (0 < d < 1) to map the m-dimensional wavelet feature space R^m into a higher-dimensional space R^n.
(3) Based on kernel Fisher discriminant analysis (KFDA), establish the between-class matrix Sb and the within-class matrix Sw in the space R^n, and compute the orthonormal eigenvectors α1, α2, …, αn of Sw.
(4) Extract the significant discriminant feature vectors of the face image. Let P1 = (α1, α2, …, αq), where α1, α2, …, αq are the eigenvectors of Sw whose corresponding q eigenvalues are positive, q = rank(Sw). Compute the eigenvectors β1, β2, …, βL (L ≤ c − 1) corresponding to the L largest eigenvalues, where c is the number of face classes. The significant discriminant feature is f_regular = Bᵀ P1ᵀ y, where y ∈ R^n and B = (β1, β2, …, βL).
(5) Extract the insignificant discriminant feature vectors of the face image. Compute the eigenvectors γ1, γ2, …, γL (L ≤ c − 1) corresponding to the largest eigenvalues, and let P2 = (α_{q+1}, α_{q+2}, …, α_m); the insignificant discriminant feature vectors are then obtained analogously.
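The FPP kernel of step (2) is small enough to state directly. The signed-power convention for a negative dot product is an assumption, since the fractional power of a negative number is otherwise undefined over the reals.

```python
import numpy as np

def fpp_kernel(x, y, d=0.8):
    """Fractional power polynomial kernel k(x, y) = (x . y)^d with 0 < d < 1,
    used to lift the m-dimensional wavelet feature space into a
    higher-dimensional space for kernel Fisher discriminant analysis.
    Keeps the sign and raises the magnitude for negative dot products."""
    s = float(np.dot(x, y))
    return np.sign(s) * abs(s) ** d
```

With 0 < d < 1 the kernel compresses large dot products, which is the property that motivates FPP kernels over ordinary polynomial kernels.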
The 3D face recognition stage comprises the following steps:
(1) Detect the frontal face and locate the key facial feature points in the frontal face image, such as the contour feature points, the left and right eyes, the mouth and the nose.
(2) Reconstruct a three-dimensional face model from the extracted two-dimensional Gabor feature vectors and a common 3D face database. To reconstruct a three-dimensional face model, the ORL (Olivetti Research Laboratory) single-face 3D face database is used, containing 100 detected face images; each face model in the database has nearly 70,000 vertices. A feature conversion matrix P is determined: in the original 3D face recognition method, this matrix is usually the subspace-analysis projection matrix, composed of the eigenvectors of the sample covariance matrix corresponding to the m largest eigenvalues. Here, the extracted wavelet discriminant feature vectors corresponding to the m largest eigenvalues form the principal feature conversion matrix P′, which is more robust to factors such as illumination, pose and expression than the original eigenmatrix P, i.e. the features it represents are more accurate and stable.
(3) Process the newly generated face model with template matching and Fisher linear discriminant analysis (FLDA) to extract the within-class differences and between-class differences of the model, further optimizing the final recognition result.
Referring to Fig. 4, Fig. 4 is a structural schematic diagram of a device for acquiring face 3D feature information provided by an embodiment of the present invention. The present invention also provides such a device, which specifically includes an image acquisition module 10, a collection module 20, a mesh building module 30, a calculation module 40 and an analysis module 50.

The image acquisition module 10 is used for acquiring an RGBD face image.

The collection module 20 is connected to the image acquisition module 10 and is used for acquiring the feature points of the face from the RGBD face image. Specifically, the collection module 20 acquires the feature points by collecting facial elements, where the facial elements include one or more of the eyebrows, eyes, nose, mouth, cheeks and chin.

The mesh building module 30 is connected to the collection module 20 and is used for establishing a face color 3D mesh from the feature points.

The calculation module 40 is connected to the mesh building module 30 and is used for measuring the feature values of the feature points and calculating the connection relationships between the feature points according to the face color 3D mesh, where a feature value includes one or more of position, distance, shape, size, angle, arc and curvature.

The analysis module 50 is connected to the calculation module 40 and is used for analyzing the feature values and connection relationships to obtain the 3D spatial distribution feature information of the feature points.
In one embodiment, connection relationship is characterized topological connection relation and space geometry distance between a little.Analysis
Module carries out the 3d space distribution spy that curved surface deformation obtains characteristic point to face colour 3D grid by finite element method
In another embodiment, the connection relationship is the dynamic link relationship information of various combinations of the feature points. The analysis module obtains face shape information by combining wavelet-transform texture analysis, then extracts the face shape information that remains invariant under slight changes of facial expression, encodes it into face shape model parameters, and thereby obtains the 3D spatial distribution characteristic information of the feature points.
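The wavelet-transform texture analysis of this embodiment could, under heavy simplification, be sketched as a one-level 2D Haar decomposition of an image patch (NumPy only; the patch values are invented, and a real system would use a full wavelet library and deeper decompositions):

```python
import numpy as np

def haar2d(patch):
    """One-level 2D Haar wavelet decomposition of a 2^n x 2^n patch.
    Returns the approximation (LL) and detail (LH, HL, HH) sub-bands;
    the detail sub-bands carry local texture information."""
    a = np.asarray(patch, dtype=float)
    # Row transform: pairwise averages (low-pass) and differences (high-pass)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Column transform applied to each half
    ll = (lo[0::2] + lo[1::2]) / 2.0
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

patch = np.array([[10, 10, 20, 20],
                  [10, 10, 20, 20],
                  [30, 30, 40, 40],
                  [30, 30, 40, 40]])
ll, lh, hl, hh = haar2d(patch)
print(ll)  # coarse shape information
print(hh)  # diagonal texture detail (all zeros here: the patch is blockwise constant)
```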
Referring to Fig. 5, Fig. 5 is a schematic structural diagram of a physical device for obtaining face 3D characteristic information provided in an embodiment of the present invention. The device of this embodiment can perform the steps of the above method; for the related content, refer to the detailed description of the above method, which is not repeated here.
The intelligent electronic device includes a processor 61 and a memory 62 coupled to the processor 61.
The memory 62 is configured to store an operating system, configured programs, the obtained RGBD face image, and the calculated 3D spatial distribution characteristic information of the feature points.
The processor 61 is configured to: obtain an RGBD face image; acquire the feature points of the face from the RGBD face image; build a face color 3D mesh according to the feature points; measure the characteristic values of the feature points according to the face color 3D mesh and calculate the connection relationships between the feature points; and analyze the characteristic values and the connection relationships to obtain the 3D spatial distribution characteristic information of the feature points.
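The five processor steps above form a linear pipeline. As a hedged skeleton only, with every step function a hypothetical stand-in (the patent fixes the data flow, not any concrete implementation):

```python
def obtain_face_3d_features(rgbd_image,
                            get_feature_points,
                            build_color_3d_mesh,
                            measure_values,
                            compute_connections,
                            analyze):
    """Chain the five claimed steps: RGBD image -> feature points ->
    face color 3D mesh -> characteristic values + connection
    relationships -> 3D spatial distribution characteristic information."""
    points = get_feature_points(rgbd_image)
    mesh = build_color_3d_mesh(points)
    values = measure_values(mesh, points)
    connections = compute_connections(mesh, points)
    return analyze(values, connections)

# Trivial stand-ins, shown only to make the data flow concrete
result = obtain_face_3d_features(
    rgbd_image="img",
    get_feature_points=lambda img: ["p1", "p2"],
    build_color_3d_mesh=lambda pts: ("mesh", pts),
    measure_values=lambda mesh, pts: {"distance": 1.0},
    compute_connections=lambda mesh, pts: [("p1", "p2")],
    analyze=lambda v, c: {"values": v, "connections": c},
)
print(result)
```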
In the several embodiments provided by the present invention, it should be understood that the disclosed device and method may be implemented in other manners. For example, the device embodiments described above are merely illustrative; the division into modules or units is merely a division by logical function, and other division manners are possible in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and is sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
In conclusion the present invention obtains the 3d space distribution characteristics information of human face characteristic point, to be applied to the knowledge of face
Not, since 3d space distribution characteristics information includes colouring information and depth information, face information is more comprehensive, also, passes through
The 3d space distribution characteristics information can establish face skeleton, be identified by face skeleton, so the posture of face, table
Situations such as non-geometric cosmetic variation and the face such as feelings, illumination and facial makeup are fat or thin variation will not carry out recognition of face
It influences, therefore can be more accurate to the identification of face.
The above is merely an embodiment of the present invention and is not intended to limit the scope of the present invention. Any equivalent structural or equivalent process transformation made using the contents of the description and accompanying drawings of the present invention, applied directly or indirectly in other related technical fields, shall likewise fall within the protection scope of the present invention.