CN106778491B - Method and device for acquiring 3D facial feature information - Google Patents

Method and device for acquiring 3D facial feature information

Info

Publication number
CN106778491B
CN106778491B (application CN201611036376.3A)
Authority
CN
China
Prior art keywords
characteristic point
characteristic
information
point
grid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611036376.3A
Other languages
Chinese (zh)
Other versions
CN106778491A (en)
Inventor
黄源浩
肖振中
许宏淮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Obi Zhongguang Technology Group Co., Ltd
Original Assignee
Shenzhen Orbbec Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Orbbec Co Ltd
Priority to CN201611036376.3A
Publication of CN106778491A
Application granted
Publication of CN106778491B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/00221Acquiring or recognising human faces, facial parts, facial sketches, facial expressions
    • G06K9/00268Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/00221Acquiring or recognising human faces, facial parts, facial sketches, facial expressions
    • G06K9/00288Classification, e.g. identification

Abstract

The present invention provides a method and device for acquiring 3D facial feature information. The method comprises the following steps: acquiring an RGBD face image; acquiring feature points of the face from the RGBD face image; building a colored 3D face mesh from the feature points; measuring feature values of the feature points and computing the connection relations between the feature points according to the mesh; and analyzing the feature values and connection relations to obtain the 3D spatial distribution feature information of the feature points. The device comprises an image acquisition module, a collection module, a mesh-building module, a computation module and an analysis module. The invention obtains the 3D spatial distribution feature information of the facial feature points, which includes both the color information and the depth information of the face, so that the acquired facial information is more comprehensive and face recognition is more accurate.

Description

Method and device for acquiring 3D facial feature information
Technical field
The present invention relates to the technical field of acquiring 3D facial feature information, and in particular to a method and device for acquiring 3D facial feature information.
Background technique
Information security has drawn wide attention from all sectors of society. The main way to ensure information security is to accurately identify the identity of the information user and then, based on the identification result, to judge whether the user's access to the information is legitimate, so as to guarantee that information is not leaked and that the user's lawful rights and interests are protected. Reliable identity recognition is therefore both important and necessary.
Face recognition is a biometric technology that performs identity recognition based on facial feature information. As a convenient and secure means of personal identity authentication, it has received more and more attention. Traditional face recognition is 2D face recognition, which carries no depth information and is easily affected by non-geometric appearance changes such as pose, expression, illumination and facial makeup; it is therefore difficult to perform accurate face recognition.
Summary of the invention
The present invention provides a method and device for acquiring 3D facial feature information, which can solve the prior-art problem that accurate face recognition is difficult.
To solve the above technical problem, one technical scheme adopted by the invention is to provide a method for acquiring 3D facial feature information, comprising the following steps: acquiring an RGBD face image; acquiring feature points of the face from the RGBD face image; building a colored 3D face mesh from the feature points; measuring feature values of the feature points and computing the connection relations between the feature points according to the colored 3D face mesh; and analyzing the feature values and connection relations to obtain the 3D spatial distribution feature information of the feature points.
In the step of computing the feature values of the feature points and the connection relations between them according to the colored 3D face mesh, the connection relations may be the topological connections and spatial geometric distances between the feature points;
in the step of obtaining the 3D spatial distribution feature information of the feature points from the feature values and connection relations, the 3D spatial distribution feature information of the facial feature points is then obtained by applying surface deformation to the colored 3D face mesh.
Alternatively, the connection relations may be dynamic connection information for various combinations of the feature points;
in that case the 3D spatial distribution feature information of the feature points is obtained by acquiring the facial shape information.
In the step of acquiring the feature points of the face from the RGBD face image, the feature points are collected by locating facial elements, wherein the facial elements include one or more of the eyebrows, eyes, nose, mouth, cheeks and chin.
The feature values include one or more of position, distance, shape, size, angle, arc and curvature.
To solve the above technical problem, another technical scheme adopted by the invention is to provide a device for acquiring 3D facial feature information. The device comprises an image acquisition module, a collection module, a mesh-building module, a computation module and an analysis module. The image acquisition module acquires an RGBD face image; the collection module, connected to the image acquisition module, acquires the feature points of the face from the RGBD face image; the mesh-building module, connected to the collection module, builds a colored 3D face mesh from the feature points; the computation module, connected to the mesh-building module, measures the feature values of the feature points and computes the connection relations between them according to the colored 3D face mesh; and the analysis module, connected to the computation module, analyzes the feature values and connection relations to obtain the 3D spatial distribution feature information of the feature points.
In one embodiment, the connection relations are the topological connections and spatial geometric distances between the feature points, and the analysis module obtains the 3D spatial distribution feature information of the facial feature points by applying surface deformation to the colored 3D face mesh.
In another embodiment, the connection relations are dynamic connection information for various combinations of the feature points, and the analysis module obtains the 3D spatial distribution feature information of the feature points by acquiring the facial shape information.
The collection module collects the feature points by locating facial elements, wherein the facial elements include one or more of the eyebrows, eyes, nose, mouth, cheeks and chin.
The feature values include one or more of position, distance, shape, size, angle, arc and curvature.
The beneficial effects of the present invention are as follows. In contrast to the prior art, the invention builds a colored 3D face mesh from feature points collected on an RGBD face image set and obtains the feature values and connection relations of the feature points from the mesh, thereby obtaining the 3D spatial distribution feature information of the facial feature points for use in face recognition. Because this information includes both color information and depth information, the facial information is more comprehensive; moreover, a facial skeleton can be built from it and used for recognition, so that non-geometric appearance changes such as pose, expression, illumination and facial makeup, as well as changes such as the face gaining or losing weight, do not affect recognition. Face recognition can therefore be more accurate.
Detailed description of the invention
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow diagram of a method for acquiring 3D facial feature information provided by an embodiment of the present invention;
Fig. 2 is a flow diagram of a method for acquiring 3D facial feature information provided by another embodiment of the present invention;
Fig. 3 is a flow diagram of a method for acquiring 3D facial feature information provided by a further embodiment of the present invention;
Fig. 4 is a structural diagram of a device for acquiring 3D facial feature information provided by an embodiment of the present invention;
Fig. 5 is a structural diagram of the physical apparatus of a device for acquiring 3D facial feature information provided by an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the invention, not all of them. All other embodiments obtained by those of ordinary skill in the art without creative effort, based on the embodiments of the invention, fall within the scope of protection of the invention.
Referring to Fig. 1, Fig. 1 is a flow diagram of a method for acquiring 3D facial feature information provided by an embodiment of the present invention.
The method of this embodiment comprises the following steps.
S101: acquire an RGBD face image.
Specifically, the RGBD face image contains both the color information (RGB) and the depth information (Depth) of the face, and can be captured, for example, by a Kinect sensor. Here the RGBD face image is in fact an image set, containing multiple RGBD images of the same person taken from multiple angles.
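As a concrete illustration of the data involved, the sketch below shows one minimal way to represent such a frame: a color image and a registered depth map stacked into a single four-channel RGBD array. The function name and array layout are illustrative assumptions, not part of the patent, and capture APIs (e.g. for a Kinect sensor) differ.

```python
import numpy as np

def make_rgbd(rgb, depth):
    """Stack an H x W x 3 color image and an H x W depth map into a
    single H x W x 4 RGBD array, with depth in the last channel."""
    if rgb.shape[:2] != depth.shape:
        raise ValueError("color and depth must share the same resolution")
    return np.dstack([rgb.astype(np.float32), depth.astype(np.float32)])

# toy 4x4 frame: black color image, depth of 1.0 everywhere
rgb = np.zeros((4, 4, 3))
depth = np.ones((4, 4))
rgbd = make_rgbd(rgb, depth)
```

In practice one such array would be built per view angle, giving the multi-angle RGBD image set described above.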
S102: acquire the feature points of the face from the RGBD face image.
In step S102, after the RGBD face image is obtained, feature points are collected on it by locating facial elements, where the facial elements include one or more of the eyebrows, eyes, nose, mouth, cheeks and chin.
The feature points can be acquired in several ways: for example, by manually marking features such as the eyes, nose, cheeks, jaw and their edges, or by adopting a 2D (RGB) facial landmark labeling method to determine the feature points of the face.
For example, for locating the key feature points of the face, nine feature points whose distribution is invariant to rotation angle can be chosen: the two eyeball centers, the four eye corners, the midpoint between the two nostrils, and the two mouth corners. On this basis, the positions of other feature points of the facial organs, and extensions of them, can easily be obtained and identified for further recognition algorithms.
When extracting facial features, traditional edge-detection operators cannot reliably extract features such as the contours of the eyes or mouth, because local edge information cannot be effectively organized. From the standpoint of the human visual system, however, fully exploiting both edge and corner features to locate the key feature points greatly improves reliability.
Here the SUSAN (Smallest Univalue Segment Assimilating Nucleus) operator is selected to extract the edge and corner features of local regions. By its nature, the SUSAN operator can be used both to detect edges and to extract corners. Compared with edge-detection operators such as Sobel and Canny, it is therefore better suited to extracting features of the eyes and mouth, and in particular to automatically locating the eye-corner and mouth-corner points.
The SUSAN operator works as follows.
The image is traversed with a circular template. If the difference between the gray value of any pixel in the template and that of the template's center pixel (the nucleus) is below a certain threshold, that pixel is considered to have the same (or a similar) gray value as the nucleus. The region formed by the pixels satisfying this condition is called the Univalue Segment Assimilating Nucleus (USAN). Associating each pixel in the image with a local region of similar gray values is the basis of the SUSAN criterion.
In detection, the whole image is scanned with the circular template, the gray value of each pixel in the template is compared with that of the center pixel, and a given threshold decides whether the pixel belongs to the USAN region:

c(r, r0) = 1 if |I(r) − I(r0)| ≤ t, and c(r, r0) = 0 otherwise,

where c(r, r0) is the discriminant function for whether a template pixel belongs to the USAN region, I(r0) is the gray value of the template center pixel (nucleus), I(r) is the gray value of any other pixel in the template, and t is the gray-difference threshold. It affects the number of detected corners: reducing t captures finer variations in the image and yields a relatively larger number of detections. The threshold t must be chosen according to factors such as the contrast and noise of the image. The USAN area of a point in the image can then be expressed as

n(r0) = Σ_r c(r, r0).
The initial corner response compares the USAN area with a geometric threshold g:

R(r0) = g − n(r0) if n(r0) < g, and R(r0) = 0 otherwise,

where g is the geometric threshold, which affects the shape of the detected corners: the smaller g is, the sharper the detected corners. The two thresholds play different roles. The geometric threshold g sets the maximum USAN area for which a corner is output: any pixel in the image whose USAN region is smaller than g is judged to be a corner. The size of g therefore determines not only how many corners can be extracted from the image but also, as noted above, the sharpness of the detected corners; once the required corner quality (sharpness) is determined, g can take a fixed value. The threshold t indicates the minimum contrast at which corners can be detected and the maximum noise that can be ignored. It essentially determines how many features can be extracted: the smaller t is, the more features can be extracted, even from low-contrast images, so different values of t should be chosen for images with different contrast and noise conditions. One outstanding advantage of the SUSAN operator is its insensitivity to local noise and its strong noise resistance. This is because it does not depend on the results of earlier image segmentation and avoids gradient computation; furthermore, the USAN region is accumulated from the template pixels whose gray values are similar to the nucleus, which is in effect an integration process and therefore suppresses Gaussian noise well.
The last stage of SUSAN 2D feature detection is to find the local maxima of the initial corner response, i.e. non-maximum suppression, to obtain the final corner positions. As its name suggests, non-maximum suppression keeps a center pixel's initial response within a local range only if it is the maximum in that range, and deletes it otherwise, thereby obtaining the local maxima of the region.
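The thresholding, USAN-area accumulation and corner response described above can be sketched as follows. This is a minimal, unoptimized illustration of the SUSAN criterion; the template radius, thresholds and the test image are illustrative choices, not values from the patent, and non-maximum suppression is omitted for brevity.

```python
import numpy as np

def susan_response(img, t=27, radius=3):
    """For each interior pixel (the nucleus), count the circular-template
    pixels whose gray value differs from the nucleus by at most t (the
    USAN area n); pixels with n below the geometric threshold g get the
    corner response g - n, all others get 0."""
    h, w = img.shape
    offs = [(dy, dx) for dy in range(-radius, radius + 1)
            for dx in range(-radius, radius + 1)
            if dy * dy + dx * dx <= radius * radius and (dy, dx) != (0, 0)]
    g = len(offs) / 2.0          # common choice: half the template area
    resp = np.zeros_like(img, dtype=float)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            n = sum(1 for dy, dx in offs
                    if abs(float(img[y + dy, x + dx]) - float(img[y, x])) <= t)
            resp[y, x] = g - n if n < g else 0.0
    return resp

# toy image: a dark square on a bright background; its corner at (8, 8)
# has a small USAN, a straight-edge pixel has roughly half the template
img = np.full((16, 16), 200, dtype=np.uint8)
img[8:, 8:] = 50
r = susan_response(img)
```

On this toy image the corner pixel gets a positive response while edge and interior pixels get zero, matching the behavior described above.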
(1) Automatic positioning of the eyeballs and eye corners. In this process, the face is first coarsely located by normalized template matching, determining the general region of the face in the whole image. Common eye-localization algorithms rely on the valley-point property of the eyes; here, valley-point search and directional projection are combined with the symmetry of the eyeballs, and the correlation between the two eyes is used to improve localization accuracy. Gradient-map integral projections are computed for the upper-left and upper-right parts of the face region, and the projection histograms are normalized. The valley point of the horizontal projection first determines the approximate y-coordinate of the eyes; x is then varied over a larger range to find the valley point in this region, and the two detected points are taken as the eyeball centers.
With the two eyeball positions obtained, the eye regions are processed: adaptive binarization is first used to determine a threshold and obtain a binary image of the eye region; then, combining the SUSAN operator, an edge- and corner-detection algorithm precisely locates the inner and outer eye-corner points within the eye region.
Corner extraction on the boundary curves of the eye-region edge image obtained by the above algorithm yields the exact positions of the inner and outer corners of both eyes.
(2) Automatic positioning of the nose-region feature point. The key feature point of the nose region is defined as the midpoint of the line connecting the two nostril centers, i.e. the subnasal center point. Its position is relatively stable, and it can also serve as a reference point when normalizing the face image during preprocessing.
Based on the two eyeball positions already found, the positions of the two nostrils are determined by regional gray-level integral projection.
First, the strip region spanning the two pupils is intercepted, its Y-direction integral projection is computed, and the projection curve is analyzed. Searching downward along the projection curve from the y-coordinate of the eyeballs, the position of the first valley point is found (by choosing an appropriate peak-valley Δ value, small spurious dips caused by factors such as facial scars or glasses are ignored), and this valley is taken as the y-coordinate reference of the nostrils. Second, taking the two eyeball x-coordinates as the width, the region extending δ pixels above and below the nostril y-coordinate (for example, δ = [nostril y-coordinate − eyeball y-coordinate] × 0.06) is projected in the X direction and the projection curve analyzed: starting from the x-coordinate of the midpoint between the two pupils and searching toward the left and right, the first valley found on each side is the x-coordinate of the left or right nostril center. The midpoint of the two nostrils is then computed as the subnasal center point, giving its exact position, and the nose region is delimited.
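The valley-point search on a gray-level integral-projection curve used throughout steps (1)–(3) can be sketched as follows. The toy image and the simple local-minimum criterion are illustrative; in particular, the peak-valley Δ filtering described above is omitted for brevity.

```python
import numpy as np

def vertical_projection_valleys(img):
    """Y-direction gray-level integral projection: average each row,
    then return the projection curve and the indices of its local
    minima (valley points), which mark dark horizontal features such
    as the eyes, nostrils and mouth."""
    proj = img.astype(float).mean(axis=1)          # one value per row
    valleys = [i for i in range(1, len(proj) - 1)
               if proj[i] < proj[i - 1] and proj[i] < proj[i + 1]]
    return proj, valleys

# toy "face" strip: bright skin with two dark rows (e.g. nostrils, mouth)
img = np.full((12, 8), 220, dtype=np.uint8)
img[4, :] = 60
img[9, :] = 80
proj, valleys = vertical_projection_valleys(img)
```

Searching this valley list downward from a known reference row (e.g. the eyeball y-coordinate) gives the nostril and mouth y-coordinates as described above.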
(3) Automatic positioning of the mouth corners. Differences in facial expression can cause large variations in mouth shape, and the mouth region is easily disturbed by factors such as beards, so the accuracy of mouth feature-point extraction has a large influence on recognition. Since the positions of the mouth-corner points vary relatively little with expression and corner points can be located more accurately, the two mouth corners are taken as the key feature points of the mouth region.
With the eye regions and the nose-region feature point determined, regional gray-level integral projection is first used to find the first valley point of the Y-direction projection curve below the nostrils (again eliminating, via an appropriate peak-valley Δ value, spurious dips caused by factors such as beards or moles) as the y-coordinate of the mouth. The mouth region is then selected and processed with the SUSAN operator to obtain the mouth edge map; finally, corner extraction yields the exact positions of the two mouth corners.
S103: build a colored 3D face mesh from the feature points.
S104: measure the feature values of the feature points and compute the connection relations between them according to the colored 3D face mesh.
Specifically, the relevant feature values of the facial feature points can be measured from the color information. These feature values are measurements of the facial features in the 2D plane, including one or more of position, distance, shape, size, angle, arc and curvature, as well as measurements of color, brightness, texture and so on. For example, extending outward from the central pixel of the iris yields all the pixel positions of the eye, the shape of the eye, the slant of the eye corners, the eye color, and so on.
Combining the color information with the depth information, the connection relations between the feature points can then be computed. These may be the topological connections and spatial geometric distances between the feature points, or dynamic connection information for various combinations of the feature points.
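As an illustration of the kind of structure this implies, the sketch below stores a few feature points with 3D coordinates and computes the spatial geometric distance along each topological connection. The point names, coordinates and edge set are invented for illustration only.

```python
import math

# feature points as 3D nodes; edges are the topological connections
points = {
    "left_eye":  (30.0, 40.0, 12.0),
    "right_eye": (70.0, 40.0, 12.0),
    "nose_tip":  (50.0, 60.0, 25.0),
}
edges = [("left_eye", "right_eye"), ("left_eye", "nose_tip"),
         ("right_eye", "nose_tip")]

def edge_lengths(points, edges):
    """Spatial geometric distance for each topological connection."""
    return {(a, b): math.dist(points[a], points[b]) for a, b in edges}

lengths = edge_lengths(points, edges)
```

The same structure extends naturally to per-point feature values (color, curvature, etc.) attached to the nodes and to dynamic connection information attached to the edges.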
From the measurements and computations on the colored 3D face mesh one can obtain local information, comprising the planar information of each facial element itself and the spatial positions of the feature points on each element, as well as global information, namely the spatial relations between the elements. The local and global information reflect, locally and as a whole respectively, the information and structural relations carried by the RGBD face images.
S105: analyze the feature values and connection relations to obtain the 3D spatial distribution feature information of the feature points.
In step S105, by analyzing the feature values and connection relations, the three-dimensional facial shape information can be obtained, and hence the 3D spatial distribution feature information of each facial feature point, so that later face recognition can be performed using this information.
Unlike the prior art, the present invention builds a colored 3D face mesh from the feature points collected on the RGBD face image set and obtains the feature values and connection relations of the feature points from the mesh, thereby obtaining the 3D spatial distribution feature information of the facial feature points for use in face recognition. Because this information includes both color information and depth information, the facial information is more comprehensive; moreover, a facial skeleton can be built from it and used for recognition, so that non-geometric appearance changes such as pose, expression, illumination and facial makeup, as well as changes such as the face gaining or losing weight, do not affect recognition, and face recognition can therefore be more accurate.
Referring to Fig. 2, Fig. 2 is a flow diagram of a method for acquiring 3D facial feature information provided by another embodiment of the present invention.
S201: acquire an RGBD face image.
S202: acquire the feature points of the face from the RGBD face image.
S203: build a colored 3D face mesh from the feature points.
S204: measure the feature values of the feature points and compute the topological connections and spatial geometric distances between them according to the colored 3D face mesh.
S205: analyze the feature values and the topological connections and spatial geometric distances between the feature points by the finite element method to obtain the 3D spatial distribution feature information of the feature points.
Specifically, finite element analysis can be used to apply surface deformation to the colored 3D face mesh. Finite element analysis (FEA) simulates a real physical system (its geometry and load conditions) by mathematical approximation: using simple, interacting elements (units), a real system with infinitely many unknowns can be approximated with a finite number of unknowns.
For example, after analyzing the strain energy of each line element of the colored 3D face mesh, the element stiffness equations of the line elements can be established. Constraint elements are then introduced, such as point, line, tangent-vector and normal-vector constraint elements. When the design is checked, curves and surfaces must satisfy requirements on shape, position and size and on continuity with adjacent surfaces, all of which are realized through constraints. This embodiment handles these constraints by the penalty function method, finally obtaining the stiffness matrix and equivalent load array of the constraint elements.
The data structure of the deformable curves and surfaces is extended so that it contains not only geometric parameters such as order, control vertices and knot vectors, but also parameters expressing physical characteristics and external loads. A deformable curve or surface can thus represent a fairly complex shape as a whole, greatly simplifying the geometric model of the face; moreover, the physical and constraint parameters in the data structure uniquely determine the geometric configuration parameters of the face.
The deformable curves and surfaces are solved by a finite element program: for each type of constraint element, an element entry routine computes the element stiffness matrix and element load array of the corresponding constraint. Exploiting the symmetry, banded structure and sparsity of the global stiffness matrix, it is stored as a variable-bandwidth one-dimensional array. During assembly, not only the line- and surface-element stiffness matrices but also the constraint-element stiffness matrices are added into the global stiffness matrix, each entry placed at its corresponding position, while the constraint-element equivalent load arrays are added into the global load array; finally the linear algebraic system is solved by Gaussian elimination.
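A one-dimensional toy version of this penalty-method pipeline is sketched below: a discrete bending energy plays the role of the element stiffness equations, point constraints are added through a penalty weight at their own positions in the matrix, and the resulting linear system is solved directly (Gaussian elimination inside the solver). All sizes, weights and constraints are illustrative; the patent's actual elements are curves and surfaces in 3D.

```python
import numpy as np

def fit_deformable_curve(n, constraints, beta=1.0, w=1e4):
    """Minimize a discrete bending energy beta * ||D2 c||^2 subject to
    point constraints c[i] ~= p, enforced by a penalty weight w.  The
    stiffness matrix K = beta * D2^T D2 plus the penalty terms is
    assembled and the linear system K c = f solved directly."""
    # second-difference operator: discrete curvature of the curve
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    K = beta * D2.T @ D2
    f = np.zeros(n)
    for i, p in constraints:      # each constraint lands at its own slot
        K[i, i] += w
        f[i] += w * p
    return np.linalg.solve(K, f)

# pin the two ends and the midpoint; with collinear constraints the
# minimum-bending-energy curve is the straight line through them
c = fit_deformable_curve(11, [(0, 0.0), (5, 5.0), (10, 10.0)])
```

Because the constraints here are collinear, the solver recovers the straight line c[i] = i, the configuration with zero bending energy that satisfies all constraints.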
For example, the shaping of the facial curves and surfaces can be described by the following mathematical model: the required deformation curve c(u), or deformation surface s(μ, v), is the solution of an extreme-value problem of the form

min E, subject to (1) boundary interpolation constraints, (2) boundary continuity constraints, (3) constraints of characteristic curves in the surface, and (4) point constraints on the curve or surface,

where E is the energy functional of the curve or surface, which reflects its deformation characteristics to a certain extent and endows it with physical properties; f1, f2, f3 and f4 are functions of the variables concerned; the boundary of the parameter domain, a curve Γ′ in the surface parameter domain, and a parameter value (μ0, v0) in the parameter domain enter conditions (1)–(4) respectively. In this application the energy functional takes a standard stretching/bending/torsion form:

curve: E(c) = ∫ [ α |c′(u)|² + β |c″(u)|² + γ |c‴(u)|² ] du

surface: E(s) = ∫∫ [ α11 |s_μ|² + α22 |s_v|² + β11 |s_μμ|² + 2 β12 |s_μv|² + β22 |s_vv|² ] dμ dv

where α, β, γ denote the stretching, bending and torsion coefficients of the curve, and αij and βij are the stretching and bending coefficients of the surface at (μ, v) along the μ and v directions.
It can be seen from the mathematical model that the deformable curve-and-surface modeling method handles all kinds of constraints in the same way, satisfying local control while guaranteeing overall fairness. By the variational principle, solving the above extreme-value problem can be converted into solving the equation
Here δ indicates first variation.Formula (5) is a differential equation, since the equation is more complicated, it is difficult to find out essence Really analysis knot, therefore liberated using numerical value.For example, using finite element method.
The finite element method can be regarded as first choosing suitable interpolation functions as needed and then solving for the combination parameters, so the solution obtained is in continuous form; moreover, the mesh generated during pre-processing lays the foundation for the finite element analysis.
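The deformable-curve model above can be sketched discretely: minimize a stretch-plus-bend energy with point constraints as a linear least-squares problem. The coefficients alpha and beta and the constraint weight w are illustrative choices, and soft least-squares constraints stand in for the patent's finite element treatment of constraint elements:

```python
import numpy as np

def fair_curve(n, constraints, alpha=1.0, beta=1.0, w=1e3):
    """Minimise a discretised stretch + bend energy
    alpha*|c'|^2 + beta*|c''|^2 with soft point constraints,
    solved as one linear least-squares problem."""
    rows, rhs = [], []
    for i in range(n - 1):            # first differences ~ stretch term
        r = np.zeros(n); r[i], r[i + 1] = -1.0, 1.0
        rows.append(np.sqrt(alpha) * r); rhs.append(0.0)
    for i in range(n - 2):            # second differences ~ bend term
        r = np.zeros(n); r[i], r[i + 1], r[i + 2] = 1.0, -2.0, 1.0
        rows.append(np.sqrt(beta) * r); rhs.append(0.0)
    for i, target in constraints:     # point (interpolation) constraints
        r = np.zeros(n); r[i] = 1.0
        rows.append(w * r); rhs.append(w * target)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol

# Pin the curve at both ends and pull its middle up to 1.0.
y = fair_curve(11, [(0, 0.0), (5, 1.0), (10, 0.0)])
```

The solver fairs the free samples while honouring the pinned points, which is the local-control-plus-overall-fairness behaviour described above.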
In the recognition stage, the similarity measure between an unknown face image and a known face template is given by an energy function in which $C_i$ and $X_j$ are the features of the face to be recognized and of a face in the face database respectively, and $i_1, i_2, j_1, j_2, k_1, k_2$ index the 3D mesh vertex features. The first term of the formula measures the degree of similarity between corresponding local features $X_j$ and $C_i$ in the two vector fields; the second term evaluates the local positional relations and the matching order. It can be seen that the best match is the one that minimizes the energy function.
The finite element method above applies surface deformation to the face colour 3D mesh so that each point of the mesh moves steadily closer to the corresponding feature point of the real face, yielding the three-dimensional face shape information and, from it, the 3D spatial distribution feature information of the facial feature points.
Referring to Fig. 3, Fig. 3 is a flow diagram of a method for acquiring face 3D characteristic information provided by a further embodiment of the invention.
S301: obtain an RGBD face image.
S302: acquire the facial feature points from the RGBD face image.
In this embodiment, the discrete wavelet transform is used for face localization in order to acquire the facial feature points. First comes face-region localization: since the shape of a face approximates an ellipse, an ellipse-detection algorithm can determine the face region and obtain the rotation angle of the face. Ellipse detection yields parameters including the centre coordinates, the lengths of the major and minor axes and the rotation angle of the ellipse; the rotation angle determines the angle by which the face is rotated.
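A moment-based orientation estimate is one way to sketch this step: it recovers the centre and rotation angle of an elliptical region from its second-order moments. This is a hedged stand-in for the ellipse detection the text describes, not necessarily the patent's exact algorithm:

```python
import numpy as np

def region_orientation(mask):
    """Centre and rotation angle of an elliptical binary region,
    estimated from second-order central moments."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    angle = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    return (cx, cy), angle

# A synthetic elliptical "face region" rotated by 30 degrees.
yy, xx = np.mgrid[0:200, 0:200]
t = np.deg2rad(30.0)
u = (xx - 100) * np.cos(t) + (yy - 100) * np.sin(t)
v = -(xx - 100) * np.sin(t) + (yy - 100) * np.cos(t)
mask = (u / 60.0) ** 2 + (v / 30.0) ** 2 <= 1.0
(cx, cy), angle = region_orientation(mask)
```

The recovered angle then determines how far to de-rotate the face before feature localization.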
Next comes the localization of the facial features. The eyes, eyebrows, nose and mouth appear as horizontal features, that is, low-frequency signals in the x direction and high-frequency signals in the y direction, so the eyes and mouth are located from the LH component. The iris is one of the most important facial features and carries a large amount of information, so the original image is used to locate it.
(1) Eye and iris localization. The eyes are combined with the eyebrows and located as a single eye feature; if the obtained eye region completely contains the iris, the localization is correct. Once the eye region is located, the iris is located within it. Although the shape of the iris is a standard circle, the structure of the eye means the iris is always partially occluded, so it is located with the noise-robust Hough transform.
(2) Mouth and nose localization. The mouth and nose appear as horizontal features: in the LH component of the wavelet transform they form line segments or arcs parallel to the minor axis of the ellipse. The nose wings are vertical features with relatively stable characteristics, appearing in the HL component of the wavelet transform as line segments parallel to the major axis. Using the Hough transform together with the geometric relations between facial features, the line segments of the mouth, nose and nose wings are detected and labelled in the image.
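The noise robustness of the Hough transform for a partially occluded circle, as with the iris in step (1), can be sketched with a minimal fixed-radius accumulator; the image size, radius and synthetic occluded arc are all illustrative:

```python
import numpy as np

def hough_circle_centre(points, radius, shape):
    """Fixed-radius Hough transform: each edge point votes for every
    candidate centre at the given radius; the accumulator peak is the
    centre, even when part of the circle is missing."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False)
    for x, y in points:
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        ok = (cx >= 0) & (cx < shape[1]) & (cy >= 0) & (cy < shape[0])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(acc.argmax(), shape)  # (row, col)

# Edge points of a circle centred at (50, 50), radius 20, with part
# of the arc missing (the "occluded" portion of the iris).
angles = np.deg2rad(np.arange(-150, 60, 4))
pts = [(50 + 20 * np.cos(a), 50 + 20 * np.sin(a)) for a in angles]
row, col = hough_circle_centre(pts, 20, (100, 100))
```

Because every visible edge point votes independently, the peak survives missing arcs and scattered noise points, which is why the Hough transform suits the occluded iris.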
S303: establish the face colour 3D mesh from the feature points.
S304: measure the characteristic values of the feature points from the face colour 3D mesh and compute the dynamic link relations between the feature points.
S305: analyse the characteristic values and the dynamic link relations between the feature points with the wavelet-transform texture-analysis method to obtain the 3D spatial distribution feature information of the feature points.
Specifically, the dynamic link relations are the dynamic link relations of the various feature-point combinations. The wavelet transform is a local transform in both time and frequency; it has the property of multi-resolution analysis and can characterize the local features of a signal in both the time domain and the frequency domain. Through wavelet-transform texture analysis, that is, the extraction, classification and analysis of texture features, this embodiment combines the facial characteristic values with the dynamic link relation information, specifically including colour information and depth information, to finally obtain the three-dimensional face shape information. From that face shape information it then extracts the shape information that remains invariant under slight facial expression changes and encodes it as face shape model parameters, which can serve as the geometric features of the face, thereby obtaining the 3D spatial distribution feature information of the facial feature points.
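A minimal sketch of the wavelet texture-analysis idea, using a one-level Haar transform (the text does not name a specific wavelet; Haar is assumed here for illustration). It shows how horizontal structures, like the eye and mouth features discussed earlier, concentrate their energy in the LH subband:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar wavelet transform returning LL, LH, HL, HH."""
    a = img.astype(float)
    L = (a[:, 0::2] + a[:, 1::2]) / 2.0   # low-pass along x
    H = (a[:, 0::2] - a[:, 1::2]) / 2.0   # high-pass along x
    LL = (L[0::2, :] + L[1::2, :]) / 2.0
    LH = (L[0::2, :] - L[1::2, :]) / 2.0  # low x / high y: horizontal structures
    HL = (H[0::2, :] + H[1::2, :]) / 2.0
    HH = (H[0::2, :] - H[1::2, :]) / 2.0
    return LL, LH, HL, HH

# Horizontal stripes: constant along x, alternating along y.
img = np.zeros((16, 16))
img[0::2, :] = 1.0
LL, LH, HL, HH = haar_dwt2(img)
energies = {k: float((b ** 2).sum())
            for k, b in zip(("LL", "LH", "HL", "HH"), (LL, LH, HL, HH))}
```

Subband energies like these, computed per region of the colour and depth channels, are the kind of texture features the analysis step classifies.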
The methods for acquiring face 3D characteristic information provided by some other embodiments are also compatible with the acquisition of face 2D characteristic information, which can use any of the methods conventional in this field. In those embodiments, face 2D characteristic information can be obtained at the same time as the face 3D characteristic information, so that 3D and 2D recognition are performed on the face simultaneously, further improving the accuracy of face recognition.
For example, the basis of the three-dimensional wavelet transform is as follows: $A_{J_1}$ is the projection operator of the function $f(x, y, z)$ onto the space $V^3_{J_1}$, and $Q_n$ is a combination of $H_x, H_y, H_z$ and $G_x, G_y, G_z$. Let the matrices $H = (H_{m,k})$ and $G = (G_{m,k})$; then $H_x, H_y, H_z$ denote $H$ applied to the three-dimensional signal along the $x$, $y$ and $z$ directions, and $G_x, G_y, G_z$ denote $G$ applied to the three-dimensional signal along the $x$, $y$ and $z$ directions.
In the recognition stage, after the wavelet transform of the unknown face image, its low-frequency low-resolution sub-image is mapped into the face space to obtain its characteristic coefficients. The Euclidean distance between the characteristic coefficients to be classified and each person's characteristic coefficients can then be used, combined with the PCA algorithm, according to the formula

$K = \arg\min_{k \le N} \lVert Y - Y_k \rVert$

where $K$ is the person best matching the unknown face, $N$ is the number of people in the database, $Y$ is the $m$-dimensional vector obtained by mapping the unknown face onto the subspace formed by the eigenfaces, and $Y_k$ is the $m$-dimensional vector obtained by mapping the $k$-th known face in the database onto that subspace.
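The matching rule above can be sketched with a toy random gallery standing in for the face database; the gallery size, vector dimensionality and number of eigenfaces are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "gallery": N known faces as flattened feature vectors.
gallery = rng.normal(size=(5, 64))

# Eigenfaces = top-m principal directions of the centred gallery.
mean = gallery.mean(axis=0)
_, _, Vt = np.linalg.svd(gallery - mean, full_matrices=False)
eigenfaces = Vt[:3]                      # m = 3 eigenfaces

def project(x):
    """Map a face vector to its m-dimensional coefficient vector Y."""
    return eigenfaces @ (x - mean)

Yk = np.array([project(g) for g in gallery])

def identify(x):
    """Nearest neighbour in eigenface space: K = argmin_k ||Y - Y_k||."""
    d = np.linalg.norm(Yk - project(x), axis=1)
    return int(d.argmin())

# A lightly perturbed probe of gallery face 2 should map back to 2.
probe = gallery[2] + 0.05 * rng.normal(size=64)
```

The Euclidean distance in the low-dimensional coefficient space replaces a pixelwise comparison, which is the point of the PCA step.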
It should be appreciated that in another embodiment a 3D face recognition method based on 2D wavelet features can also be used for identification. First, 2D wavelet feature extraction must be carried out. The 2D wavelet basis function $g(x, y)$ generates the self-similar filter family

$g_{mn}(x, y) = a^{-m} g(x', y'), \quad a > 1, \; m, n \in \mathbb{Z}$

where $\sigma$ is the size of the Gaussian window; the self-similar filter functions $g_{mn}(x, y)$ are obtained from $g(x, y)$ by suitable dilation and rotation. Based on these functions, the wavelet features of an image $I(x, y)$ can be defined.
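A hedged sketch of Gabor-style 2D wavelet feature extraction: a Gaussian-windowed cosine carrier rotated through a set of orientations, with per-orientation response energies as the feature vector. The window size, carrier frequency and orientations are illustrative choices, not values from the patent:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor(size, sigma, theta, freq):
    """One real Gabor kernel: Gaussian window times a cosine carrier
    rotated by theta (one member of a self-similar family g_mn)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return (np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
            * np.cos(2.0 * np.pi * freq * xr))

def gabor_features(img, thetas, freq=0.25, sigma=2.0, size=9):
    """Mean squared filter response per orientation."""
    feats = []
    for th in thetas:
        k = gabor(size, sigma, th, freq)
        win = sliding_window_view(img, k.shape)   # all size x size patches
        resp = (win * k).sum(axis=(-2, -1))
        feats.append(float((resp ** 2).mean()))
    return np.array(feats)

# Vertical stripes vary along x, so the 0-rad filter responds most.
img = np.cos(2.0 * np.pi * 0.25 * np.arange(32))[None, :] * np.ones((32, 1))
f = gabor_features(img, [0.0, np.pi / 2.0])
```

Stacking such energies over scales and orientations yields the wavelet feature vector F used in the steps below.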
The facial-image 2D wavelet feature extraction algorithm is implemented as follows:
(1) Obtain the wavelet characterization of the face by wavelet analysis, converting the corresponding features of the original image $I(x, y)$ into a wavelet feature vector $F$ ($F \in R^m$).
(2) Use the fractional power polynomial (FPP) kernel model $k(x, y) = (x \cdot y)^d$ ($0 < d < 1$) to project the $m$-dimensional wavelet feature space $R^m$ into a higher $n$-dimensional space $R^n$.
(3) Based on kernel Fisher discriminant analysis (KFDA), establish the between-class scatter matrix $S_b$ and the within-class scatter matrix $S_w$ in the space $R^n$, and compute the orthonormal eigenvectors $\alpha_1, \alpha_2, \ldots, \alpha_n$ of $S_w$.
(4) Extract the significant discriminant feature vectors of the face image. Let $P_1 = (\alpha_1, \alpha_2, \ldots, \alpha_q)$, where $\alpha_1, \alpha_2, \ldots, \alpha_q$ are the eigenvectors of $S_w$ whose corresponding $q$ eigenvalues are positive and $q = \mathrm{rank}(S_w)$. Compute the eigenvectors $\beta_1, \beta_2, \ldots, \beta_L$ ($L \le c - 1$) corresponding to the $L$ largest eigenvalues, where $c$ is the number of face classes. The significant discriminant feature vector is $f_{regular} = B^T P_1^T y$, where $y \in R^n$ and $B = (\beta_1, \beta_2, \ldots, \beta_L)$.
(5) Extract the non-significant discriminant feature vectors of the face image. Compute the eigenvectors $\gamma_1, \gamma_2, \ldots, \gamma_L$ ($L \le c - 1$) corresponding to the $L$ largest eigenvalues, and let $P_2 = (\alpha_{q+1}, \alpha_{q+2}, \ldots, \alpha_m)$; the non-significant discriminant feature vectors are formed analogously.
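The FPP kernel of step (2) can be sketched directly; the signed form for negative dot products is one common convention, assumed here rather than taken from the patent:

```python
import numpy as np

def fpp_kernel_matrix(X, d=0.5):
    """Gram matrix of the fractional power polynomial kernel
    k(x, y) = sign(x . y) * |x . y|**d with 0 < d < 1."""
    dots = X @ X.T
    return np.sign(dots) * np.abs(dots) ** d

X = np.array([[1.0, 2.0],
              [2.0, 0.0],
              [0.0, 3.0]])
K = fpp_kernel_matrix(X)   # symmetric; K[i, i] = ||x_i|| ** (2 * d)
```

KFDA then works with this Gram matrix instead of explicit coordinates in the higher-dimensional space.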
In the 3D face recognition stage, the steps are as follows:
(1) Detect the frontal face and locate the key facial feature points in the frontal face image, such as the contour feature points, the left and right eyes, the mouth and the nose.
(2) Reconstruct a three-dimensional face model from the extracted two-dimensional Gabor feature vectors and a common 3D face database. To reconstruct a three-dimensional face model, the ORL (Olivetti Research Laboratory) single-face 3D face database is used, containing 100 detected face images; each face model in the database has nearly 70,000 vertices. A feature conversion matrix $P$ is determined: in the original three-dimensional face recognition method this matrix is usually the subspace-analysis projection matrix obtained by a subspace-analysis method, composed of the eigenvectors of the sample covariance matrix corresponding to the $m$ largest eigenvalues. Here the extracted wavelet discriminant feature vectors corresponding to the $m$ largest eigenvalues form the main feature conversion matrix $P'$, which is more robust to factors such as illumination, pose and expression than the original feature matrix $P$; that is, the features it represents are more accurate and stable.
(3) Process the newly generated face model with template matching and Fisher linear discriminant analysis (FLDA), extract the within-class differences and between-class differences of the model, and further optimize the final recognition result.
Referring to Fig. 4, Fig. 4 is a structural diagram of a device for acquiring face 3D characteristic information provided by an embodiment of the invention.
The invention also provides a device for acquiring face 3D characteristic information. Specifically, the device comprises an image acquisition module 10, an acquisition module 20, a mesh establishing module 30, a computing module 40 and an analysis module 50.
The image acquisition module 10 is used to obtain an RGBD face image.
The acquisition module 20 is connected to the image acquisition module 10 and is used to acquire the facial feature points from the RGBD face image. Specifically, the acquisition module 20 acquires the feature points by acquiring face elements, where the face elements include one or more of the eyebrows, eyes, nose, mouth, cheeks and chin.
The mesh establishing module 30 is connected to the acquisition module 20 and is used to establish the face colour 3D mesh from the feature points.
The computing module 40 is connected to the mesh establishing module 30 and is used to measure the characteristic values of the feature points from the face colour 3D mesh and to compute the connection relations between the feature points, where the characteristic values include one or more of position, distance, shape, size, angle, radian and curvature.
The analysis module 50 is connected to the computing module 40 and analyses the characteristic values and the connection relations to obtain the 3D spatial distribution feature information of the feature points.
In one embodiment, the connection relations are the topological connection relations and spatial geometric distances between the feature points; the analysis module applies surface deformation to the face colour 3D mesh by the finite element method to obtain the 3D spatial distribution feature information of the feature points.
In another embodiment, the connection relations are the dynamic link relation information of the various feature-point combinations; the analysis module combines wavelet-transform texture analysis to obtain the face shape information, then extracts the face shape information that remains invariant under slight expression changes and encodes it as face shape model parameters to obtain the 3D spatial distribution feature information of the feature points.
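The characteristic values the computing module measures (distance, angle and so on between feature points) can be sketched as follows; the eye-corner and nose-tip coordinates are hypothetical:

```python
import numpy as np

def feature_values(p, q, r):
    """Distance and angle features among three 3D feature points:
    the distances from p to q and from p to r, and the angle at p
    between the two directions (in degrees)."""
    d_pq = np.linalg.norm(q - p)
    d_pr = np.linalg.norm(r - p)
    cosang = np.dot(q - p, r - p) / (d_pq * d_pr)
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return d_pq, d_pr, angle

# Hypothetical feature-point coordinates on a face mesh.
left_eye = np.array([-3.0, 0.0, 0.0])
right_eye = np.array([3.0, 0.0, 0.0])
nose_tip = np.array([0.0, -4.0, 2.0])
d1, d2, ang = feature_values(nose_tip, left_eye, right_eye)
```

Because these quantities are computed in 3D, they stay stable under rotations that distort the corresponding 2D image measurements.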
Referring to Fig. 5, Fig. 5 is a structural diagram of a physical device for acquiring face 3D characteristic information provided by an embodiment of the invention. The device of this embodiment can execute the steps of the method above; for related content please refer to the detailed description of the method, which is not repeated here.
The intelligent electronic device includes a processor 61 and a memory 62 coupled to the processor 61.
The memory 62 is used to store the operating system, the installed programs, the acquired RGBD face image and the computed 3D spatial distribution feature information of the feature points.
The processor 61 is used to obtain the RGBD face image; acquire the facial feature points from the RGBD face image; establish the face colour 3D mesh from the feature points; measure the characteristic values of the feature points from the face colour 3D mesh and compute the connection relations between the feature points; and analyse the characteristic values and the connection relations to obtain the 3D spatial distribution feature information of the feature points.
In the several embodiments provided by the present invention, it should be understood that the disclosed device and method may be implemented in other ways. For example, the device embodiments described above are merely schematic: the division into modules or units is only a division by logical function, and in actual implementation there may be other ways of dividing them; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or units, and may be electrical, mechanical or of other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, may each exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
In conclusion the present invention obtains the 3d space distribution characteristics information of human face characteristic point, to be applied to the knowledge of face Not, since 3d space distribution characteristics information includes colouring information and depth information, face information is more comprehensive, also, passes through The 3d space distribution characteristics information can establish face skeleton, be identified by face skeleton, so the posture of face, table Situations such as non-geometric cosmetic variation and the face such as feelings, illumination and facial makeup are fat or thin variation will not carry out recognition of face It influences, therefore can be more accurate to the identification of face.
The above is only an implementation of the present invention and is not intended to limit the scope of the invention; all equivalent structures or equivalent process transformations made using the contents of the description and drawings of the invention, applied directly or indirectly in other related technical fields, are likewise included within the scope of patent protection of the present invention.

Claims (10)

1. A method for acquiring face 3D characteristic information, characterized by comprising the following steps:
obtaining an RGBD face image;
acquiring facial feature points from the RGBD face image;
establishing a face colour 3D mesh from the feature points;
measuring the characteristic values of the feature points from the face colour 3D mesh and computing the connection relations between the feature points;
analysing the characteristic values and the connection relations to obtain the 3D spatial distribution feature information of the feature points.
2. The method according to claim 1, characterized in that, in the step of computing from the face colour 3D mesh the characteristic values of the feature points and the connection relations between the feature points, the connection relations are the topological connection relations and spatial geometric distances between the feature points;
and in the step of obtaining the 3D spatial distribution feature information of the feature points from the characteristic values and the connection relations, the 3D spatial distribution feature information of the facial feature points is obtained by applying surface deformation to the face colour 3D mesh.
3. The method according to claim 1, characterized in that, in the step of computing from the face colour 3D mesh the characteristic values of the feature points and the connection relations between the feature points, the connection relations are the dynamic link relation information of the various combinations of the feature points;
and in the step of obtaining the 3D spatial distribution feature information of the feature points from the characteristic values and the connection relations, the 3D spatial distribution feature information of the facial feature points is obtained by obtaining face shape information.
4. The method according to claim 2 or 3, characterized in that, in the step of acquiring facial feature points from the RGBD face image, the feature points are acquired by acquiring face elements, wherein the face elements include one or more of the eyebrows, eyes, nose, mouth, cheeks and chin.
5. The method according to claim 4, characterized in that the characteristic values include one or more of position, distance, shape, size, angle, radian and curvature.
6. A device for acquiring face 3D characteristic information, characterized by comprising:
an image acquisition module for obtaining an RGBD face image;
an acquisition module connected to the image acquisition module for acquiring facial feature points from the RGBD face image;
a mesh establishing module connected to the acquisition module for establishing a face colour 3D mesh from the feature points;
a computing module connected to the mesh establishing module for measuring the characteristic values of the feature points from the face colour 3D mesh and computing the connection relations between the feature points;
an analysis module connected to the computing module for analysing the characteristic values and the connection relations to obtain the 3D spatial distribution feature information of the feature points.
7. The device according to claim 6, characterized in that the connection relations are the topological connection relations and spatial geometric distances between the feature points;
and the analysis module obtains the 3D spatial distribution feature information of the facial feature points by applying surface deformation to the face colour 3D mesh.
8. The device according to claim 6, characterized in that the connection relations are the dynamic link relation information of the various combinations of the feature points;
and the analysis module obtains the 3D spatial distribution feature information of the facial feature points by obtaining face shape information.
9. The device according to claim 7 or 8, characterized in that the acquisition module acquires the feature points by acquiring face elements, wherein the face elements include one or more of the eyebrows, eyes, nose, mouth, cheeks and chin.
10. The device according to claim 9, characterized in that the characteristic values include one or more of position, distance, shape, size, angle, radian and curvature.
CN201611036376.3A 2016-11-14 2016-11-14 The acquisition methods and equipment of face 3D characteristic information Active CN106778491B (en)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
CP01: Change in the name or title of a patent holder
Address after: 518057 Guangdong city of Shenzhen province Nanshan District Hing Road three No. 8 China University of Geosciences research base in building A808
Patentee after: Obi Zhongguang Technology Group Co., Ltd
Address before: 518057 Guangdong city of Shenzhen province Nanshan District Hing Road three No. 8 China University of Geosciences research base in building A808
Patentee before: SHENZHEN ORBBEC Co.,Ltd.