CN106778491A - Method and device for acquiring face 3D feature information - Google Patents
Method and device for acquiring face 3D feature information
- Publication number
- CN106778491A CN106778491A CN201611036376.3A CN201611036376A CN106778491A CN 106778491 A CN106778491 A CN 106778491A CN 201611036376 A CN201611036376 A CN 201611036376A CN 106778491 A CN106778491 A CN 106778491A
- Authority
- CN
- China
- Prior art keywords
- face
- feature point
- feature
- connection relation
- point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The invention provides a method and a device for acquiring face 3D feature information. The method comprises the following steps: obtaining an RGBD face image; acquiring feature points of the face from the RGBD face image; establishing a color 3D face mesh according to the feature points; measuring feature values of the feature points according to the color 3D face mesh and calculating the connection relations between the feature points; and analyzing the feature values and the connection relations to obtain the 3D spatial distribution feature information of the feature points. The device comprises an image obtaining module, a collection module, a mesh establishing module, a computing module and an analysis module. The present invention can obtain the 3D spatial distribution feature information of face feature points, including the color information and the depth information of the face, so that the acquired face information is more comprehensive and face recognition is more accurate.
Description
Technical field
The present invention relates to acquiring face 3D feature information, and in particular to a method and a device for acquiring face 3D feature information.
Background technology
Information security has drawn wide attention from all sectors of society. The main way to ensure information security is to accurately verify the identity of the information user and to decide, based on the verification result, whether the user is authorized to obtain the information, so as to prevent information leakage and protect the user's legitimate rights and interests. Reliable identity verification is therefore important and necessary.
Face recognition is a biometric technology that performs identity verification based on facial feature information, and it has attracted increasing attention as a convenient and secure identity authentication technique. Traditional face recognition is 2D face recognition, which has no depth information and is easily affected by non-geometric appearance changes such as pose, expression, illumination and facial makeup, making accurate face recognition difficult.
Summary of the invention
The present invention provides a method and a device for acquiring face 3D feature information, which can solve the problem in the prior art that accurate face recognition is difficult.
In order to solve the above technical problem, one technical solution adopted by the present invention is to provide a method for acquiring face 3D feature information, comprising the following steps: obtaining an RGBD face image; acquiring feature points of the face from the RGBD face image; establishing a color 3D face mesh according to the feature points; measuring feature values of the feature points according to the color 3D face mesh and calculating the connection relations between the feature points; and analyzing the feature values and the connection relations to obtain the 3D spatial distribution feature information of the feature points.
In the step of measuring the feature values of the feature points and calculating the connection relations between the feature points according to the color 3D face mesh, the connection relation is the topological connection relation and the spatial geometric distance between the feature points; in the step of obtaining the 3D spatial distribution feature information of the feature points according to the feature values and the connection relations, the 3D spatial distribution feature information of the face feature points is obtained by performing surface deformation on the color 3D face mesh.
In the step of measuring the feature values of the feature points and calculating the connection relations between the feature points according to the color 3D face mesh, the connection relation is the dynamic connection relation information of various combinations of the feature points; in the step of obtaining the 3D spatial distribution feature information of the feature points according to the feature values and the connection relations, the 3D spatial distribution feature information of the feature points is obtained by obtaining face shape information.
In the step of acquiring the feature points of the face from the RGBD face image, the feature points are collected by collecting face elements, wherein the face elements include one or more of the eyebrows, eyes, nose, mouth, cheeks and chin.
The feature values include one or more of position, distance, shape, size, angle, radian and curvature.
In order to solve the above technical problem, another technical solution adopted by the present invention is to provide a device for acquiring face 3D feature information, comprising an image obtaining module, a collection module, a mesh establishing module, a computing module and an analysis module. The image obtaining module is used to obtain an RGBD face image; the collection module is connected with the image obtaining module and is used to acquire the feature points of the face from the RGBD face image; the mesh establishing module is connected with the collection module and is used to establish a color 3D face mesh according to the feature points; the computing module is connected with the mesh establishing module and is used to measure the feature values of the feature points according to the color 3D face mesh and to calculate the connection relations between the feature points; the analysis module is connected with the computing module and is used to analyze the feature values and the connection relations to obtain the 3D spatial distribution feature information of the feature points.
The connection relation is the topological connection relation and the spatial geometric distance between the feature points; the analysis module obtains the 3D spatial distribution feature information of the face feature points by performing surface deformation on the color 3D face mesh.
The connection relation is the dynamic connection relation information of various combinations of the feature points; the analysis module obtains the 3D spatial distribution feature information of the feature points by obtaining face shape information.
The collection module collects the feature points by collecting face elements, wherein the face elements include one or more of the eyebrows, eyes, nose, mouth, cheeks and chin.
The feature values include one or more of position, distance, shape, size, angle, radian and curvature.
The beneficial effects of the invention are as follows. Unlike the prior art, the present invention establishes a color 3D face mesh from the feature points collected on a face RGBD image set, and obtains the feature values and connection relations of the feature points through the color 3D face mesh, so as to obtain the 3D spatial distribution feature information of the face feature points for use in face recognition. Since the 3D spatial distribution feature information includes color information and depth information, the face information is more comprehensive; moreover, a face skeleton can be established from the 3D spatial distribution feature information and recognition can be performed through the face skeleton, so that non-geometric appearance changes such as pose, expression, illumination and facial makeup, as well as changes in facial fatness or thinness, do not affect face recognition, and the recognition of faces can therefore be more accurate.
Brief description of the drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from these drawings without creative work.
Fig. 1 is a schematic flowchart of a method for acquiring face 3D feature information according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a method for acquiring face 3D feature information according to another embodiment of the present invention;
Fig. 3 is a schematic flowchart of a method for acquiring face 3D feature information according to a further embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a device for acquiring face 3D feature information according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a physical device for acquiring face 3D feature information according to an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work fall within the protection scope of the present invention.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of a method for acquiring face 3D feature information according to an embodiment of the present invention.
The method for acquiring face 3D feature information of this embodiment comprises the following steps:
S101: Obtain an RGBD face image.
Specifically, an RGBD face image includes the color information (RGB) and the depth information (Depth) of the face, and can be obtained, for example, by a Kinect sensor. The RGBD face image is specifically an image set, including, for example, multiple RGBD images of the same person from multiple angles.
S102: Acquire the feature points of the face from the RGBD face image.
In step S102, after the RGBD face image is obtained, feature points are collected on the RGBD face image by collecting face elements, wherein the face elements include one or more of the eyebrows, eyes, nose, mouth, cheeks and chin.
The feature points can be acquired in various ways: for example, by manually marking the feature points of the face such as the eyes, nose, mouth, cheeks and chin and their edges, or by determining the feature points of the face with a face feature point labeling method compatible with RGB (2D) images.
For example, a method for locating the key feature points of the face: nine feature points of the face are chosen, whose distribution is invariant to angle, namely the 2 eyeball center points, the 4 eye corner points, the midpoint between the two nostrils, and the 2 mouth corner points. On this basis, the positions of other feature points of each facial organ relevant to recognition can easily be obtained by extension, for use in further recognition algorithms.
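The nine key points above can be held in a small data structure; the label names below are purely illustrative (the patent only describes the points, it does not name them):

```python
# Hypothetical labels for the nine key feature points described above:
# 2 eyeball centers, 4 eye-corner points, the midpoint between the two
# nostrils, and 2 mouth-corner points. Names are illustrative only.
KEY_POINTS = [
    "left_eyeball", "right_eyeball",
    "left_inner_canthus", "left_outer_canthus",
    "right_inner_canthus", "right_outer_canthus",
    "nostril_midpoint",
    "left_mouth_corner", "right_mouth_corner",
]
```

Extended organ-specific feature points would be derived relative to these anchors.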
When extracting face features, traditional edge detection operators cannot reliably extract the features of the face (the contours of the eyes or mouth), because local edge information cannot be organized effectively. Viewed from the human visual system, however, locating the key feature points of the face by making full use of edge and corner features can greatly improve reliability.
Here the SUSAN (Smallest Univalue Segment Assimilating Nucleus) operator is selected to extract the edge and corner features of local regions. By its nature, the SUSAN operator can be used both to detect edges and to extract corners. Compared with edge detection operators such as Sobel and Canny, the SUSAN operator is therefore better suited to extracting features such as the eyes and mouth of the face, and in particular to automatically locating the eye corner points and mouth corner points.
The SUSAN operator is introduced as follows.
The image is traversed with a circular template. If the difference between the grey value of any other pixel in the template and the grey value of the pixel at the template center (the nucleus) is less than a certain threshold, that pixel is considered to have the same (or a similar) grey value as the nucleus. The region composed of the pixels meeting this condition is called the Univalue Segment Assimilating Nucleus (USAN). Associating each pixel of the image with a local region of similar grey values is the basis of the SUSAN criterion. In the detection itself, the whole image is scanned with the circular template, the grey value of each pixel in the template is compared with that of the center pixel, and a given threshold determines whether the pixel belongs to the USAN region, as in the following formula:

c(r, r0) = 1 if |I(r) - I(r0)| <= t, and c(r, r0) = 0 otherwise.
In this formula, c(r, r0) is the discriminant function for whether a pixel in the template belongs to the USAN region, I(r0) is the grey value of the template center pixel (the nucleus), I(r) is the grey value of any other pixel in the template, and t is the grey difference threshold. It affects the number of detected corners: reducing t captures finer changes in the image and thus yields more detections. The threshold t must be determined according to factors such as the contrast and noise of the image. The USAN area of a point in the image can then be expressed as:

n(r0) = sum over all r in the template of c(r, r0).
Here g is the geometric threshold, which affects the shape of the detected corners: the smaller g is, the sharper the detected corners. The two thresholds t and g play different roles. The threshold g determines the maximum USAN area that still yields a corner: any pixel in the image whose USAN region is smaller than g is judged to be a corner. The size of g determines not only how many corners can be extracted from the image but also, as described above, the sharpness of the detected corners; so once the required corner quality (sharpness) is determined, g can take a fixed value. The threshold t represents the minimum contrast of a detectable corner and also the maximum tolerable noise; it essentially determines the number of features that can be extracted. The smaller t is, the more features can be extracted, including from lower-contrast images. Therefore, different values of t should be chosen for images with different contrast and noise conditions.
The SUSAN operator has one prominent advantage: it is insensitive to local noise and has strong noise resistance. This is because it does not rely on the results of earlier image segmentation and avoids gradient computation; moreover, the USAN region is accumulated from the pixels in the template that have grey values similar to the template center, which is in effect an integration process and has a good suppressing effect on Gaussian noise.
The last stage of SUSAN 2D feature detection is to find the local maxima of the initial corner response, i.e. non-maximum suppression, to obtain the final corner positions. As its name suggests, non-maximum suppression keeps the initial response of the center pixel within a local region only if it is the maximum of that region, and deletes it otherwise, thereby obtaining the local maxima.
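The USAN computation and corner response described above can be sketched as follows; the template radius, the grey threshold t and the choice g = half the template area are illustrative defaults, not values from the patent (non-maximum suppression is omitted for brevity):

```python
import numpy as np

def susan_corners(img, t=27, radius=3, g=None):
    """Minimal SUSAN corner response, following the steps above: a circular
    template slides over the image, pixels whose grey value is within t of
    the nucleus form the USAN, and a response g - usan_area is kept where
    the USAN area falls below the geometric threshold g."""
    img = img.astype(np.float32)
    h, w = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    mask = ys**2 + xs**2 <= radius**2          # circular template offsets
    offs = list(zip(ys[mask], xs[mask]))
    if g is None:
        g = len(offs) / 2.0                    # half the template area
    resp = np.zeros((h, w), dtype=np.float32)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            nucleus = img[y, x]
            usan = sum(1.0 for dy, dx in offs
                       if abs(img[y + dy, x + dx] - nucleus) <= t)
            if usan < g:                       # small USAN -> corner candidate
                resp[y, x] = g - usan
    return resp

# toy image: bright square on a dark background; its corners respond,
# interior and flat background do not
im = np.zeros((12, 12)); im[4:9, 4:9] = 255
r = susan_corners(im)
```

A non-maximum suppression pass over `r`, as described above, would then keep only the local maxima as final corner positions.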
(1) Automatic localization of the eyeballs and eye corners. In the automatic localization of the eyeballs and eye corners, the face is first coarsely located by normalized template matching, and the approximate region of the face is determined in the whole face image. Common eye localization algorithms locate the eyes according to the valley-point property of the eyes; here, valley-point search and directional projection are combined with the symmetry of the eyeballs, and the correlation between the two eyes is used to improve the accuracy of eye localization. Integral projection of the gradient map is performed on the upper-left and upper-right parts of the face region, and the integral projection histogram is normalized. The approximate position of the eyes in the y direction is first determined from the valley point of the horizontal projection, and then x is varied over a larger range to find the valley points in this region, which are taken as the center points of the two eyeballs.
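The integral-projection idea above can be sketched on a grey image: summing along rows and columns and looking for the minima (valley points) of the normalized curves gives coarse eye coordinates. The toy image and values below are illustrative only:

```python
import numpy as np

def projection_valleys(gray):
    """Horizontal and vertical integral projections of a grey image, as
    used above for coarse eye localization: dark eye rows/columns appear
    as minima (valley points) of the normalized projection curves."""
    h_proj = gray.sum(axis=1).astype(np.float64)   # one value per row
    v_proj = gray.sum(axis=0).astype(np.float64)   # one value per column
    h_proj /= h_proj.max()
    v_proj /= v_proj.max()
    return int(np.argmin(h_proj)), int(np.argmin(v_proj))

# toy "face": bright field with a dark band at row 3 (the eye line) and a
# darker column at column 5 (one eye)
face = np.full((10, 10), 200.0)
face[3, :] = 40.0
face[:, 5] -= 30.0
eye_row, eye_col = projection_valleys(face)
```

In the real method the projection is taken over the gradient map of the upper face halves, but the valley-search step is the same.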
On the basis of the two eyeball positions, the eye regions are processed: a threshold is first determined by adaptive binarization to obtain a binary image of the eye region, and then, combined with the SUSAN operator, an edge and corner detection algorithm is used to accurately locate the inner and outer eye corner points within the eye region. From the eye-region edge image obtained by the above algorithm, corner extraction on the boundary curves then yields the accurate positions of the inner and outer corners of the two eyes.
(2) Automatic localization of the nose region feature point. The key feature point of the nose region is defined as the midpoint of the line connecting the two nostril centers, i.e. the nose base center point. The position of the nose base center point is relatively stable, and it can also serve as a reference point when normalizing the face image during preprocessing. Based on the two eyeball positions already found, the positions of the two nostrils are determined by regional grey-level integral projection.
First, the strip region spanning the two pupils is intercepted, integral projection in the Y direction is performed, and the projection curve is analyzed. Searching downward along the projection curve from the Y coordinate of the eyeballs, the first valley point is found (by choosing an appropriate peak-valley delta value, burrs caused by factors such as facial scars or glasses can be ignored), and this valley point is taken as the Y-coordinate reference of the nostrils. In a second step, the region whose width spans the X coordinates of the two eyeballs and whose height is delta pixels below the nostril Y coordinate (for example, delta = [nostril Y coordinate - eyeball Y coordinate] x 0.06) is projected in the X direction, and the projection curve is analyzed: taking the X coordinate of the midpoint between the two pupils as the center, the curve is searched to the left and to the right, and the first valley point found on each side is taken as the X coordinate of the center of the left and right nostril respectively. The midpoint of the two nostrils is computed as the nose base center point, its accurate position is obtained, and the nose region is delimited.
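The downward valley search with a peak-valley delta filter, as used above for the nostril Y coordinate, can be sketched as follows; the window of 3 samples and the delta value are illustrative, not from the patent:

```python
import numpy as np

def first_valley_below(curve, start, delta=5.0):
    """Walk a projection curve downward from index `start` and return the
    first local minimum whose depth relative to the preceding samples
    exceeds `delta`, mirroring the peak-valley filtering described above
    (delta suppresses small burrs caused by scars or glasses)."""
    for i in range(start + 1, len(curve) - 1):
        if curve[i] < curve[i - 1] and curve[i] <= curve[i + 1]:
            left_peak = max(curve[max(0, i - 3):i])
            if left_peak - curve[i] >= delta:
                return i
    return None

# synthetic Y-projection: a shallow burr at index 4, the true nostril
# valley at index 8; the search starts below the eye line at index 2
proj = np.array([9.0, 9, 9, 9, 8.5, 9, 9, 9, 2, 9, 9])
y = first_valley_below(proj, start=2, delta=3.0)
```

The same routine, run on the X-direction projection to the left and right of the pupil midpoint, would give the two nostril X coordinates.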
(3) Automatic localization of the mouth corners. Differences in facial expression may cause large variations in mouth shape, and the mouth region is easily disturbed by factors such as beards, so the accuracy of mouth feature point extraction has a large influence on recognition. Since the positions of the mouth corner points are relatively little affected by expression and are more accurate, the two mouth corner points are taken as the important feature points of the mouth region.
On the basis of the eye regions and the nose region feature point already determined, regional grey-level integral projection is first used to find the first valley point of the Y-coordinate projection curve below the nostrils (similarly, burrs caused by factors such as beards or moles must be eliminated by an appropriate peak-valley delta value) as the Y-coordinate position of the mouth; the mouth region is then selected and processed with the SUSAN operator to obtain the mouth edge map; finally, corner extraction yields the accurate positions of the two mouth corners.
S103: Establish a color 3D face mesh according to the feature points.
S104: Measure the feature values of the feature points according to the color 3D face mesh and calculate the connection relations between the feature points.
Specifically, the relevant feature values of the feature points of each facial feature can be measured from the color information. These feature values are measurements of the facial features in the 2D plane, including one or more of position, distance, shape, size, angle, radian and curvature, and additionally measurements of color, brightness, texture and the like. For example, starting from the central pixel of the iris and extending outward, all pixel positions of the eye, the shape of the eye, the inclination radian of the eye corners, the eye color and so on can be obtained.
Combining the color information and the depth information, the connection relations between the feature points can then be calculated. A connection relation can be the topological connection relation and spatial geometric distance between feature points, or the dynamic connection relation information of various combinations of feature points, and so on.
Measurement and calculation on the color 3D face mesh can yield local information, comprising the planar information of each face element itself and the spatial positional relations of the feature points on each element, as well as global information on the spatial positional relations between the elements. The local information and the global information reflect, locally and globally respectively, the information and structural relations contained in the face RGBD images.
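One concrete reading of the "spatial geometric distance" part of the connection relation is the pairwise 3D Euclidean distance between mesh feature points; the patent does not fix a metric, so this is a hedged sketch:

```python
import numpy as np

def geometry_distances(points):
    """Pairwise 3D Euclidean distances between mesh feature points, one
    possible realization of the spatial geometric distances in the
    connection relation described above."""
    p = np.asarray(points, dtype=np.float64)
    diff = p[:, None, :] - p[None, :, :]       # (N, N, 3) difference tensor
    return np.sqrt((diff ** 2).sum(axis=-1))   # (N, N) distance matrix

# three illustrative feature points as (x, y, depth) triples
pts = [(0.0, 0.0, 0.0), (3.0, 4.0, 0.0), (0.0, 0.0, 2.0)]
D = geometry_distances(pts)
```

The topological part of the connection relation would additionally record which feature points are joined by mesh edges, e.g. as an adjacency list over the same point indices.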
S105: Analyze the feature values and the connection relations to obtain the 3D spatial distribution feature information of the feature points.
In step S105, by analyzing the feature values and the connection relations, stereoscopic face shape information can be obtained, and thus the 3D spatial distribution feature information of each feature point of the face. When face recognition is performed later, the face can be recognized through its 3D spatial distribution feature information.
Unlike the prior art, the present invention establishes a color 3D face mesh from the feature points collected on a face RGBD image set, and obtains the feature values and connection relations of the feature points through the color 3D face mesh, so as to obtain the 3D spatial distribution feature information of the face feature points for use in face recognition. Since the 3D spatial distribution feature information includes color information and depth information, the face information is more comprehensive; moreover, a face skeleton can be established from the 3D spatial distribution feature information and recognition can be performed through the face skeleton, so that non-geometric appearance changes such as pose, expression, illumination and facial makeup, as well as changes in facial fatness or thinness, do not affect face recognition, and the recognition of faces can therefore be more accurate.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of a method for acquiring face 3D feature information according to another embodiment of the present invention.
S201: Obtain an RGBD face image.
S202: Acquire the feature points of the face from the RGBD face image.
S203: Establish a color 3D face mesh according to the feature points.
S204: Measure the feature values of the feature points according to the color 3D face mesh and calculate the topological connection relations and spatial geometric distances between the feature points.
S205: Analyze the feature values and the topological connection relations and spatial geometric distances between the feature points using a finite element method to obtain the 3D spatial distribution feature information of the feature points.
Specifically, surface deformation can be applied to the color 3D face mesh using finite element analysis. Finite element analysis (FEA, Finite Element Analysis) simulates a real physical system (its geometry and load conditions) using a method of mathematical approximation: with simple interacting elements (units), a real system with infinitely many unknowns can be approximated by a finite number of unknowns.
For example, after analyzing the strain energy of each line element of the color 3D face mesh, the element stiffness equations of the line elements can be established. Constraint elements are then introduced, such as point, line, tangent-vector and normal-vector constraint element types. When checking the design, curves and surfaces must meet requirements on shape, position and size and on continuity with adjacent surfaces, all of which are realized through constraints. This embodiment handles these constraints by the penalty function method, finally obtaining the stiffness matrices and equivalent load arrays of the constraint elements.
The data structure of the deformable curves and surfaces is extended so that it contains both the geometric parameters, such as order, control vertices and knot vectors, and parameters expressing physical characteristics and external loads. In this way, deformable curves and surfaces can represent some complex shapes as a whole, which greatly simplifies the geometric model of the face. Moreover, the physical parameters and constraint parameters in the data structure uniquely determine the geometric configuration parameters of the face.
Solving the deformable curves and surfaces by finite elements is implemented in program form: for each type of constraint element an element entry routine is provided, which can compute the element stiffness matrix and element load array of any constraint. Exploiting the symmetry, bandedness and sparsity of the global stiffness matrix, a variable-bandwidth one-dimensional array storage scheme is used for it. During assembly, not only the line element and surface element stiffness matrices but also the constraint element stiffness matrices are added into the global stiffness matrix, entry by corresponding entry, while the constraint element equivalent load arrays are added into the global load array; finally the linear algebraic system is solved by Gaussian elimination.
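The penalty function method named above can be illustrated on a tiny linear system: a point constraint u[idx] = value is imposed by adding a large weight to the corresponding diagonal entry of the stiffness matrix and a matching term to the load array. The 3-dof system and the weight w are illustrative, not from the patent:

```python
import numpy as np

def solve_with_penalty(K, f, idx, value, w=1e8):
    """Impose a point constraint u[idx] = value on the system K u = f by
    the penalty method: add w to the diagonal entry and w*value to the
    load, then solve the resulting linear system (here via numpy's
    Gaussian-elimination-based solver)."""
    K = K.astype(np.float64).copy()
    f = f.astype(np.float64).copy()
    K[idx, idx] += w          # penalty term in the stiffness matrix
    f[idx] += w * value       # matching term in the load array
    return np.linalg.solve(K, f)

# a tiny 3-dof "stiffness" system with the middle dof pinned to 1.0
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
f = np.zeros(3)
u = solve_with_penalty(K, f, idx=1, value=1.0)
```

In the full method, assembled constraint-element stiffness matrices and equivalent load arrays play the role of the single penalty term here.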
For example, the shaping method of face curves and surfaces can be described with a mathematical model: the required deformation curve C(u) or surface S(u, v) is the solution of an extreme-value problem of the form

minimize E, subject to (1) a boundary interpolation constraint on the border of the parameter domain, (2) a boundary continuity constraint, (3) a constraint along a characteristic curve Γ' in the surface parameter domain, and (4) point constraints at given parameter values (μ0, v0) in the parameter domain.

Here E is the energy functional of the curve or surface, which reflects to some extent the deformation characteristics of the curve or surface and endows it with physical properties, and f1, f2, f3, f4 are the functions of their respective variables expressing these constraints. In this application, the energy functional E takes the following form:
Curve: E(C) = ∫ ( α|C'(u)|^2 + β|C''(u)|^2 + γ|C'''(u)|^2 ) du

Surface: E(S) = ∬ ( α11|S_μ|^2 + α22|S_v|^2 + β11|S_μμ|^2 + 2β12|S_μv|^2 + β22|S_vv|^2 ) dμ dv

where α, β, γ are respectively the stretching, bending and torsion coefficients of the curve, and αij and βij are respectively the stretching and bending coefficients of the surface at (μ, v) along the μ and v directions.
As can be seen from the mathematical model, the deformable curve and surface modeling method handles all kinds of constraints in a uniform way, satisfying local control while ensuring overall fairness. Using the variational principle, solving the above extreme-value problem can be converted into solving the following equation:

δE = 0 (5)

Here δ denotes the first variation. Equation (5) is a differential equation; since it is rather complicated and an exact analytical solution is difficult to obtain, a numerical solution is used, for example the finite element method.
The finite element method can be regarded as first choosing a suitable interpolation form as needed and then solving for the combination parameters, so the solution obtained is in continuous form; the mesh generated in preprocessing also lays the foundation for the finite element analysis.
In the recognition stage, the similarity measure between an unknown face image and a known face template is given by a two-term energy function, in which Ci and Xj are respectively the features of the face in the face database and of the face to be identified, and i1, i2, j1, j2, k1, k2 are 3D mesh vertex features. The first term measures the similarity of the corresponding local features Xj and Ci in the two vector fields, and the second term computes the local positional relations and matching order; it can be seen that the best match is the one with minimum energy.
Surface deformation is applied to the face color 3D mesh by the above finite element method, so that each point of the face color 3D mesh continually approaches the feature points of the real face, thereby obtaining stereoscopic face shape information and, in turn, the 3D spatial distribution feature information of the facial feature points.
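As a rough illustration of mesh points approaching measured feature points, the following NumPy sketch moves each vertex a fraction of the way toward its nearest feature point at every iteration. The function name, step size, and nearest-point correspondence rule are invented for illustration; the patent's deformation is the finite element scheme above.

```python
import numpy as np

def fit_mesh_to_features(mesh_pts, feature_pts, iters=20, step=0.5):
    """Pull each mesh vertex toward its nearest measured feature point."""
    pts = mesh_pts.copy()
    for _ in range(iters):
        # pairwise distances, shape (n_mesh, n_features)
        d = np.linalg.norm(pts[:, None, :] - feature_pts[None, :, :],
                           axis=-1)
        nearest = feature_pts[d.argmin(axis=1)]   # closest feature per vertex
        pts += step * (nearest - pts)             # partial move toward it
    return pts
```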
Refer to Fig. 3, which is a schematic flow chart of a method for acquiring face 3D feature information provided by a further embodiment of the present invention.
S301: Obtain an RGBD face image.
S302: Acquire the feature points of the face from the RGBD face image.
In this embodiment, face localization is performed using the discrete wavelet transform in order to collect the facial feature points. The first step is face region localization: the shape of the face approximates an ellipse, so an ellipse detection algorithm can be used to determine the face region and obtain the face rotation angle. Ellipse detection yields parameters including the coordinates of the center point, the lengths of the major and minor axes, and the rotation angle of the ellipse; the rotation angle determines the angle by which the face is rotated.
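The ellipse parameters mentioned here can be estimated, for instance, from the second-order central moments of a binary face mask. This is one standard way to recover the center, axis lengths, and rotation angle; it is offered as a sketch and is not necessarily the detection algorithm the patent intends.

```python
import numpy as np

def ellipse_params(mask):
    """Approximate the elliptical region in a binary mask: centroid,
    axis lengths, and rotation angle from second-order central moments."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    x, y = xs - cx, ys - cy
    mu20, mu02, mu11 = (x * x).mean(), (y * y).mean(), (x * y).mean()
    # orientation of the principal axis
    angle = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    common = np.sqrt(4 * mu11 ** 2 + (mu20 - mu02) ** 2)
    major = 2 * np.sqrt(2 * (mu20 + mu02 + common))   # full axis lengths
    minor = 2 * np.sqrt(2 * (mu20 + mu02 - common))
    return (cx, cy), (major, minor), angle
```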
Next comes the localization of facial features. For example, the eyes, eyebrows, nose, and mouth appear as horizontal features, i.e., low-frequency signals in the x direction and high-frequency signals in the y direction, so the LH component is selected to locate the eyes and mouth. The iris is one of the most important features of the face and carries a large amount of information, so the original image is used for iris localization.
(1) Eye and iris localization. The eyes, taken together with the eyebrows, are localized as the eye feature. If the eye region obtained completely contains the iris, the localization is correct. After the eye region is obtained, the iris is localized; the iris is a standard circle in shape, but because of the structure of the eye the iris is always partially occluded, so the interference-resistant Hough transform is used to localize it.
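A minimal Hough transform for circles of a known radius illustrates why this method tolerates partial occlusion of the iris: each remaining edge point still votes independently for the true centre. The code below is a didactic sketch with assumed resolutions, not the patent's detector.

```python
import numpy as np

def hough_circle(edge_points, radius, shape):
    """Accumulate votes for circle centres at a fixed radius; robust to
    partial occlusion because each edge point votes independently."""
    acc = np.zeros(shape)
    thetas = np.linspace(0, 2 * np.pi, 180, endpoint=False)
    for (y, x) in edge_points:
        # every centre at distance `radius` from this edge point gets a vote
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(acc.argmax(), shape)
```

Feeding it only the upper half of a circle (simulating eyelid occlusion) still recovers the centre, since the surviving arc's votes concentrate there.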
(2) Mouth and nose localization. The mouth and nose appear as horizontal features, showing up in the wavelet LH component as line segments or arcs parallel to the minor axis of the ellipse. The nose wings are vertical features with relatively stable characteristics, appearing in the wavelet HL component as line segments parallel to the major axis. Using the Hough transform together with the geometric relations among facial features, the line segments of the mouth, nose, and nose wings are detected and marked in the image.
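Line detection with the Hough transform works the same way in (θ, ρ) space: horizontal mouth and nose segments produce peaks near θ = 90°. The following is again a simplified sketch with assumed angular resolution, not the patent's exact procedure.

```python
import numpy as np

def hough_lines(points, shape, n_theta=180):
    """Vote in (theta, rho) space; a horizontal segment shows up as a
    peak near theta = 90 degrees."""
    diag = int(np.hypot(*shape))
    acc = np.zeros((n_theta, 2 * diag))
    thetas = np.deg2rad(np.arange(n_theta))
    for (y, x) in points:
        # rho = x cos(theta) + y sin(theta), offset so indices are >= 0
        rho = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        np.add.at(acc, (np.arange(n_theta), rho + diag), 1)
    t, r = np.unravel_index(acc.argmax(), acc.shape)
    return t, r - diag            # (theta in degrees, rho in pixels)
```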
S303: Build a face color 3D mesh according to the feature points.
S304: Measure the feature values of the feature points according to the face color 3D mesh and calculate the dynamic connection relation information between the feature points.
S305: Analyze the feature values and the dynamic connection relations between the feature points using a wavelet-transform texture analysis method, to obtain the 3D spatial distribution feature information of the feature points.
Specifically, the dynamic connection relation is the dynamic connection relation of combinations of various feature points. The wavelet transform is a local transform in both time and frequency; it has the property of multiresolution analysis and can characterize local signal features in both the time and frequency domains. In this embodiment, wavelet-transform texture analysis extracts, classifies, and analyzes texture features, combined with the facial feature values and the dynamic connection relation information, which specifically include color information and depth information, to finally obtain stereoscopic face shape information. From this face shape information, the face shape information that is invariant under slight changes in facial expression is then extracted and encoded as face shape model parameters; these model parameters can serve as the geometric features of the face, thereby yielding the 3D spatial distribution feature information of the facial feature points.
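A single level of the 2D Haar transform shows how the LL/LH/HL/HH subbands referred to throughout this description arise. The subband naming below follows the convention used above (LH = low-pass in x, high-pass in y, where horizontal features such as eyes and mouth respond); since conventions vary across texts, that mapping is an assumption.

```python
import numpy as np

def haar2d(img):
    """One level of 2D Haar decomposition: LL (approximation) plus
    LH, HL, HH (detail) subbands, each half the size of the input."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # low-pass along x
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # high-pass along x
    LL = (a[0::2] + a[1::2]) / 2.0
    LH = (a[0::2] - a[1::2]) / 2.0            # low in x, high in y
    HL = (d[0::2] + d[1::2]) / 2.0            # high in x, low in y
    HH = (d[0::2] - d[1::2]) / 2.0
    return LL, LH, HL, HH
```

A horizontal edge (rows constant, intensity changing vertically) produces energy only in LH, matching the claim that horizontal facial features are located from the LH component.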
The methods for acquiring face 3D feature information provided by some other embodiments are also compatible with the acquisition of face 2D feature information, which can use any of the various methods conventional in this field. In those embodiments, face 2D feature information can be acquired at the same time as the face 3D feature information, so that both 3D and 2D recognition are performed on the face simultaneously, further improving the accuracy of face recognition.
For example, the basis of the 3D wavelet transform is as follows:
Wherein, AJ1 is the projection operator of the function f(x, y, z) onto the space V3J1, and Qn is a combination of Hx, Hy, Hz and Gx, Gy, Gz.
Let the matrices H = (Hm,k) and G = (Gm,k); then Hx, Hy, Hz denote H applied along the x, y, z directions of the three-dimensional signal, respectively, and Gx, Gy, Gz denote G applied along the x, y, z directions of the three-dimensional signal, respectively.
In the recognition phase, after the wavelet transform of the unknown face image, its low-frequency low-resolution subimage is mapped into face space to obtain the feature coefficients. The Euclidean distance between the feature coefficients to be classified and each person's feature coefficients can then be used, combined with the PCA algorithm, according to the formula:
In the formula, K is the person best matching the unknown face, N is the number of entries in the database, Y is the m-dimensional vector obtained by mapping the unknown face onto the subspace formed by the eigenfaces, and Yk is the m-dimensional vector obtained by mapping a known face in the database onto the subspace formed by the eigenfaces.
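The eigenface projection and Euclidean nearest-neighbour matching described here can be sketched as follows. The SVD-based implementation, the function name, and the choice of `m` are illustrative assumptions rather than the patent's exact procedure.

```python
import numpy as np

def eigenface_match(train, labels, query, m=4):
    """Project faces onto the top-m eigenfaces and return the label of
    the training face whose coefficients are nearest in Euclidean
    distance to the query's coefficients."""
    mean = train.mean(axis=0)
    X = train - mean
    # eigenfaces = top-m right singular vectors of the centred data
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    W = Vt[:m]
    coeffs = X @ W.T                     # Y_k for every known face
    q = (query - mean) @ W.T             # Y for the unknown face
    k = np.linalg.norm(coeffs - q, axis=1).argmin()
    return labels[k]
```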
It should be understood that in another embodiment, a 3D face recognition method based on 2D wavelet features can also be used for identification. First, 2D wavelet feature extraction must be performed; the 2D wavelet basis function g(x, y) is defined as
gmn(x, y) = a^(-mn) g(x′, y′), a > 1, m, n ∈ Z
Wherein, σ is the size of the Gaussian window, and a set of self-similar filter functions can be obtained from gmn(x, y) by appropriately dilating and rotating g(x, y). Based on the above functions, the wavelet features of the image I(x, y) can then be defined.
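A self-similar Gabor filter bank generated by dilation and rotation, as described, might look like the following sketch. The window size, base frequency, scale factor, and function names are assumed values chosen for illustration.

```python
import numpy as np

def gabor_kernel(size, sigma, freq, theta):
    """A single Gabor kernel: Gaussian window times a complex sinusoid
    oriented at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinate
    g = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return g * np.exp(2j * np.pi * freq * xr)

def gabor_bank(size=15, sigma=3.0, scales=3, orientations=4, a=2.0):
    """Self-similar bank: frequency shrinks by factor a per scale,
    orientation steps through pi."""
    return [gabor_kernel(size, sigma, 0.25 / a ** m,
                         n * np.pi / orientations)
            for m in range(scales) for n in range(orientations)]
```

Convolving an image with each kernel and taking magnitudes yields the wavelet feature vector F used in the steps below.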
The implementation steps of the 2D wavelet extraction algorithm for face images are as follows:
(1) Obtain the wavelet representation of the face by wavelet analysis, converting the corresponding features in the original image I(x, y) into a wavelet feature vector F (F ∈ Rm).
(2) Use the fractional power polynomial (FPP) kernel model k(x, y) = (x·y)^d (0 < d < 1) to project the m-dimensional wavelet feature space Rm into a higher n-dimensional space Rn.
(3) Based on the kernel Fisher discriminant analysis (KFDA) algorithm, build the between-class matrix Sb and the within-class matrix Sw in the space Rn. Compute the orthonormal eigenvectors α1, α2, ..., αn of Sw.
(4) Extract the significant discriminant feature vectors of the face image. Let P1 = (α1, α2, ..., αq), where α1, α2, ..., αq are the q eigenvectors of Sw whose corresponding eigenvalues are positive, q = rank(Sw). Compute the eigenvectors β1, β2, ..., βL corresponding to the L largest eigenvalues (L ≤ c−1), where c is the number of face classes. The significant discriminant feature vector is fregular = BT P1T Y, wherein y ∈ Rn and B = (β1, β2, ..., βL).
(5) Extract the insignificant discriminant feature vectors of the face image. Compute the eigenvectors γ1, γ2, ..., γL corresponding to the largest eigenvalues (L ≤ c−1). Let P2 = (αq+1, αq+2, ..., αm); the insignificant discriminant feature vector is then obtained accordingly.
The 3D face recognition stage includes the following steps:
(1) Detect a frontal face, locating the key facial feature points in the frontal face image, such as the contour feature points of the face, the left and right eyes, the mouth, and the nose.
(2) Reconstruct a three-dimensional face model from the 2D Gabor feature vectors extracted above and a conventional 3D face database. To reconstruct the three-dimensional face model, the ORL (Olivetti Research Laboratory) single-face 3D face database is used, which includes 100 detected face images. Each face model in the database has nearly 70,000 vertices. A feature transformation matrix P is determined; in the original 3D face recognition method, this matrix is usually the subspace-analysis projection matrix obtained by a subspace analysis method, composed of the eigenvectors of the sample covariance matrix corresponding to the first m largest eigenvalues. The extracted wavelet discriminant feature vectors corresponding to the m largest eigenvalues compose the principal feature transformation matrix P′; this feature transformation matrix is more robust than the original feature matrix P to factors such as illumination, pose, and expression, i.e., the features it represents are more accurate and stable.
(3) Process the newly generated face model using template matching and the Fisher linear discriminant analysis (FLDA) method, extracting the within-class differences and between-class differences of the model to further optimize the final recognition result.
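For the two-class case, the FLDA step reduces to the classic Fisher direction w = Sw⁻¹(m0 − m1), which maximizes between-class separation relative to within-class spread. The regularization term below is an added assumption for numerical stability, and the function name is illustrative.

```python
import numpy as np

def flda_direction(X, labels, eps=1e-6):
    """Fisher linear discriminant for two classes: the direction
    w = Sw^{-1} (m0 - m1) that best separates the class means."""
    labels = np.asarray(labels)
    X0, X1 = X[labels == 0], X[labels == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    Sw += eps * np.eye(X.shape[1])   # regularise (assumed, for stability)
    w = np.linalg.solve(Sw, m0 - m1)
    return w / np.linalg.norm(w)
```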
Please refer to Fig. 4, which is a schematic structural diagram of a device for acquiring face 3D feature information provided by an embodiment of the present invention.
The present invention also provides a device for acquiring face 3D feature information. Specifically, the device includes an image acquisition module 10, a collection module 20, a mesh building module 30, a calculation module 40, and an analysis module 50.
The image acquisition module 10 is used to obtain RGBD face images.
The collection module 20 is connected with the image acquisition module 10 and is used to acquire the feature points of the face from the RGBD face image. Specifically, the collection module 20 collects the feature points by collecting facial elements, where the facial elements include one or more of the eyebrows, eyes, nose, mouth, cheeks, and chin.
The mesh building module 30 is connected with the collection module 20 and is used to build a face color 3D mesh according to the feature points.
The calculation module 40 is connected with the mesh building module 30 and is used to measure the feature values of the feature points according to the face color 3D mesh and to calculate the connection relations between the feature points. The feature values include one or more of position, distance, shape, size, angle, radian, and curvature.
The analysis module 50 is connected with the calculation module 40 and is used to analyze the feature values and the connection relations to obtain the 3D spatial distribution feature information of the feature points.
In one embodiment, the connection relations are the topological connection relations and spatial geometric distances between the feature points. The analysis module applies surface deformation to the face color 3D mesh by the finite element method to obtain the 3D spatial distribution feature information of the feature points.
In another embodiment, the connection relations are the dynamic connection relation information of various combinations of the feature points. The analysis module obtains face shape information by wavelet-transform texture analysis, then extracts the face shape information that is invariant under slight expression changes and encodes it as face shape model parameters to obtain the 3D spatial distribution feature information of the feature points.
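The five modules could be wired together as in this hypothetical skeleton, where every name, landmark coordinate, and stand-in computation is invented for illustration; only the module roles (10 through 50) and the data flow between them come from the description.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Face3DPipeline:
    """Hypothetical skeleton mirroring modules 10-50: each stage
    consumes the previous stage's output."""

    def acquire_image(self):                 # image acquisition module 10
        rgb = np.zeros((64, 64, 3))
        depth = np.ones((64, 64))            # stand-in RGBD frame
        return rgb, depth

    def collect_features(self, rgbd):        # collection module 20
        rgb, depth = rgbd
        # invented landmark positions: eyes, nose, mouth
        ys, xs = [20, 20, 32, 44], [24, 40, 32, 32]
        return [(x, y, depth[y, x]) for x, y in zip(xs, ys)]

    def build_mesh(self, pts):               # mesh building module 30
        return np.array(pts, dtype=float)

    def compute(self, mesh):                 # calculation module 40
        # pairwise distances as a stand-in for connection relations
        d = np.linalg.norm(mesh[:, None] - mesh[None, :], axis=-1)
        return mesh, d

    def analyze(self, mesh, conn):           # analysis module 50
        return {"points": mesh, "pairwise": conn}

    def run(self):
        rgbd = self.acquire_image()
        pts = self.collect_features(rgbd)
        mesh = self.build_mesh(pts)
        return self.analyze(*self.compute(mesh))
```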
Refer to Fig. 5, which is a schematic structural diagram of a physical device for acquiring face 3D feature information provided by an embodiment of the present invention. The device of this embodiment can perform the steps in the above method; for related content, please refer to the detailed description of the above method, which will not be repeated here.
The intelligent electronic device includes a processor 61 and a memory 62 coupled with the processor 61.
The memory 62 is used to store the operating system and programs, the acquired RGBD face images, and the computed 3D spatial distribution feature information of the feature points.
The processor 61 is used to obtain an RGBD face image; acquire the feature points of the face from the RGBD face image; build a face color 3D mesh according to the feature points; measure the feature values of the feature points according to the face color 3D mesh and calculate the connection relations between the feature points; and analyze the feature values and the connection relations to obtain the 3D spatial distribution feature information of the feature points.
In the several implementations provided by the present invention, it should be understood that the disclosed device and method may be realized in other ways. For example, the device implementation described above is only schematic; for instance, the division into modules or units is only a division by logical function, and there may be other ways of dividing them in actual realization. Multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections of devices or units through some interfaces, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this implementation.
In addition, the functional units in each implementation of the present invention may be integrated into one processing unit, or each unit may physically exist alone, or two or more units may be integrated into one unit. The integrated unit may be realized either in the form of hardware or in the form of a software functional unit.
If the integrated unit is realized in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions used to make a computer device (which may be a personal computer, a server, a network device, etc.) or a processor perform all or part of the steps of the methods of each implementation of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disk.
In summary, the present invention obtains the 3D spatial distribution feature information of facial feature points for application to face recognition. Because the 3D spatial distribution feature information includes both color information and depth information, the face information is more comprehensive. Moreover, a face skeleton can be built from the 3D spatial distribution feature information and recognition performed on that skeleton, so non-geometric changes such as face pose, expression, illumination, facial makeup, and gains or losses in facial weight will not affect the face recognition, and the identification of the face can therefore be more accurate.
The foregoing describes only implementations of the present invention and does not thereby limit the scope of the claims of the invention. Any equivalent structure or equivalent process transformation made using the contents of the description and drawings of the invention, or any direct or indirect use in other related technical fields, is likewise included within the scope of patent protection of the present invention.
Claims (10)
1. A method for acquiring face 3D feature information, characterized by comprising the following steps:
obtaining an RGBD face image;
acquiring the feature points of the face from the RGBD face image;
building a face color 3D mesh according to the feature points;
measuring the feature values of the feature points and calculating the connection relations between the feature points according to the face color 3D mesh;
analyzing the feature values and the connection relations to obtain the 3D spatial distribution feature information of the feature points.
2. The method according to claim 1, characterized in that in the step of calculating, according to the face color 3D mesh, the feature values of the feature points and the connection relations between the feature points, the connection relations are the topological connection relations and spatial geometric distances between the feature points;
in the step of obtaining the 3D spatial distribution feature information of the feature points according to the feature values and the connection relations, the 3D spatial distribution feature information of the facial feature points is obtained by applying surface deformation to the face color 3D mesh.
3. The method according to claim 1, characterized in that in the step of calculating, according to the face color 3D mesh, the feature values of the feature points and the connection relations between the feature points, the connection relations are the dynamic connection relation information of various combinations of the feature points;
in the step of obtaining the 3D spatial distribution feature information of the feature points according to the feature values and the connection relations, the 3D spatial distribution feature information of the facial feature points is obtained by obtaining face shape information.
4. The method according to claim 2 or 3, characterized in that in the step of acquiring the feature points of the face from the RGBD face image, the feature points are collected by collecting facial elements, wherein the facial elements include: one or more of the eyebrows, eyes, nose, mouth, cheeks, and chin.
5. The method according to claim 4, characterized in that the feature values include one or more of position, distance, shape, size, angle, radian, and curvature.
6. A device for acquiring face 3D feature information, characterized by comprising:
an image acquisition module, for obtaining RGBD face images;
a collection module, connected with the image acquisition module, for acquiring the feature points of the face from the RGBD face image;
a mesh building module, connected with the collection module, for building a face color 3D mesh according to the feature points;
a calculation module, connected with the mesh building module, for measuring the feature values of the feature points according to the face color 3D mesh and calculating the connection relations between the feature points;
an analysis module, connected with the calculation module, for analyzing the feature values and the connection relations to obtain the 3D spatial distribution feature information of the feature points.
7. The device according to claim 6, characterized in that the connection relations are the topological connection relations and spatial geometric distances between the feature points;
the analysis module obtains the 3D spatial distribution feature information of the facial feature points by applying surface deformation to the face color 3D mesh.
8. The device according to claim 6, characterized in that the connection relations are the dynamic connection relation information of various combinations of the feature points;
the analysis module obtains the 3D spatial distribution feature information of the facial feature points by obtaining face shape information.
9. The device according to claim 7 or 8, characterized in that the collection module collects the feature points by collecting facial elements, wherein the facial elements include: one or more of the eyebrows, eyes, nose, mouth, cheeks, and chin.
10. The device according to claim 9, characterized in that the feature values include one or more of position, distance, shape, size, angle, radian, and curvature.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611036376.3A CN106778491B (en) | 2016-11-14 | 2016-11-14 | The acquisition methods and equipment of face 3D characteristic information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106778491A true CN106778491A (en) | 2017-05-31 |
CN106778491B CN106778491B (en) | 2019-07-02 |
Family
ID=58971120
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611036376.3A Active CN106778491B (en) | 2016-11-14 | 2016-11-14 | The acquisition methods and equipment of face 3D characteristic information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106778491B (en) |
Non-Patent Citations (1)
Title |
---|
Fu Zehua: "Three-Dimensional Face Modeling and Normalization Based on RGB-D Data", Chinese Master's Theses Full-text Database, Information Science and Technology, 2016, No. 01, pp. I138-516 * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107481186A (en) * | 2017-08-24 | 2017-12-15 | 广东欧珀移动通信有限公司 | Image processing method, device, computer-readable recording medium and computer equipment |
CN107481186B (en) * | 2017-08-24 | 2020-12-01 | Oppo广东移动通信有限公司 | Image processing method, image processing device, computer-readable storage medium and computer equipment |
CN108629278A (en) * | 2018-03-26 | 2018-10-09 | 深圳奥比中光科技有限公司 | The system and method that information security is shown is realized based on depth camera |
CN108888487A (en) * | 2018-05-22 | 2018-11-27 | 深圳奥比中光科技有限公司 | A kind of eyeball training system and method |
CN113033387A (en) * | 2021-03-23 | 2021-06-25 | 金哲 | Intelligent assessment method and system for automatically identifying chronic pain degree of old people |
Also Published As
Publication number | Publication date |
---|---|
CN106778491B (en) | 2019-07-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106778468B (en) | 3D face identification method and equipment | |
CN106778586B (en) | Off-line handwritten signature identification method and system | |
US9842247B2 (en) | Eye location method and device | |
CN105956582B (en) | A kind of face identification system based on three-dimensional data | |
CN103632132B (en) | Face detection and recognition method based on skin color segmentation and template matching | |
CN104834922B (en) | Gesture identification method based on hybrid neural networks | |
CN106778474A (en) | 3D human body recognition methods and equipment | |
CN102592136B (en) | Three-dimensional human face recognition method based on intermediate frequency information in geometry image | |
CN106778489A (en) | The method for building up and equipment of face 3D characteristic identity information banks | |
CN101819628B (en) | Method for performing face recognition by combining rarefaction of shape characteristic | |
CN108182397B (en) | Multi-pose multi-scale human face verification method | |
CN106599785B (en) | Method and equipment for establishing human body 3D characteristic identity information base | |
CN101540000B (en) | Iris classification method based on texture primitive statistical characteristic analysis | |
CN102270308B (en) | Facial feature location method based on five sense organs related AAM (Active Appearance Model) | |
CN106611158A (en) | Method and equipment for obtaining human body 3D characteristic information | |
CN102799872B (en) | Image processing method based on face image characteristics | |
CN106778491B (en) | The acquisition methods and equipment of face 3D characteristic information | |
CN105654035B (en) | Three-dimensional face identification method and the data processing equipment for applying it | |
Biswas et al. | A new approach of iris detection and recognition | |
CN109886091A (en) | Three-dimensional face expression recognition methods based on Weight part curl mode | |
CN110532915B (en) | Three-dimensional face shielding discrimination method based on normal vector azimuth local entropy | |
CN112541897A (en) | Facial paralysis degree evaluation system based on artificial intelligence | |
Pathak et al. | Multimodal eye biometric system based on contour based E-CNN and multi algorithmic feature extraction using SVBF matching | |
CN105404883B (en) | A kind of heterogeneous three-dimensional face identification method | |
CN105590107B (en) | A kind of face low-level image feature construction method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP01 | Change in the name or title of a patent holder | ||
Address after: 518057 Guangdong city of Shenzhen province Nanshan District Hing Road three No. 8 China University of Geosciences research base in building A808 Patentee after: Obi Zhongguang Technology Group Co., Ltd Address before: 518057 Guangdong city of Shenzhen province Nanshan District Hing Road three No. 8 China University of Geosciences research base in building A808 Patentee before: SHENZHEN ORBBEC Co.,Ltd. |