CN110032927A - Face recognition method - Google Patents

Face recognition method

Info

Publication number
CN110032927A
CN110032927A (application CN201910146378.5A)
Authority
CN
China
Prior art keywords
image
dimension
texture
model
illumination
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910146378.5A
Other languages
Chinese (zh)
Inventor
骞志彦
王国强
陈学伟
张斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sight Margin (Shanghai) Intelligent Technology Co Ltd
Original Assignee
Sight Margin (Shanghai) Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sight Margin (Shanghai) Intelligent Technology Co Ltd
Priority to CN201910146378.5A
Publication of CN110032927A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Abstract

The present invention provides a face recognition method, belonging to the technical field of image processing and pattern recognition. The face recognition method of the invention comprises: constructing 3D face model texture data using a 3D model and 2D face image data; performing a projective transformation on the 3D face model texture data to obtain a 2D texture image; parameterizing the 2D texture image to obtain a UV image; performing illumination normalization on the UV image; and recognizing the illumination-normalized UV image to obtain a similarity score. The method proposes a new 3-dimensional-to-2-dimensional (3D-2D) recognition framework that uses 3D data for enrollment while requiring only 2D data for recognition; it achieves a high recognition rate and is more discriminative and robust to varying acquisition conditions.

Description

Face recognition method
Technical field
The present invention belongs to the technical field of image processing and pattern recognition, and in particular relates to a face recognition method.
Background art
Face recognition (FR) has long been a key topic in computer vision, pattern recognition, and machine learning research, drawing on perception, behavior, and social principles. In parallel, FR technology continues to evolve in sensors, algorithms, databases, and evaluation frameworks. The growing interest in it is driven partly by hard research challenges (a complex, within-class object recognition problem) and partly by the many applications involving identity management. Current research challenges include: (i) separating intrinsic from extrinsic appearance variation; (ii) developing discriminative representations and similarity measures; (iii) discovering performance invariants across heterogeneous data and conditions. On the application side, the face is becoming a powerful biometric, a source of high-level semantics for content-based indexing and retrieval, and a natural, rich communication channel for human-computer interaction. Existing face recognition frameworks vary by methodology (e.g., data-driven / model-based / perceptual) or by face data domain (e.g., images / point clouds / depth maps).
Existing 2D-2D face recognition techniques suffer from deficiencies in accuracy, pass rate, and recognition speed, and they are affected by objective conditions such as illumination, pose angle, and image sharpness, which degrade the final decision. Several methods have been proposed to improve matching accuracy, but these prior-art methods have many limitations.
Summary of the invention
The present invention provides a face recognition method to overcome the deficiencies of existing 2D-2D face recognition techniques in accuracy, pass rate, and recognition speed.
To solve the above technical problem, the present invention provides a face recognition method, comprising:
constructing 3D face model texture data using a 3D model and 2D face image data;
performing a projective transformation on the 3D face model texture data to obtain a 2D texture image;
parameterizing the 2D texture image to obtain a UV image;
performing illumination normalization on the UV image;
recognizing the illumination-normalized UV image to obtain a similarity score.
According to an embodiment of the present invention, after the step of recognizing the illumination-normalized UV image to obtain a similarity score, the method further comprises:
normalizing the similarity score.
According to another embodiment of the present invention, the 3D model is an AFM construction model, the surface parameterization of the AFM construction model being an injective function:
h: M → U    (1)
where M ⊂ R³ is the model surface (R³ denoting the space of 3-dimensional vectors) and U is the 2D image grid.
According to another embodiment of the present invention, the step of performing a projective transformation on the 3D face model texture data to obtain a 2D texture image comprises:
performing a linear mapping on the 3D face model texture data under a perspective projection model to obtain a 2D texture image;
performing re-projection error minimization on the 2D texture image.
According to another embodiment of the present invention, the step of parameterizing the 2D texture image to obtain a UV image comprises:
performing model parameterization on the 2D texture image to obtain a UV image;
removing spurious points from the UV image.
According to another embodiment of the present invention, the step of performing illumination normalization on the UV image comprises:
analyzing and determining a skin reflectance model of the UV image;
constructing a texture lighting model of the UV image according to the skin reflectance model;
performing illumination normalization on the UV image using the texture lighting model.
In another aspect, the present invention further provides a face recognition device, comprising:
a 3D model construction module, configured to construct 3D face model texture data using a 3D model and 2D face image data;
a projective transformation module, configured to perform a projective transformation on the 3D face model texture data to obtain a 2D texture image;
a parameterization module, configured to parameterize the 2D texture image to obtain a UV image;
an illumination processing module, configured to perform illumination normalization on the UV image;
a recognition module, configured to recognize the illumination-normalized UV image to obtain a similarity score.
According to an embodiment of the present invention, the projective transformation module comprises:
a perspective projection unit, configured to perform a linear mapping on the 3D face model texture data under a perspective projection model to obtain a 2D texture image;
a re-projection error minimization unit, configured to perform re-projection error minimization on the 2D texture image.
According to another embodiment of the present invention, the parameterization module comprises:
a parameterization unit, configured to perform model parameterization on the 2D texture image to obtain a UV image;
a removal unit, configured to remove spurious points from the UV image.
According to another embodiment of the present invention, the illumination processing module comprises:
a skin reflectance model analysis unit, configured to analyze and determine the skin reflectance model of the UV image;
a texture lighting model construction unit, configured to construct the texture lighting model of the UV image according to the skin reflectance model;
an illumination normalization unit, configured to perform illumination normalization on the UV image using the texture lighting model.
Beneficial effects of the present invention:
The face recognition method of the present invention proposes a new 3-dimensional-to-2-dimensional (3D-2D) recognition framework: the framework uses 3D data for enrollment while requiring only 2D data for recognition. The method achieves a high recognition rate and is more discriminative and robust to varying conditions.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. Apparently, the accompanying drawings described below show only some embodiments of the present invention; persons of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of an embodiment of the face recognition method of the present invention;
Fig. 2 is a schematic flowchart of an embodiment of step 200 of the face recognition method of the present invention;
Fig. 3 is a schematic flowchart of an embodiment of step 300 of the face recognition method of the present invention;
Fig. 4 is a schematic flowchart of an embodiment of step 400 of the face recognition method of the present invention;
Fig. 5 is a schematic structural diagram of an embodiment of the face recognition device of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, an embodiment of the present invention provides a face recognition method, comprising:
Step 100: constructing 3D face model texture data using a 3D model and 2D face image data;
Here, the 3D face model texture data are reference (gallery) face model data, obtained with the 3D model, that correspond to the 2D face image.
Step 200: performing a projective transformation on the 3D face model texture data to obtain a 2D texture image;
Here, the 2D texture image is a two-dimensional image that does not yet carry color information.
Step 300: parameterizing the 2D texture image to obtain a UV image;
Here, the UV image is a two-dimensional image that contains color and other image information.
Step 400: performing illumination normalization on the UV image;
Step 500: recognizing the illumination-normalized UV image to obtain a similarity score.
As an example, as shown in Fig. 1, the embodiment of the present invention further comprises, after step 500:
Step 600: normalizing the similarity score.
Optionally, in step 100 of this embodiment the 3D model is an AFM (Annotated Face Model) construction model; the surface parameterization of the AFM construction model is an injective function:
h: M → U    (1)
where M ⊂ R³ is the model surface (R³ denoting the space of 3-dimensional vectors) and U is the regularly sampled 2D image grid.
In step 100 of this embodiment, the facial data template and recording frame are provided by the AFM construction model. The original polygonal surface M is mapped to a regularly sampled, fixed-size 2-dimensional (2D) grid U, referred to as a geometry image. Any model fitted to 3-dimensional (3D) data inherits this predefined parameterization and the same geometry-image grid. By associating texture values with model points, a texture image can be constructed in the UV coordinate space, providing a universal reference frame for local facial features. Texture images in UV space are aligned by construction (being geometry images) and registered (owing to the regularity of the grid), and they can be compared using local similarity measures. For a 3D model with registered 2D texture data, the image value at each UV coordinate is obtained from the texel of the closest 3D point; the same method performs automatic 3D-2D registration for both gallery and probe textures.
As another example, as shown in Fig. 2, step 200 of the embodiment of the present invention includes:
Step 201: performing a linear mapping on the 3D face model texture data under a perspective projection model to obtain a 2D texture image;
Step 202: performing re-projection error minimization on the 2D texture image.
Specifically, step 200 first registers the fitted 3D model to the image plane by estimating a perspective projection transformation, which involves a 3D rigid transformation (rotation and translation) and a 2D projection. In the most general case, the two modalities may even show different faces. During enrollment, known camera parameters can be used to register the acquired 3D and 2D data jointly. When a 2D probe must be registered to a 3D gallery model, recognition may involve uncontrolled face rotations and different identities; the relative pose difference between the modalities is corrected by explicitly estimating the given 3D-2D projection.
Before model fitting, the 3D pose is accounted for by rigidly aligning the raw data with the AFM. A similarity transformation A in R³ is estimated by the iterative closest point algorithm so that the model points align with the closest 3D surface points. Robustness to extreme poses is obtained with a correspondence-based alignment initialization, which maps the 3D data into the canonical model pose coordinate system. After fitting, the deformed model vertices can be mapped back to the original space:
V̂ = A⁻¹ · Ṽ    (2)
where Ṽ is the column-wise matrix of the points in homogeneous coordinates and A is a 4 × 4 matrix. This bijective mapping between the original 3D data and the model pose allows 3D surface points v to be used to establish correspondences with 2D image points x. The relative pose of the 3D and 2D faces is then obtained by estimating a perspective projection from a correspondence set {(v_i, x_i)}.
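To make the mapping of equation (2) concrete, the following is a minimal illustrative sketch (not taken from the disclosure; NumPy conventions and all names such as to_raw_space are assumptions): it carries fitted model vertices back to the raw scan's coordinate system and pairs annotated vertices with 2D landmarks to form the correspondence set {(v_i, x_i)}.

```python
import numpy as np

def to_raw_space(V_model, A):
    """Map fitted AFM vertices (m,3) back to the raw data space via the
    inverse of the 4x4 similarity transform A (equation (2))."""
    Vh = np.hstack([V_model, np.ones((len(V_model), 1))])  # homogeneous coordinates
    raw = (np.linalg.inv(A) @ Vh.T).T
    return raw[:, :3] / raw[:, 3:4]

def correspondences(V_model, A, landmark_idx, landmarks_2d):
    """Build the 3D-2D correspondence set used for projection estimation."""
    V_raw = to_raw_space(V_model, A)
    return V_raw[landmark_idx], np.asarray(landmarks_2d, float)
```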
Perspective projection estimation: the basic assumption is that the 2D face image is generated by observing some facial mesh. Points on the 3D surface are mapped linearly onto the 2D image plane by a perspective projection composed of an extrinsic (pose) component and an intrinsic (camera) component, ignoring nonlinear lens distortion. Under the general perspective projection model, the linear 3D-to-2D mapping P ∈ R^(3×4) is given by:
P = K · E    (3)
where K is the 3 × 3 matrix of intrinsic parameters and E is the 3 × 4 matrix of extrinsic parameters, which can be written further in terms of a translation vector t, a rotation matrix R, and a scale s (the scale being ambiguous for a perspective model). A point ṽ is mapped to the point x̃ = P · ṽ in homogeneous coordinates. Solving for P amounts to estimating the entries of the 3 × 4 projection matrix that maps a set of 3D points to the 2D image positions X through a linear transformation:
X̃ = P · Ṽ    (4)
where X = (x₁, ..., x_l) and Ṽ are the column-wise matrices of the points in homogeneous coordinates. The system involves 11 degrees of freedom, because for any scalar s the matrix sP yields the same set of projections on the image. From facial landmarks annotated in 2D and 3D, a set of 3D-2D point correspondences {(v_i, x_i)} is established; with a reference set of l = 9 points, this yields an overdetermined system that satisfies (4) up to some small error.
Landmark re-projection error minimization is then carried out: the estimation of the projective transformation (4) is formulated as a 3D-2D landmark re-projection error minimization, the re-projection error being the error with which the projections of the 3D points through P approximate the 2D points. P is estimated by solving a least-squares approximation problem over all reference points using squared differences:
P̂ = argmin_P Σᵢ ‖x̃ᵢ − P · ṽᵢ‖²    (5)
Here the objective function is parameterized with respect to the projection matrix P (i.e., the variable set {(P)_j, j = 1, ..., 12}) rather than the individual camera and pose parameters. The minimization problem is solved by an iterative optimization procedure; in this case, the Levenberg-Marquardt (LM) algorithm, initialized with the direct linear transformation (DLT) algorithm, gives an efficient and close approximation of P given accurate point correspondences. For invariance to similarity transformations, the point sets are normalized with respect to position (origin) and average distance from their centroid (scale).
Minimizing over P corresponds to a coupled estimation of the pose and camera parameters matching any 3D and 2D data of unknown ground truth. The estimate can be further decomposed into the intrinsic parameter matrix K and the extrinsic matrix E, which expresses the relative orientation of the camera frame (equation 3). However, the projection matrix P itself is sufficient for obtaining a texture image with the 3D model, because the decomposition into the individual parameter matrices is generally not unique.
The accuracy of the projection depends on the number of 3D and 2D landmarks and on the precision of their localization and correspondence. To register the 3D model to an image of the same subject under unknown imaging conditions, the method must handle a wide range of head pose variations. Iterative refinement of the initial approximate solution provides robustness even when estimating from few correspondences; with five or even four landmark points, visually consistent textures are produced for approximately frontal poses, which can be verified by visualizing the projected 3D model points and mesh on the 2D image.
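As an illustration of the DLT-initialized Levenberg-Marquardt estimation described above, here is a self-contained sketch (an assumed reading of equations (4)-(5), not the patent's reference implementation; NumPy/SciPy and all function names are illustrative). In practice the point sets would first be normalized with respect to origin and centroid distance, as noted above; that step is omitted for brevity.

```python
import numpy as np
from scipy.optimize import least_squares

def dlt_projection(V, X):
    """DLT: V (l,3) 3D landmarks, X (l,2) 2D landmarks -> P (3,4), l >= 6."""
    rows = []
    for (x, y, z), (u, v) in zip(V, X):
        rows.append([x, y, z, 1, 0, 0, 0, 0, -u*x, -u*y, -u*z, -u])
        rows.append([0, 0, 0, 0, x, y, z, 1, -v*x, -v*y, -v*z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    return Vt[-1].reshape(3, 4)           # null-space solution (up to scale)

def reprojection_residuals(p, V, X):
    """Residuals of equation (5): projected 3D landmarks minus 2D landmarks."""
    P = p.reshape(3, 4)
    Vh = np.hstack([V, np.ones((len(V), 1))])
    proj = Vh @ P.T
    proj = proj[:, :2] / proj[:, 2:3]     # perspective divide
    return (proj - X).ravel()

def estimate_projection(V, X):
    P0 = dlt_projection(V, X)             # DLT initialization
    res = least_squares(reprojection_residuals, P0.ravel(),
                        args=(V, X), method="lm")   # LM refinement
    P = res.x.reshape(3, 4)
    return P / np.linalg.norm(P)          # fix the scale ambiguity (sP ~ P)
```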
As another example, as shown in Fig. 3, step 300 of the embodiment of the present invention includes:
Step 301: performing model parameterization on the 2D texture image to obtain a UV image;
Step 302: removing spurious points from the UV image.
Specifically, step 300 in this embodiment is as follows: given a 3D-model/2D-image pair (M, I) and the estimated projective transformation P, a texture image T(u), u ∈ U, is generated (lifted) from the image values I(x), x ∈ X, and the registered model vertices. The process is analogous to extracting a UV image from a texture already co-registered with the model: through the UV parameterization, the texture value at each geometry-image position is assigned the image value at the re-projected position of the corresponding model vertex.
The projection of the model points is obtained as a cascade of two transformations: the 3D transformation of equation (2) maps the deformed model to the 3D data coordinate system, followed by the perspective projection P from the minimization (5):
X̃ = P · A⁻¹ · Ṽ    (6)
where X and Ṽ are the matrices of the model vertices in image space and in 3D space, respectively. At the UV coordinate corresponding to a model point v, the value of T is obtained from the value of the image I at the re-projected position x ∈ X:
T(h(v)) = I(x)    (7)
where h: M → U is the model parameterization of equation (1). Values of T at grid positions u that do not correspond to any model vertex are assigned by interpolation over the projected model triangulation.
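The texture lifting of equations (6)-(7) might be sketched as follows (illustrative only; assumes NumPy, per-vertex UV coordinates uv produced by the parameterization h, and nearest-neighbor assignment in place of the triangulation-based interpolation):

```python
import numpy as np

def lift_texture(I, V_model, A, P, uv, grid=256):
    """Assign T(h(v)) = I(x) with x = P * inv(A) * v (equations (6)-(7))."""
    Vh = np.hstack([V_model, np.ones((len(V_model), 1))])
    x = P @ (np.linalg.inv(A) @ Vh.T)                  # cascade of equation (6)
    x = (x[:2] / x[2]).T                               # pixel positions (l,2)
    cols = np.clip(np.round(x[:, 0]).astype(int), 0, I.shape[1] - 1)
    rows = np.clip(np.round(x[:, 1]).astype(int), 0, I.shape[0] - 1)
    u = np.clip((uv * (grid - 1)).round().astype(int), 0, grid - 1)
    T = np.zeros((grid, grid, 3), dtype=I.dtype)
    T[u[:, 1], u[:, 0]] = I[rows, cols]                # T(h(v)) = I(x)
    # Grid positions between vertices would be filled by interpolating over
    # the projected triangulation, as described above (omitted here).
    return T
```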
Two kinds of self-occlusion, due to the 3D pose and to the 2D pose, affect the generated texture: regions of the model surface occluded along the camera viewing direction (invisible 3D triangles), and facial regions invisible in the 2D image plane. In the occluded facial regions the re-projected mesh is not injective, and mapping the surface triangulation onto the image region produces overlapping 2D triangles; the same image value would then be assigned to the corresponding texture points of both the visible and the occluded triangles.
Spurious points caused by occlusion are excluded from subsequent processing by computing a visibility map. To determine 2D visibility, a depth-buffering method is applied that tracks the depth values of the points projected onto the image plane. The value of the target image at x is assigned by equation (7) to the point u whose corresponding 3D point has the minimum depth value (the point closest to the camera). The visibility map is the indicator function:
V(u_i) = 1 if z(v_i) ≤ z(v_j) for all j with x_j = x_i, and V(u_i) = 0 otherwise    (8)
where x_i, x_j are the re-projections given by equation (6) and z(v) = e_z^T · v is the depth coordinate in R³ (e_z being the depth-axis unit vector). Conversely, the invisibility map is the indicator function of the points u that compete for the same image value with other points of smaller depth. An additional 3D visibility map can be estimated in UV by excluding the 3D points whose transformed surface normals point away from the viewpoint: if e_z^T · E · ñ(u) > 0 then V(u) = 0, where E is the extrinsic matrix of the 3D-2D projection of equation (3).
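A depth-buffered visibility map in the spirit of equation (8) might look like this minimal sketch (assumptions: NumPy; x are the re-projected pixel positions of equation (6) and z the camera-space depths; occlusion by interpolated triangles is ignored):

```python
import numpy as np

def visibility_map(x, z, image_shape):
    """Per-vertex visibility: 1 iff the vertex is the nearest one at its pixel."""
    h, w = image_shape
    pix = np.round(x).astype(int)
    pix[:, 0] = np.clip(pix[:, 0], 0, w - 1)
    pix[:, 1] = np.clip(pix[:, 1], 0, h - 1)
    flat = pix[:, 1] * w + pix[:, 0]        # one depth bucket per pixel
    zbuf = np.full(h * w, np.inf)
    np.minimum.at(zbuf, flat, z)            # keep the smallest depth per pixel
    return z <= zbuf[flat] + 1e-9           # visible iff at the buffer depth
```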
As another example, as shown in Fig. 4, step 400 of the embodiment of the present invention includes:
Step 401: analyzing and determining a skin reflectance model of the UV image;
Step 402: constructing a texture lighting model of the UV image according to the skin reflectance model;
Step 403: performing illumination normalization on the UV image using the texture lighting model.
Specifically, step 400 in this embodiment is as follows: a pair of textures is normalized to common lighting conditions by applying an optimized illumination transfer without an explicit albedo estimate. The proposed relighting algorithm operates on textures in the UV space of the AFM and minimizes their element-wise illumination differences. The analysis uses an analytical skin reflectance model (ASRM) in the form of a hybrid bidirectional reflectance distribution function (BRDF). The approach has fewer constraints and limitations than existing methods: (i) it makes no assumptions about the source face image (viewpoint, registration), because the UV representation inherits the 3D-2D registration; (ii) it makes no assumptions about the light sources (number, distance, or direction) and works in the presence of shadows or specular reflections; (iii) it relies on minimal input and no light-calibration information (i.e., a probe-gallery pair of texture images and the fitted 3D model); (iv) unlike other approaches, it requires only local lighting variations; (v) it involves a single optimization step that minimizes a joint error, rather than solving two separate inverse problems to estimate each texture's albedo independently; (vi) it is bidirectional (the roles of source and target textures are interchangeable); and (vii) it uses the same 3D model both for pose and texture normalization and for obtaining surface-normal estimates.
Texture lighting model: the texture image T(u) in UV space, u ∈ U, is modeled as lighting applied to the unknown facial albedo B(u):
T(u) = L_s(u) + (L_d(u) + L_a(u)) · B(u)    (9)
where L_d(u) and L_s(u) are the diffuse and specular components (assuming white specular highlights) and L_a(u) is the ambient lighting component. A pair of texture images can be normalized in two distinct ways: by estimating the albedo components (de-lighting) or by transferring the lighting parameters (relighting). Solving equation (9) for the albedo, B(u) = (T(u) − L_s(u)) / (L_d(u) + L_a(u)), requires estimating the lighting components. In this work, the lifted textures are relighted without a prior albedo estimate.
Analytical skin reflectance model: the analytical BRDF uses diffuse and specular components and ignores subsurface scattering. The diffuse component L_d follows the basic Lambertian BRDF model, which assumes a surface equally bright from all directions: for a single light source of intensity L, the intensity at a surface point is proportional to the cosine of the angle θ(u) between the surface normal and the incident light direction, L_d(u) = L · cos(θ(u)). Specular reflection is accounted for by the Phong BRDF, which models the specular intensity at a surface point as L_s(u) = L · cos^η(φ(u)), where φ(u) is the angle between the view vector and the reflected light and η is a parameter controlling the size of the highlight. The variation of specular properties across different facial regions is incorporated through a specular map annotated on the AFM.
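The two reflectance terms can be evaluated per texel as in the following sketch (illustrative; assumes NumPy, unit surface normals n, a unit light direction l, a unit view direction v, and a single white light of intensity L):

```python
import numpy as np

def lambert_phong(n, l, v, L=1.0, eta=16.0):
    """Diffuse Ld = L*cos(theta) and specular Ls = L*cos^eta(phi) of eq. (9)."""
    cos_theta = np.clip(n @ l, 0.0, 1.0)      # angle between normal and light
    Ld = L * cos_theta
    r = 2.0 * cos_theta[:, None] * n - l      # reflected light direction
    cos_phi = np.clip(r @ v, 0.0, 1.0)        # angle between view and reflection
    Ls = L * cos_phi ** eta                   # eta controls highlight size
    return Ld, Ls
```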
The model in equation (9) can be written in terms of the parameters of the analytical model. The texture intensity is modeled independently in each channel of the RGB or hue-saturation-intensity (HSI) space, and the light color is assumed to be white, a reasonable approximation for face images acquired indoors or under controlled conditions. Moreover, multiple point light sources are aggregated by summing their respective BRDF functions:
L_d(u) = Σ_k L_k · cos(θ_k(u)),  L_s(u) = Σ_k L_k · cos^η(φ_k(u))    (10)
Optimizing the illumination transfer: the parameters of the lighting model are, for each light source and color channel, the light position (on a sphere around the model centroid) and the parameters of the reflectance components, such as the highlight exponent η. Minimizing the difference in illumination conditions between two textures is formulated as an optimization problem over two groups of lights: one for removing the illumination from the source texture, and another for adding the illumination of the target texture. The scheme is based on minimizing the approximation error between textures of identical albedo but different illumination conditions:
λ̂ = argmin_λ Σ_u ‖ (T(u) − L_s(u)) / (L_d(u) + L_a(u)) − (T′(u) − L′_s(u)) / (L′_d(u) + L′_a(u)) ‖²    (11)
where L_a, L_d and L_s are the reflectance components of the source texture T, as given in equation (9), and L′_a, L′_d and L′_s are the reflectance components of the target texture T′; the minimization is defined over the compound vector λ of the lighting parameters of T and T′. The error function can also be interpreted as the variation of the mean intensity of the facial albedo B(u) between the two textures. In practice, the error function is the Euclidean norm of the color-vector values, taken over the RGB channels and with the visibility maps applied as the union of V(u) and V′(u).
The minimum of equation (11) is sought by a global optimization method, using simulated annealing with an adaptive exponential cooling scheme. Multiple light points are considered, three for simulating the illumination of the source. To improve performance under low lighting conditions, the color texture is converted to the HSI color space, and the RGB norm in equation (11) is replaced by a weighted average in which intensity is weighted twice as much as hue and saturation. This improves the synthesized appearance of dark facial regions and increases the similarity scores. The gallery texture is used as the target for both source and probe, both represented in UV under the same mask.
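The optimization of equation (11) might be sketched as follows, using SciPy's dual_annealing as a stand-in for the adaptive simulated-annealing scheme described above, and a reduced lighting model (one Lambertian light plus ambient per group, no specular term); all names and the parameterization are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import dual_annealing

def shade(params, n):
    """params = (azimuth, elevation, intensity, ambient) -> La, Ld per texel."""
    az, el, L, La = params
    light = np.array([np.cos(el)*np.cos(az), np.cos(el)*np.sin(az), np.sin(el)])
    Ld = L * np.clip(n @ light, 0.0, 1.0)       # Lambert term of equation (10)
    return La, Ld

def relight_cost(lam, T_src, T_tgt, n):
    La_s, Ld_s = shade(lam[:4], n)              # lighting removed from the source
    La_t, Ld_t = shade(lam[4:], n)              # lighting added toward the target
    B_src = T_src / (Ld_s + La_s + 1e-6)        # albedo estimate, equation (9)
    B_tgt = T_tgt / (Ld_t + La_t + 1e-6)
    return float(np.linalg.norm(B_src - B_tgt)) # joint error, cf. equation (11)

def transfer_illumination(T_src, T_tgt, normals):
    bounds = [(-np.pi, np.pi), (-np.pi/2, np.pi/2), (0.0, 2.0), (0.0, 1.0)] * 2
    res = dual_annealing(relight_cost, bounds, args=(T_src, T_tgt, normals),
                         maxiter=200, seed=0)
    return res.x                                # optimized compound vector lambda
```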
Another property of the relighting is that it is bidirectional, in the sense that the illumination transfer can proceed in either direction between source and target, and the roles of the probe and gallery textures can be interchanged in the cost function. Visual inspection shows that the relighted textures look natural, facial features are preserved, and the relighting process does not produce noticeable artifacts.
In step 500 of this embodiment, a global similarity score for the pairwise comparison of probe and gallery lifted textures is obtained from the correlation coefficient of image gradient orientations, a measure that is highly insensitive to gross mismatches between the two images. It is particularly well suited to measuring the similarity of varying face data, whether the variation is due to different acquisition conditions or to significant changes in individual appearance.
In addition, in this embodiment the similarity scores can be normalized using standard techniques, for example: Z-score normalization of the 1-to-N scores, N-to-N normalization using distances extracted from the gallery data, and metric multidimensional scaling.
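For the matching and normalization steps, a minimal sketch (assumptions: NumPy; T1 and T2 are single-channel, illumination-normalized UV textures; the method described above uses the correlation of gradient orientations, of which this is only a simple rendering):

```python
import numpy as np

def gradient_orientation_score(T1, T2, eps=1e-8):
    """Mean cosine of the gradient-orientation differences (1 = identical)."""
    g1y, g1x = np.gradient(T1.astype(float))
    g2y, g2x = np.gradient(T2.astype(float))
    d = np.arctan2(g1y, g1x + eps) - np.arctan2(g2y, g2x + eps)
    return float(np.mean(np.cos(d)))

def z_normalize(scores):
    """Z-score normalization of a probe's scores against the gallery."""
    s = np.asarray(scores, float)
    return (s - s.mean()) / (s.std() + 1e-8)
```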
The face recognition method of the embodiment of the present invention proposes a new 3D-2D recognition framework: the framework uses 3D data for enrollment while requiring only 2D data for recognition, and it can also readily be applied to the 2D-3D case. The 3D-assisted algorithm exhibits a very high recognition rate, and the face signature based on the 3D model is more discriminative and robust to condition variations. Compared with asymmetric or heterogeneous recognition methods that map features across different modalities, the developed 3D-2D framework (called UR2D) relies on modality synergy, in which the 3D model is used for registering the 2D image and texture data, which are aligned and subsequently normalized. Compared with previous methods that use fitting for 3D-2D registration, UR2D exploits the 3D shape information both for relighting (through the surface-normal information) and for score computation. Compared with existing multimodal 2D+3D methods, UR2D integrates face data across modalities and across the enrollment and recognition stages in a subject-specific manner. Moreover, unlike existing 3D-aided 2D recognition methods that infer the 3D gallery model from 2D images, UR2D is built on personalized gallery models obtained by fitting the model to actual 3D face data.
In another aspect, as shown in Fig. 5, an embodiment of the present invention further provides a face recognition device, comprising:
a 3D model construction module 10, configured to construct 3D face model texture data using a 3D model and 2D face image data;
a projective transformation module 20, configured to perform a projective transformation on the 3D face model texture data to obtain a 2D texture image;
a parameterization module 30, configured to parameterize the 2D texture image to obtain a UV image;
an illumination processing module 40, configured to perform illumination normalization on the UV image;
a recognition module 50, configured to recognize the illumination-normalized UV image to obtain a similarity score.
Optionally, in this embodiment the face recognition device further includes a normalization module, configured to normalize the similarity score.
As an example, the projective transformation module 20 of the embodiment of the present invention includes:
a perspective projection unit, configured to perform a linear mapping on the 3D face model texture data under a perspective projection model to obtain a 2D texture image;
a re-projection error minimization unit, configured to perform re-projection error minimization on the 2D texture image.
As another example, the parameterization module 30 of the embodiment of the present invention includes:
a parameterization unit, configured to perform model parameterization on the 2D texture image to obtain a UV image;
a removal unit, configured to remove spurious points from the UV image.
As another example, the illumination processing module 40 of the embodiment of the present invention includes:
a skin reflectance model analysis unit, configured to analyze and determine the skin reflectance model of the UV image;
a texture lighting model construction unit, configured to construct the texture lighting model of the UV image according to the skin reflectance model;
an illumination normalization unit, configured to perform illumination normalization on the UV image using the texture lighting model.
This embodiment is the device corresponding to the face recognition method above; since the face recognition method has the technical effects described, the corresponding face recognition device has the corresponding technical effects as well, which are not repeated here.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the invention is not limited to the specific embodiments described, which are merely illustrative rather than restrictive. Under the teaching of the present invention, persons of ordinary skill in the art may devise many further forms without departing from the purpose of the invention and the scope of the claims, all of which fall within the protection of the invention.

Claims (10)

1. A face recognition method, characterized by comprising:
constructing 3D face model texture data using a 3D model and 2D face image data;
performing a projective transformation on the 3D face model texture data to obtain a 2D texture image;
parameterizing the 2D texture image to obtain a UV image;
performing illumination normalization on the UV image;
recognizing the illumination-normalized UV image to obtain a similarity score.
2. The face recognition method according to claim 1, characterized in that after the step of recognizing the illumination-normalized UV image to obtain a similarity score, the method further comprises:
normalizing the similarity score.
3. The face recognition method according to claim 1, characterized in that the 3D model is an AFM construction model, the surface parameterization of the AFM construction model being an injective function:
h: M → U
where M ⊂ R³ is the model surface (R³ denoting the space of 3-dimensional vectors) and U is the 2D image grid.
4. The face recognition method according to claim 2, characterized in that the step of performing a projective transformation on the 3D face model texture data to obtain a 2D texture image comprises:
performing a linear mapping on the 3D face model texture data under a perspective projection model to obtain a 2D texture image;
performing re-projection error minimization on the 2D texture image.
5. The face recognition method according to claim 3, characterized in that the step of parameterizing the 2D texture image to obtain a UV image comprises:
performing model parameterization on the 2D texture image to obtain a UV image;
removing spurious points from the UV image.
6. The face recognition method according to claim 4, characterized in that the step of performing illumination normalization on the UV image comprises:
analyzing and determining a skin reflectance model of the UV image;
constructing a texture lighting model of the UV image according to the skin reflectance model;
performing illumination normalization on the UV image using the texture lighting model.
7. A face recognition device, characterized by comprising:
a 3D model construction module, configured to construct 3D face model texture data using a 3D model and 2D face image data;
a projective transformation module, configured to perform a projective transformation on the 3D face model texture data to obtain a 2D texture image;
a parameterization module, configured to parameterize the 2D texture image to obtain a UV image;
an illumination processing module, configured to perform illumination normalization on the UV image;
a recognition module, configured to recognize the illumination-normalized UV image to obtain a similarity score.
8. The face recognition device according to claim 7, characterized in that the projective transformation module comprises:
a perspective projection unit, configured to perform a linear mapping on the 3D face model texture data under a perspective projection model to obtain a 2D texture image;
a re-projection error minimization unit, configured to perform re-projection error minimization on the 2D texture image.
9. The face recognition device according to claim 8, characterized in that the parameterization module comprises:
a parameterization unit, configured to perform model parameterization on the 2D texture image to obtain a UV image;
a removal unit, configured to remove spurious points from the UV image.
10. The face recognition device according to claim 9, characterized in that the illumination processing module comprises:
a skin reflectance model analysis unit, configured to analyze and determine the skin reflectance model of the UV image;
a texture lighting model construction unit, configured to construct the texture lighting model of the UV image according to the skin reflectance model;
an illumination normalization unit, configured to perform illumination normalization on the UV image using the texture lighting model.
CN201910146378.5A 2019-02-27 2019-02-27 Face recognition method Pending CN110032927A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910146378.5A CN110032927A (en) 2019-02-27 2019-02-27 Face recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910146378.5A CN110032927A (en) 2019-02-27 2019-02-27 Face recognition method

Publications (1)

Publication Number Publication Date
CN110032927A true CN110032927A (en) 2019-07-19

Family

ID=67235021

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910146378.5A Pending CN110032927A (en) 2019-02-27 2019-02-27 A kind of face identification method

Country Status (1)

Country Link
CN (1) CN110032927A (en)

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070080967A1 (en) * 2005-10-11 2007-04-12 Animetrics Inc. Generation of normalized 2D imagery and ID systems via 2D to 3D lifting of multifeatured objects
US20080205712A1 (en) * 2007-02-28 2008-08-28 Fotonation Vision Limited Separating Directional Lighting Variability in Statistical Face Modelling Based on Texture Space Decomposition
CN101159015A (en) * 2007-11-08 2008-04-09 清华大学 Two-dimension human face image recognizing method
KR20090093737A (en) * 2008-02-28 2009-09-02 성균관대학교산학협력단 Method and apparatus for illumination normalization of face image
CN101320484A (en) * 2008-07-17 2008-12-10 清华大学 Three-dimensional human face recognition method based on human face full-automatic positioning
CN101561874A (en) * 2008-07-17 2009-10-21 清华大学 Method for recognizing face images
CN101777120A (en) * 2010-01-28 2010-07-14 山东大学 Face recognition image processing method based on sequence characteristics
CN101916454A (en) * 2010-04-08 2010-12-15 董洪伟 Method for reconstructing high-resolution human face based on grid deformation and continuous optimization
CN103052960A (en) * 2010-11-11 2013-04-17 数字光学欧洲有限公司 Rapid auto-focus using classifier chains, mems and/or multiple object focusing
CN103049896A (en) * 2012-12-27 2013-04-17 浙江大学 Automatic registration algorithm for geometric data and texture data of three-dimensional model
CN104376594A (en) * 2014-11-25 2015-02-25 福建天晴数码有限公司 Three-dimensional face modeling method and device
KR20160098581A (en) * 2015-02-09 2016-08-19 홍익대학교 산학협력단 Method for certification using face recognition an speaker verification
US20180046854A1 (en) * 2015-02-16 2018-02-15 University Of Surrey Three dimensional modelling
CN106469465A (en) * 2016-08-31 2017-03-01 深圳市唯特视科技有限公司 A kind of three-dimensional facial reconstruction method based on gray scale and depth information
CN106529486A (en) * 2016-11-18 2017-03-22 深圳市唯特视科技有限公司 Racial recognition method based on three-dimensional deformed face model
CN107247916A (en) * 2017-04-19 2017-10-13 广东工业大学 A kind of three-dimensional face identification method based on Kinect
CN107507263A (en) * 2017-07-14 2017-12-22 西安电子科技大学 A kind of Texture Generating Approach and system based on image
CN107909640A (en) * 2017-11-06 2018-04-13 清华大学 Face weight illumination method and device based on deep learning
CN108109198A (en) * 2017-12-18 2018-06-01 深圳市唯特视科技有限公司 A kind of three-dimensional expression method for reconstructing returned based on cascade
CN111652172A (en) * 2020-06-09 2020-09-11 安徽省徽腾智能交通科技有限公司 Method for improving face recognition efficiency
CN115761844A (en) * 2022-11-03 2023-03-07 江西方兴科技股份有限公司 3D-2D face recognition method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MACHIRAJU L, et al.: "Estimation of 3D Faces and Illumination from Single Photographs Using a Bilinear Illumination Model", Eurographics Symposium on Rendering, 2005 *
晏洁 et al.: "A realistic method for generating 3D virtual specific faces" (具有真实感的三维虚拟特定人脸生成方法), Chinese Journal of Computers (《计算机学报》) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110580715A (en) * 2019-08-06 2019-12-17 武汉大学 Image alignment method based on illumination constraint and grid deformation
CN110580715B (en) * 2019-08-06 2022-02-01 武汉大学 Image alignment method based on illumination constraint and grid deformation
CN112528902A (en) * 2020-12-17 2021-03-19 四川大学 Video monitoring dynamic face recognition method and device based on 3D face model
CN113505717A (en) * 2021-07-17 2021-10-15 桂林理工大学 Online passing system based on face and facial feature recognition technology

Similar Documents

Publication Publication Date Title
Dame et al. Dense reconstruction using 3D object shape priors
Johnson et al. Shape estimation in natural illumination
Miyazaki et al. Transparent surface modeling from a pair of polarization images
Criminisi et al. Single view metrology
US20160246078A1 (en) Process and method for real-time physically accurate and realistic-looking glasses try-on
US20090310828A1 (en) An automated method for human face modeling and relighting with application to face recognition
CN107358648A (en) Real-time full-automatic high quality three-dimensional facial reconstruction method based on individual facial image
Lu et al. Symps: Brdf symmetry guided photometric stereo for shape and light source estimation
Paletta et al. 3D attention: measurement of visual saliency using eye tracking glasses
CN110032927A (en) A kind of face identification method
Toderici et al. Bidirectional relighting for 3D-aided 2D face recognition
JP2001283229A (en) Method for calculating position and direction of object in three-dimensional space
CN114140539A (en) Method and device for acquiring position of indoor object
Wang et al. Dynamic human body reconstruction and motion tracking with low-cost depth cameras
JP4552431B2 (en) Image collation apparatus, image collation method, and image collation program
JP2004086929A5 (en)
CN114137564A (en) Automatic indoor object identification and positioning method and device
CN115761844A (en) 3D-2D face recognition method
Wang et al. Capturing and rendering geometry details for BTF-mapped surfaces
Liang et al. Better together: shading cues and multi-view stereo for reconstruction depth optimization
Liang et al. Monocular depth estimation for glass walls with context: a new dataset and method
Shem-Tov et al. Towards reflectometry from interreflections
Shufelt Projective geometry and photometry for object detection and delineation
Zheng et al. An extended photometric stereo algorithm for recovering specular object shape and its reflectance properties
Wang et al. Real-time rgbd reconstruction using structural constraint for indoor ar

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination