CN106469465A - Three-dimensional face reconstruction method based on gray-scale and depth information - Google Patents

Three-dimensional face reconstruction method based on gray-scale and depth information

Info

Publication number: CN106469465A
Application number: CN201610794122.1A
Authority: CN (China)
Prior art keywords: face, data, dimensional, feature, rigid
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 夏春秋
Original and current assignee: Shenzhen Vision Technology Co Ltd
Application filed by Shenzhen Vision Technology Co Ltd
Priority: CN201610794122.1A; PCT/CN2016/098100 (WO2018040099A1)
Publication: CN106469465A

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T2200/00 — Indexing scheme for image data processing or generation, in general
    • G06T2200/04 — Indexing scheme involving 3D image data
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 — Scenes; Scene-specific elements
    • G06V20/60 — Type of objects
    • G06V20/64 — Three-dimensional objects
    • G06V20/653 — Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 — Detection; Localisation; Normalisation
    • G06V40/165 — Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G06V40/172 — Classification, e.g. identification

Abstract

A three-dimensional face reconstruction method based on gray-scale and depth information is proposed. Its main contents include: recognizing the face gray-scale information; recognizing the face depth information; multi-modal face recognition; matching by a 3D model; and 3D reconstruction of the face. The process locates characteristic regions in the face data, then performs registration and feature extraction from the feature points. The AdaBoost algorithm selects the features most effective for classification; a nearest-neighbor classifier then computes matching scores to realize multi-modal face recognition; finally, face reconstruction is completed by matching a local 3D model. Through the fusion strategy, the present invention effectively improves the performance and efficiency of the face recognition system. Using 3D cascaded regression over a dense 3D point set, the face is fully labeled, so landmark positions do not drift; the problems of inconsistent anchor points under motion and self-occlusion are solved, and the computational cost is reduced. The method is highly versatile and performs well in real time.

Description

Three-dimensional face reconstruction method based on gray-scale and depth information
Technical field
The present invention relates to the technical field of face recognition, and in particular to a three-dimensional face reconstruction method based on gray-scale and depth information.
Background technology
3D face mesh reconstruction can be used for criminal surveillance, reconstructing a face without needing the suspect's fingerprints or identity information; it can also be used for 3D printing, three-dimensional face modeling, animation production and other fields, where its impact is significant. Compared with two-dimensional face recognition, three-dimensional face recognition is robust to illumination and less affected by factors such as pose and expression; therefore, once three-dimensional data acquisition technology developed rapidly and the quality and precision of three-dimensional data improved greatly, many researchers turned to this field.
Face gray-scale images are easily affected by illumination variation, while face depth images are easily affected by acquisition precision and expression changes; these factors limit the stability and accuracy of a face recognition system to some extent. Multi-modal fusion systems have therefore attracted increasing attention. By collecting multi-modal data, such a system can exploit the advantages of each modality and, through a fusion strategy, overcome the inherent weaknesses of single-modality systems (such as the illumination sensitivity of gray-scale images and the expression sensitivity of depth images), effectively improving the performance of the face recognition system.
The present invention obtains a multi-modal system by fusing gray-scale and depth information: two-dimensional gray-scale information and three-dimensional depth information are collected, and the collected feature points are used to reconstruct the facial contour by matching a local 3D model. The fusion strategy overcomes the inherent weaknesses of single-modality systems (such as the illumination sensitivity of gray-scale images and the expression sensitivity of depth images), effectively improving recognition performance and making face recognition more accurate and faster. Using 3D cascaded regression, the landmarks stay consistent as the face moves: by selecting a dense 3D point set the face is fully labeled, landmark positions do not drift, the problems of inconsistent anchor points and self-occlusion are solved, and the computational cost is greatly reduced. The 3D mesh contains no background, so the method is highly versatile and performs well in real time.
Content of the invention
In view of the problems that face gray-scale images are easily affected by illumination variation and face depth images are easily affected by acquisition precision and expression changes, the object of the present invention is to provide a three-dimensional face reconstruction method based on gray-scale and depth information, which obtains a multi-modal system by fusing gray-scale and depth information: two-dimensional gray-scale information and three-dimensional depth information are collected, and the collected feature points are used to reconstruct the facial contour by matching a local 3D model.
To solve the above problems, the present invention provides a three-dimensional face reconstruction method based on gray-scale and depth information, whose main contents include:
(1) recognizing the face gray-scale information;
(2) recognizing the face depth information;
(3) multi-modal face recognition;
(4) matching by a 3D model;
(5) 3D reconstruction of the face.
Wherein said recognizing the face gray-scale information comprises the steps:
(1) Characteristic region location: the eye region is obtained with an eye detector, said eye detector being a cascade classifier H obtained by the following (real AdaBoost) algorithm:
Given the training sample set S = {(x1, y1), …, (xm, ym)}, where xi ∈ χ is a sample vector, yi = ±1 is the class label and m is the total number of samples, initialize the sample probability distribution D1(i) = 1/m.
For t = 1, …, T, each weak classifier performs the following operations:
partition the sample space χ into X1, X2, …, Xn, and on each cell Xj output
h(x) = (1/2) ln((W+j + ε) / (W−j + ε)) for x ∈ Xj,
where W+j (resp. W−j) is the total weight of the positive (resp. negative) samples falling in Xj and ε is a small positive constant;
compute the normalization factor
Z = 2 Σj √(W+j · W−j);
select the weak classifier ht in the weak classifier space that minimizes Z;
update the training sample probability distribution
Dt+1(i) = Dt(i) · exp(−yi ht(xi)) / Zt,
where Zt is the normalization factor that makes Dt+1 a probability distribution.
Finally the strong classifier H is
H(x) = sign(Σt=1…T ht(x)).
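The training loop above can be sketched as follows. This is an illustrative simplification, not part of the patent: one-dimensional features split at a single threshold stand in for the image features, and the sample data in the usage example are invented.

```python
import math

def real_adaboost(samples, labels, thresholds, T=5, eps=1e-6):
    """Real AdaBoost with domain-partitioning weak learners.

    Each candidate weak classifier splits the feature line at one
    threshold into two cells X1, X2 and outputs
    h(x) = 0.5 * ln((W+ + eps) / (W- + eps)) on each cell."""
    m = len(samples)
    D = [1.0 / m] * m                      # initial sample distribution D1(i) = 1/m
    learners = []                          # chosen (threshold, [h_cell0, h_cell1])
    for _ in range(T):
        best = None
        for thr in thresholds:
            # per-cell weighted class masses W+ / W-
            w = {(c, y): 0.0 for c in (0, 1) for y in (1, -1)}
            for Di, x, y in zip(D, samples, labels):
                w[(int(x >= thr), y)] += Di
            h = [0.5 * math.log((w[(c, 1)] + eps) / (w[(c, -1)] + eps))
                 for c in (0, 1)]
            Z = 2.0 * sum(math.sqrt(w[(c, 1)] * w[(c, -1)]) for c in (0, 1))
            if best is None or Z < best[0]:
                best = (Z, thr, h)         # keep the learner minimizing Z
        _, thr, h = best
        learners.append((thr, h))
        # reweight: emphasize samples the chosen learner gets wrong
        D = [Di * math.exp(-y * h[int(x >= thr)])
             for Di, x, y in zip(D, samples, labels)]
        s = sum(D)
        D = [Di / s for Di in D]
    def strong(x):
        return 1 if sum(h[int(x >= thr)] for thr, h in learners) >= 0 else -1
    return strong
```

In a cascade, each layer would be one such strong classifier rejecting part of the non-eye regions.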
(2) Registration is carried out using the obtained eye positions, and the LBP histogram feature is extracted from the registered eye-position data with the LBP algorithm, whose value at a pixel is
LBP(xc, yc) = Σp=0…P−1 s(gp − gc) · 2^p, with s(v) = 1 if v ≥ 0 and 0 otherwise,
where gc is the gray value of the center pixel and gp those of its P neighbors. Feeding this feature into the gray-scale image classifier yields the gray-scale matching score.
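The LBP operator and its histogram can be sketched in a minimal pure-Python form. The 3×3 test image and the unnormalized 256-bin histogram are assumptions of this sketch, not details fixed by the patent:

```python
def lbp_code(img, r, c):
    """8-neighbour LBP code of pixel (r, c): threshold each neighbour
    against the centre value and pack the bits clockwise from top-left."""
    center = img[r][c]
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offs):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin LBP histogram over all interior pixels — the feature
    the gray-scale branch feeds to its classifier."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist
```

In practice the histogram would be computed per image block and the blocks concatenated; the sketch keeps one global histogram for brevity.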
Wherein said recognizing the face depth information comprises the steps:
(1) characteristic region location: locate the nose region of the face;
(2) for three-dimensional data in different poses, after the registration reference region is obtained, register the data with the ICP algorithm, and after registration compute the Euclidean distance between the input data and the three-dimensional face model data in the registry;
(3) obtain the depth image from the depth information, compensate and denoise the noisy points in the mapped depth image with a filter, and finally select the expression-robust region to obtain the final three-dimensional face depth image;
(4) extract the visual dictionary histogram feature vector of the three-dimensional depth image: after the test face image is input and Gabor-filtered, each filter response vector is compared with all primitive words of the visual sub-dictionary corresponding to its position and mapped, by distance matching, to the closest primitive; the visual dictionary histogram feature of the original depth image is then extracted and fed into the depth image classifier to obtain the matching score.
Wherein said multi-modal face recognition uses a multi-modal fusion system that includes multiple data sources, such as two-dimensional gray-scale images and three-dimensional depth images:
(1) for the two-dimensional gray-scale image, first perform feature point detection (eyes), then register using the obtained feature point positions; after the gray-scale image is registered, extract the LBP histogram feature from this data with the LBP algorithm;
(2) for the three-dimensional data, first perform feature point detection (nose tip) and register using the obtained feature points, then map the registered three-dimensional data to a face depth image and extract the visual dictionary histogram feature from this data with the visual dictionary algorithm.
Further, this multi-modal system uses a feature-level fusion strategy: after the features of each data source are obtained, all features are concatenated into a feature pool, and each feature in the pool builds one weak classifier; the AdaBoost algorithm then picks out of the feature pool the features most effective for classification; finally, based on the features obtained by multi-modal feature-level fusion, a nearest-neighbor classifier computes the matching score, thereby realizing multi-modal face recognition.
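The feature-level fusion and nearest-neighbor matching described above can be sketched as follows. This is an illustrative toy: the histogram contents and gallery identifiers are invented, and the score is simply the negated L1 distance so that larger means a better match:

```python
def l1_distance(a, b):
    """L1 (city-block) distance, the metric named in the patent text."""
    return sum(abs(x - y) for x, y in zip(a, b))

def fuse_features(gray_hist, depth_hist):
    """Feature-level fusion: concatenate the per-modality histograms
    into one feature-pool vector."""
    return list(gray_hist) + list(depth_hist)

def nearest_neighbor_match(probe, gallery):
    """Return (best_id, score) over a gallery {id: feature_vector}."""
    best_id, best_d = None, float("inf")
    for pid, feat in gallery.items():
        d = l1_distance(probe, feat)
        if d < best_d:
            best_id, best_d = pid, d
    return best_id, -best_d
```

In the full system, AdaBoost would first select a subset of the pooled features; the sketch matches on the whole concatenated vector for brevity.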
Wherein said matching by a 3D model comprises the steps:
(1) Refining the correspondence by an iterative algorithm.
From the previously collected two-dimensional gray-scale and three-dimensional depth information, the 3D shape is rebuilt from the 2D shape by minimizing the reconstruction error
min ‖z − P·S(p, q)‖²,
where P is the 2D projection matrix, z is the target 2D shape and S(p, q) the modeled 3D shape; the iterative method registers the 3D model on the 2D feature points, establishing the rigid transformation (p = {s, α, β, γ, t}) and the non-rigid transformation (r and s).
Increasing the number of vertices reduces the reconstruction error only weakly while slowing down the regression model and the fitting, so a small number of vertices is taken; increasing the number of iterations reduces the reconstruction error markedly while barely affecting the model size, so a larger number of iterations is taken.
(2) Correction by matrix.
Assuming there is a semantic correspondence between the 2D and 3D feature points, the 2D feature point corresponding to each correct 3D point is selected in matrix form; this semantic correspondence was already established in the modeling phase, and the two-dimensional projections of the 3D landmarks are obtained by cascaded regression.
(3) Constraining the visible landmarks.
Through the processing of the visible-landmark constraint, the cascaded regression evaluates the landmark definition;
ξ = {j | vj = 1} denotes the subset of landmark indices that are visible.
(4) Two-dimensional measurements.
Time-synchronized two-dimensional measurements (z(1), …, z(C)) are entered; all C measurements represent the same three-dimensional face, but observed from different angles. By constraining the reconstruction over all measurements, the above formula is extended:
the superscript (k) denotes the k-th measurement and the visibility is set to ξ(k); since what is observed is the same face, only from different viewpoints, the overall rigid parameters (r) and the partial non-rigid parameters (s) are identical across all measurements.
(5) Determining the rigid and non-rigid parameters.
It is assumed that the rigid structure of the face changes little (parameter r), while the expression changes (parameter s); to handle this, the problem is solved in the time domain:
1) compute the rigid correction parameters: 𝒯 = {z(t) | t = 1, …, T} denotes the set of measurements over time, and r𝒯 denotes the rigid correction parameters computed from 𝒯, with the non-rigid parameters set to 0 in this step;
2) compute the rigid correction parameters frame by frame for t ∈ [1, …, T].
Wherein said 3D reconstruction of the face is based on a parameter vector q with the prior
p(q) ∝ N(q; 0, Λ),
i.e. the parameters follow a normal distribution with mean 0 and covariance Λ. Principal component analysis determines the d-dimensional part of the 3D basis vectors; the rigid and non-rigid parts are then modeled separately:
the d-dimensional part of the 3D basis (θ = [θ1; …; θM] ∈ R^(3M×d)) describes the rigid deformation, and the e-dimensional part of the 3D basis (ψ = [ψ1; …; ψM] ∈ R^(3M×e)) describes the non-rigid deformation.
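The rigid/non-rigid decomposition above amounts to a linear shape model: vertices = mean + θ·r + ψ·s. A minimal sketch, with invented mean, bases and coefficients (a real model would have 3M coordinates and PCA-derived bases):

```python
def reconstruct_vertices(mean, rigid_basis, nonrigid_basis, r, s):
    """Linear 3D shape model: each vertex coordinate is the mean plus
    the rigid-basis columns weighted by r (theta in the text) plus
    the non-rigid columns weighted by s (psi in the text)."""
    out = []
    for i, mu in enumerate(mean):
        v = mu
        v += sum(rigid_basis[i][j] * r[j] for j in range(len(r)))
        v += sum(nonrigid_basis[i][j] * s[j] for j in range(len(s)))
        out.append(v)
    return out
```

Fitting the model then means choosing r and s so the projected vertices minimize the reconstruction error, with the Gaussian prior p(q) ∝ N(q; 0, Λ) regularizing the coefficients.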
Further, said characteristic region location comprises the steps:
(1) thresholding: determine the threshold of the regional average negative effective energy density, defined as thr;
(2) select the data to be processed using depth information: using the depth information of the data, extract the face data within a certain depth range as the data to be processed;
(3) normal vector computation: compute the normal vector information of the face data selected by depth;
(4) compute the regional average negative effective energy density: according to its definition, obtain the connected regions of average negative effective energy density in the data to be processed and select the connected region with the largest density value;
(5) decide whether the nose region is found: when the density of the current region exceeds the predefined thr, this region is the nose region; otherwise return to step (1) and restart the loop.
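Steps (4) and (5) above, selecting the densest connected region and testing it against thr, can be sketched as follows. The region descriptors here are hypothetical placeholders for the connected domains computed from the normal-vector density:

```python
def find_nose_region(regions, thr):
    """Pick the candidate region with the largest average (negative
    effective energy) density; accept it as the nose only if that
    average exceeds thr, otherwise signal that another pass is needed.

    Each region is a dict {"name": ..., "density": [per-point values]}."""
    def avg_density(region):
        return sum(region["density"]) / len(region["density"])
    best = max(regions, key=avg_density)
    return best["name"] if avg_density(best) > thr else None
```

Returning None corresponds to falling back to step (1) and re-running the loop with adjusted parameters.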
Further, the main steps of said ICP algorithm are as follows:
(1) determine the matched data-set pair: select the reference point data set P from the three-dimensional nose data of the reference template;
(2) select the data point set Q in the input three-dimensional face that matches the reference data, using the nearest point-to-point distances;
(3) compute the rigid motion parameters, i.e. the rotation matrix R and the translation vector t: with the orthogonal matrix X obtained from the decomposition, R = X when det(X) = 1, and t = P̄ − R·Q̄, where P̄ and Q̄ are the centroids of the two sets;
(4) judge from the error between the transformed data set RQ + t and the reference set P whether the three-dimensional data are registered; after registration, compute the Euclidean distance between the input data and the three-dimensional face model data in the registry.
Here P and Q are the feature point sets to be matched, each containing N feature points.
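The rigid-motion step (3) can be illustrated in a 2D simplification: with known correspondences, the optimal rotation has a closed form via atan2 instead of the 3D matrix decomposition. The point sets are invented for the example:

```python
import math

def rigid_align_2d(Q, P):
    """One rigid-motion estimation step of ICP, restricted to 2D with
    known correspondences: find R, t minimizing sum ||R q + t - p||^2,
    then return a function applying that transform."""
    n = len(P)
    qcx = sum(q[0] for q in Q) / n; qcy = sum(q[1] for q in Q) / n
    pcx = sum(p[0] for p in P) / n; pcy = sum(p[1] for p in P) / n
    # cross-covariance terms of the centred sets
    sxx = sum((q[0] - qcx) * (p[0] - pcx) + (q[1] - qcy) * (p[1] - pcy)
              for q, p in zip(Q, P))
    sxy = sum((q[0] - qcx) * (p[1] - pcy) - (q[1] - qcy) * (p[0] - pcx)
              for q, p in zip(Q, P))
    theta = math.atan2(sxy, sxx)           # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = pcx - (c * qcx - s * qcy)         # t = P_bar - R * Q_bar
    ty = pcy - (s * qcx + c * qcy)
    return lambda q: (c * q[0] - s * q[1] + tx, s * q[0] + c * q[1] + ty)
```

Full ICP would alternate this estimation with re-selecting nearest-neighbor correspondences until the residual error converges.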
Further, said extraction of the visual dictionary histogram feature vector of the three-dimensional depth image comprises the steps:
1) segment the three-dimensional face depth image into several local texture regions;
2) for each Gabor filter response vector, map it to the vocabulary of its corresponding visual sub-dictionary according to its position, and on this basis build the visual dictionary histogram vector as the feature representation of the three-dimensional face;
3) concatenate the LBP histogram feature of the gray-scale image and the visual dictionary histogram feature of the depth image into a feature pool, and use a feature selection algorithm such as AdaBoost to choose from the pool the feature combination most effective for face recognition, realizing feature-level data fusion;
4) after the face features are obtained, a nearest-neighbor classifier performs the final face recognition, with the L1 distance chosen as the distance metric.
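The mapping of filter responses to their closest dictionary primitives and the resulting histogram (steps 1–2 above) can be sketched as follows; the dictionary and response vectors are invented toy values, and Euclidean distance stands in for whatever metric the dictionary was built with:

```python
def quantize(vec, dictionary):
    """Index of the dictionary primitive closest (Euclidean) to vec."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(dictionary)), key=lambda i: d2(vec, dictionary[i]))

def visual_dictionary_histogram(responses, dictionary):
    """Histogram of primitive assignments over all filter responses —
    the depth branch's feature vector."""
    hist = [0] * len(dictionary)
    for vec in responses:
        hist[quantize(vec, dictionary)] += 1
    return hist
```

In the full method each local texture region has its own sub-dictionary; the sketch uses one dictionary for all positions for brevity.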
Further, for said rigid part, an intermediate frame is selected from each video and principal component analysis determines the basis vectors (θ) and the mean, providing an overall linear subspace that describes the variation of the face shape.
Further, the goal of the linear subspace describing the non-rigid deformation (ψ) is to build a part-based model composed of a set of independently trained PCA models sharing soft boundaries, making neighboring vertices highly correlated and forming dense regions, since such regions are compressed better by PCA; the segmentation is driven by facial expression data. 6000 selected frames of the data set are used: the data set D ∈ R^(6000×3072) consists of 6000 frames and 1024 three-dimensional vertices. D is divided into three subsets Dx, Dy, Dz ∈ R^(6000×1024), each containing one spatial coordinate of the vertices, to describe the correlation between vertices; the correlation matrices computed from Dx, Dy, Dz are normalized and then averaged into one correlation matrix C. Vertices of the same region should also be close to each other on the face surface, so the distances between model vertices are computed to form a distance matrix G, normalized to the range [0, 1]; the two matrices are then integrated into a single matrix.
Brief description
Fig. 1 is a system flow chart of the three-dimensional face reconstruction method based on gray-scale and depth information of the present invention.
Fig. 2 is a schematic diagram of two-dimensional face eye detection of the method of the present invention.
Fig. 3 is a schematic diagram of two-dimensional face LBP features of the method of the present invention.
Fig. 4 is a schematic diagram of two-dimensional face gray-scale image feature extraction of the method of the present invention.
Fig. 5 is a schematic diagram of three-dimensional face nose-tip localization of the method of the present invention.
Fig. 6 is a schematic diagram of three-dimensional face space mapping of the method of the present invention.
Fig. 7 is a schematic diagram of three-dimensional face depth image feature extraction of the method of the present invention.
Fig. 8 is a flow block diagram of the multi-modal face recognition of the method of the present invention.
Fig. 9 is a system block diagram of the multi-modal face recognition of the method of the present invention.
Fig. 10 is a flow chart of matching by a 3D model of the method of the present invention.
Fig. 11 is a graph of the relation between the number of iterations, the number of anchor points and the reconstruction error rate of the method of the present invention.
Fig. 12 is a flow chart of the 3D reconstruction of the face of the method of the present invention.
Fig. 13 is a face reconstruction image of the method of the present invention.
Specific embodiment
It should be noted that, where they do not conflict, the embodiments in this application and the features in the embodiments may be combined with one another; the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a system flow chart of the three-dimensional face reconstruction method based on gray-scale and depth information of the present invention, comprising: recognizing the face gray-scale information; recognizing the face depth information; multi-modal face recognition; matching by a 3D model; and 3D reconstruction of the face.
Fig. 2 is a schematic diagram of two-dimensional face eye detection of the three-dimensional face reconstruction method based on gray-scale and depth information of the present invention. As shown in Fig. 2, the eye region is obtained by an eye detector; this detector is a cascade classifier in which every layer is a strong classifier (e.g. AdaBoost) that filters out part of the non-eye regions, and the image region that finally remains is the eye region. The AdaBoost algorithm may be summarized as follows:
Given the training sample set S = {(x1, y1), …, (xm, ym)}, where xi ∈ χ is a sample vector, yi = ±1 is the class label and m is the total number of samples, initialize the sample probability distribution D1(i) = 1/m.
For t = 1, …, T, each weak classifier performs the following operations:
partition the sample space χ into X1, X2, …, Xn, and on each cell Xj output
h(x) = (1/2) ln((W+j + ε) / (W−j + ε)) for x ∈ Xj,
where W+j (resp. W−j) is the total weight of the positive (resp. negative) samples falling in Xj and ε is a small positive constant;
compute the normalization factor
Z = 2 Σj √(W+j · W−j);
select the weak classifier ht in the weak classifier space that minimizes Z;
update the training sample probability distribution
Dt+1(i) = Dt(i) · exp(−yi ht(xi)) / Zt,
where Zt is the normalization factor that makes Dt+1 a probability distribution.
Finally the strong classifier H is
H(x) = sign(Σt=1…T ht(x)).
Fig. 3 is a schematic diagram of two-dimensional face LBP features of the three-dimensional face reconstruction method based on gray-scale and depth information of the present invention. As shown in Fig. 3, registration is carried out using the obtained eye positions, and the LBP histogram feature is obtained from the eye-position data with the LBP algorithm; its value at a pixel is
LBP(xc, yc) = Σp=0…P−1 s(gp − gc) · 2^p, with s(v) = 1 if v ≥ 0 and 0 otherwise.
Fig. 4 is a schematic diagram of two-dimensional face gray-scale image feature extraction of the present invention. As shown in Fig. 4, the two-dimensional face data are input; eye detection first extracts the key points, and the face image is then adjusted to an upright frontal pose by a rigid transformation according to the eye positions. The LBP histogram feature is extracted from the registered gray-scale image, and feeding this feature into the gray-scale image classifier yields the gray-scale matching score.
Fig. 5 is a schematic diagram of three-dimensional face nose-tip localization of the three-dimensional face reconstruction method based on gray-scale and depth information of the present invention. As shown in Fig. 5, for the three-dimensional data, the nose region of the face is detected first, specifically by the following steps:
(1) thresholding: determine the threshold of the regional average negative effective energy density, defined as thr;
(2) select the data to be processed using depth information: using the depth information of the data, extract the face data within a certain depth range as the data to be processed;
(3) normal vector computation: compute the normal vector information of the face data selected by depth;
(4) compute the regional average negative effective energy density: according to its definition, obtain the connected regions of average negative effective energy density in the data to be processed and select the connected region with the largest density value;
(5) decide whether the nose region is found: when the density of the current region exceeds the predefined thr, this region is the nose region; otherwise return to step (1) and restart the loop.
Fig. 6 is a schematic diagram of three-dimensional face space mapping of the three-dimensional face reconstruction method based on gray-scale and depth information of the present invention. As shown in Fig. 6, registration is carried out using the obtained nose region, and the data are registered with the ICP algorithm; the steps are as follows:
(1) determine the matched data-set pair: select the reference point data set P from the three-dimensional nose data of the reference template;
(2) select the data point set Q in the input three-dimensional face that matches the reference data, using the nearest point-to-point distances;
(3) compute the rigid motion parameters, i.e. the rotation matrix R and the translation vector t: with the orthogonal matrix X obtained from the decomposition, R = X when det(X) = 1, and t = P̄ − R·Q̄, where P̄ and Q̄ are the centroids of the two sets;
(4) judge from the error between the transformed data set RQ + t and the reference set P whether the three-dimensional data are registered; after registration, compute the Euclidean distance between the input data and the three-dimensional face model data in the registry.
Here P and Q are the feature point sets to be matched, each containing N feature points.
Fig. 7 is a schematic diagram of three-dimensional face depth image feature extraction of the three-dimensional face reconstruction method based on gray-scale and depth information of the present invention. As shown in Fig. 7, after the test face image is input and Gabor-filtered, each filter response vector is compared with all primitive words of the visual sub-dictionary corresponding to its position and mapped, by distance matching, to the closest primitive, and the visual dictionary histogram feature of the original depth image is extracted; the flow is as follows:
1) segment the three-dimensional face depth image into several local texture regions;
2) for each Gabor filter response vector, map it to the vocabulary of its corresponding visual sub-dictionary according to its position, and on this basis build the visual dictionary histogram vector as the feature representation of the three-dimensional face;
3) concatenate the LBP histogram feature of the gray-scale image and the visual dictionary histogram feature of the depth image into a feature pool, and use a feature selection algorithm such as AdaBoost to choose from the pool the feature combination most effective for face recognition, realizing feature-level data fusion;
4) after the face features are obtained, a nearest-neighbor classifier performs the final face recognition, with the L1 distance chosen as the distance metric.
Fig. 8 is a flow block diagram and Fig. 9 a system block diagram of the multi-modal face recognition of the three-dimensional face reconstruction method based on gray-scale and depth information of the present invention. As shown in Figs. 8 and 9, the multi-modal fusion system includes multiple data sources, such as two-dimensional gray-scale images and three-dimensional depth images:
(1) for the two-dimensional gray-scale image, first perform feature point detection (eyes), then register using the obtained feature point positions; after the gray-scale image is registered, extract the LBP histogram feature from this data with the LBP algorithm;
(2) for the three-dimensional data, first perform feature point detection (nose tip) and register using the obtained feature points, then map the registered three-dimensional data to a face depth image and extract the visual dictionary histogram feature from this data with the visual dictionary algorithm.
This multi-modal system uses a feature-level fusion strategy: after the features of each data source are obtained, all features are concatenated into a feature pool, and each feature in the pool builds one weak classifier; the AdaBoost algorithm then picks out of the feature pool the features most effective for classification; finally, based on the features obtained by multi-modal feature-level fusion, a nearest-neighbor classifier computes the matching score, thereby realizing multi-modal face recognition.
Figure 10 is a kind of being carried out by 3D model of three-dimensional facial reconstruction method based on gray scale and depth information of the present invention Coupling flow chart, mainly comprises the following steps:
(1) iterative algorithm refinement corresponding relation
Two dimensional gray information before and the collection of three-dimensional depth information, rebuild 3D shape from two-dimensional shapes, need to make Reconstructed error minimizes
Here P represents matrix in two-dimentional projection, and z is the two-dimensional shapes of target, and alternative manner is in 2D characteristic point Registration 3D model, establishes rigidity (p={ s, α, beta, gamma, t }) and the conversion of non-rigid (r and s);
(2) corrected by matrix
It is assumed that there being semantic correspondence between 2D and 3D characteristic point, to select the corresponding 2D's of correct 3D in the form of matrix Characteristic point, semanteme here corresponded in the modelling phase it has been established that two-dimensional projection's mark of 3D mark is obtained by cascading recurrence;
(3) Constraining the visible landmarks
Restricting the objective to the visible landmarks, the cascaded regression evaluates the landmarks defined by
argmin_{p,r,s} Σ_{i∈ξ} ||P·x_i(p, r, s) − z_i||²
where ξ = {j | v_j = 1} denotes the subset of visible landmark indices;
(4) Two-dimensional measurements
Time-synchronized two-dimensional measurements (z(1), …, z(C)) are acquired; all C measurements represent the same three-dimensional face, viewed from different angles. Constraining the reconstruction over all measurements extends the formula above to:
argmin_{p(1),…,p(C),r,s} Σ_{k=1}^{C} Σ_{i∈ξ(k)} ||P·x_i(p(k), r, s) − z_i(k)||²
The superscript (k) denotes the k-th measurement, whose visibility set is ξ(k); because the same face is observed, only from different angles, the global rigid part (r) and the non-rigid part (s) are shared by all measurements;
(5) Determining the rigid and non-rigid parameters
Assuming that the rigid structure of the face varies little (parameter r) while the expression changes (parameter s), the problem is solved in the time domain:
1) Compute the rigid correction parameters:
argmin_{r_T} Σ_{t∈T} Σ_{i∈ξ(t)} ||P·x_i(p(t), r_T, 0) − z_i(t)||²
where T = {z(t) | t = 1, …, T} denotes the set of measurements over time and r_T is the rigid correction parameter computed from T; the non-rigid parameters are set to 0 in this step;
2) For each frame t ∈ [1, …, T], compute the per-frame pose and non-rigid parameters:
argmin_{p(t),s(t)} Σ_{i∈ξ(t)} ||P·x_i(p(t), r_T, s(t)) − z_i(t)||²
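The two-pass solution in step (5) — a shared rigid solve over the whole time window with the non-rigid part zeroed, followed by a per-frame non-rigid solve — can be sketched with ordinary least squares. As a simplifying assumption, rotation, scale and translation are omitted and the projection P is taken as orthographic (dropping the z coordinate), which makes both passes linear; the function and variable names are invented for the illustration.

```python
import numpy as np

def fit_rigid_then_nonrigid(Z, mean, theta, psi):
    """Two-pass fit of the deformable model x_i = mean_i + theta_i @ r + psi_i @ s.

    Z    : (T, M, 2) 2D measurements over T frames (orthographic projection).
    mean : (M, 3) mean shape; theta: (M, 3, d) rigid basis; psi: (M, 3, e).
    Pass 1 solves one rigid coefficient vector r over all frames with s = 0;
    pass 2 solves a per-frame non-rigid s with r held fixed."""
    T, M, _ = Z.shape
    d, e = theta.shape[2], psi.shape[2]
    # Pass 1: stack the x,y rows of every frame and solve for the shared r.
    A = theta[:, :2, :].reshape(-1, d)          # (2M, d) projected rigid basis
    A_all = np.tile(A, (T, 1))                  # same basis repeated per frame
    b_all = (Z - mean[None, :, :2]).reshape(-1)
    r, *_ = np.linalg.lstsq(A_all, b_all, rcond=None)
    # Pass 2: per-frame fit of the non-rigid residual.
    rigid_xy = mean[:, :2] + theta[:, :2, :] @ r
    B = psi[:, :2, :].reshape(-1, e)            # projected non-rigid basis
    S = np.empty((T, e))
    for t in range(T):
        S[t], *_ = np.linalg.lstsq(B, (Z[t] - rigid_xy).reshape(-1), rcond=None)
    return r, S
```

With zero-mean expression coefficients over the window, pass 1 recovers r exactly in this linear setting, mirroring the assumption that the rigid structure is stable across frames.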
Figure 11 plots reconstruction error rate against iteration count and vertex count for the three-dimensional face reconstruction method based on grayscale and depth information of the present invention. It can be seen that increasing the vertex count has only a weak effect on reducing the reconstruction error rate, while it slows down the regression model and the matching speed, so the vertex count takes a lower value; increasing the iteration count reduces the reconstruction error rate significantly while having little effect on model size, so the iteration count takes a higher value. When monocular camera images are used, the corresponding formula has multiple solutions; to avoid producing 3D hallucinations, multiple image frames are used simultaneously.
Figure 12 is the flow chart of the 3D reconstruction of the face in the three-dimensional face reconstruction method based on grayscale and depth information of the present invention. For a parameter vector
q: p(q) ∝ N(q; 0, Λ)
the prior on the parameters follows a normal distribution with mean 0 and variance Λ; principal component analysis is used to determine the d-dimensional part of the 3D basis vectors, then:
The rigid and the non-rigid part are modelled separately:
x_i(p, r, s) = s·R(x̄_i + θ_i·r + ψ_i·s) + t, (i = 1, …, M)
where the d-dimensional part of the 3D basis (θ = [θ1; …; θM] ∈ R^{3M×d}) describes the rigid deformation and the e-dimensional part (ψ = [ψ1; …; ψM] ∈ R^{3M×e}) describes the non-rigid deformation.
Further, to build the rigid part, an intermediate frame is selected from each video, and principal component analysis determines the basis vectors (θ) and the mean x̄, providing an overall linear subspace that describes the variation of face shape;
Further, the goal of the linear subspace describing the non-rigid deformation (ψ) is to build a model composed of independently trained PCA models that share soft boundaries. A part-based model is built that makes vertices highly correlated, forming dense regions, because such regions are compressed better by PCA. Data-driven segmentation is used to find facial expression data: 6000 selected frames from the data set are used, giving a data set D ∈ R^{6000×3072} composed of 6000 frames and 1024 three-dimensional vertices. D is divided into three subsets Dx, Dy, Dz ∈ R^{6000×1024}, each containing one spatial coordinate of the vertices, from which the correlation between vertices is measured: the correlation matrices computed from Dx, Dy and Dz are normalized and then averaged into a single correlation matrix C. Vertices of the same region should also be close to each other on the face surface, so the distances between model vertices are computed to form a distance matrix G normalized to the range [0, 1]; the two matrices are combined into one matrix.
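The combination of motion correlation (matrix C) and normalized inter-vertex distance (matrix G) described above can be sketched as below. The clustering that actually cuts the vertices into parts is omitted, and taking C·(1 − G) as the merged matrix is an illustrative choice, not necessarily the combination used in the patent.

```python
import numpy as np

def part_affinity(Dx, Dy, Dz, verts):
    """Combine motion correlation and surface distance into a vertex affinity.

    Dx, Dy, Dz : (F, V) per-axis vertex trajectories over F frames.
    verts      : (V, 3) rest-pose vertex positions.
    Returns a (V, V) matrix: high values mark vertices that move together and
    lie close on the face surface -- the input to the part segmentation."""
    # Per-axis correlation matrices, averaged into a single matrix C.
    C = (np.abs(np.corrcoef(Dx.T))
         + np.abs(np.corrcoef(Dy.T))
         + np.abs(np.corrcoef(Dz.T))) / 3.0
    # Pairwise Euclidean distance matrix G, scaled to [0, 1].
    diff = verts[:, None, :] - verts[None, :, :]
    G = np.sqrt((diff ** 2).sum(axis=-1))
    G /= G.max()
    # Nearby vertices (small G) with correlated motion (large C) score high.
    return C * (1.0 - G)
```

A standard clustering step (e.g. spectral clustering on this affinity) would then yield the dense, PCA-friendly regions described in the text.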
Figure 13 shows a face reconstruction image of the three-dimensional face reconstruction method based on grayscale and depth information of the present invention. It can be seen that, using multi-frame video images, 3D mesh vertices are obtained, the face is completely covered by the 3D point set, the anchor points remain consistent under action changes, and the face reconstruction is completed successfully.
For those skilled in the art, the present invention is not restricted to the details of the above-described embodiments; the present invention can be realized in other concrete forms without departing from its spirit or scope. Moreover, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications that fall within the scope of the invention.

Claims (10)

1. A three-dimensional face reconstruction method based on grayscale and depth information, characterised in that it mainly comprises: identifying face grayscale information (1); identifying face depth information (2); multimodal face recognition (3); matching by a 3D model (4); and 3D reconstruction of the face (5).
2. The method of claim 1, characterised in that identifying face grayscale information (1) comprises the following steps:
(1) Feature-region localization: the eye region is obtained using an eye detector, said eye detector being a strong classifier H obtained by the following algorithm:
Given a training sample set S = {(x1, y1), …, (xm, ym)} and a weak classifier space, where xi ∈ χ is a sample vector, yi = ±1 is the class label and m is the total number of samples, initialize the sample probability distribution
D_1(i) = 1/m, i = 1, …, m;
For each t = 1, …, T, perform the following operations on each weak classifier:
Partition the sample space χ into X1, X2, …, Xn;
∀x ∈ X_j: h(x) = (1/2)·ln((W_{+1}^j + ε) / (W_{−1}^j + ε)), j = 1, …, n,
where ε is a small positive constant;
Compute the normalization factor
Z = 2·Σ_j √(W_{+1}^j · W_{−1}^j);
Select the weak classifier in the weak classifier space that minimizes Z;
Update the training sample probability distribution
D_{t+1}(i) = D_t(i)·exp[−y_i·h_t(x_i)] / Z_t, i = 1, …, m,
where Z_t is a normalization factor chosen so that D_{t+1} is a probability distribution;
The final strong classifier H is
H(x) = sign[Σ_{t=1}^{T} h_t(x) − b]
(2) Registration is carried out using the obtained eye region positions; the LBP histogram feature is extracted from the registered eye-position data using the LBP algorithm, with value
LBP_{P,R} = Σ_{p=0}^{P−1} s(g_p − g_c)·2^p
This feature is fed to the grayscale image classifier to obtain the grayscale matching score.
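The LBP value above thresholds each of the P neighbours g_p on a circle of radius R against the centre pixel g_c and packs the resulting bits into a code. A minimal LBP(8,1) sketch over a grayscale patch might look like this (the block-wise histogram concatenation used for whole faces is omitted, and the neighbour ordering is one arbitrary convention):

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour LBP with radius 1: threshold each pixel's ring
    against the centre (s(g_p - g_c)), pack the bits into a code, and
    histogram the codes. Border pixels are skipped for simplicity.
    Returns a normalized 256-bin histogram."""
    g = img.astype(np.int32)
    c = g[1:-1, 1:-1]                    # centre pixels (borders skipped)
    # Neighbour offsets in the order p = 0..7 (one fixed convention).
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for p, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes += (nb >= c).astype(np.int32) << p   # s(g_p - g_c) * 2^p
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```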
3. The method of claim 1, characterised in that identifying face depth information (2) comprises the following steps:
(1) Feature-region localization: the position of the nose region of the face is determined;
(2) For three-dimensional data of different poses, after the registration reference region is obtained, data registration is carried out according to the ICP algorithm; after registration, the Euclidean distances between the input data and the three-dimensional face model data in the registry are computed;
(3) A depth image is acquired from the depth information; a filter compensates and denoises the noise points in the mapped depth image, and finally a robust expression region is selected to obtain the final three-dimensional face depth image;
(4) The visual dictionary histogram feature vector of the three-dimensional depth image is extracted: after the test face image is input and passed through Gabor filtering, each filter response vector is compared with all the primitive words of the visual dictionary corresponding to its position and, by distance matching, mapped to the closest primitive; the visual dictionary histogram feature of the original depth image is thereby extracted, and this feature is fed to the depth image classifier to obtain the matching score.
4. The method of claim 3, characterised in that the feature-region localization (1) comprises the following steps:
(1) Thresholding: the threshold of the region average negative effective energy density is determined, defined as thr;
(2) Depth information is used to select the data to be processed: using the depth information, the face data within a certain depth range is extracted as the data to be processed;
(3) Normal vector computation: the normal vector information of the face data selected by depth is computed;
(4) Computation of the region average negative effective energy density: according to its definition, the average negative effective energy density of each connected region of the data to be processed is obtained, and the connected region with the maximum density value is selected;
(5) Decide whether the nose region has been found: when the value of the current region exceeds the predefined threshold thr, this region is the nose region; otherwise return to step (1) and restart the loop.
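A loose sketch of this detection loop is given below. The patent's "region average negative effective energy density" score is abstracted here as plain per-cell point density, and the coordinates are assumed normalized to [0, 1]; the real method scores connected regions using the surface normals computed in step (3), which is omitted here.

```python
import numpy as np

def find_nose_region(points, thr, depth_range=(0.3, 0.8), grid=4):
    """Depth-gate a face point cloud, grid the x,y plane, and pick the
    highest-scoring cell as the nose candidate. Returns the (row, col) of
    the winning cell, or None when its score does not exceed thr."""
    z = points[:, 2]
    pts = points[(z >= depth_range[0]) & (z <= depth_range[1])]
    if len(pts) == 0:
        return None
    # Per-cell point counts over the normalized x,y plane.
    H, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=grid,
                             range=[[0.0, 1.0], [0.0, 1.0]])
    H /= H.sum()                              # normalized per-cell density
    best = np.unravel_index(int(np.argmax(H)), H.shape)
    return best if H[best] > thr else None
```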
5. The method of claim 3, characterised in that the ICP algorithm comprises the following steps:
(1) Determine the matched data-set pair: the reference point set P is selected from the three-dimensional nose data of the reference template;
(2) The data point set Q matching the reference data is selected from the input three-dimensional face using the nearest point-to-point distance;
(3) Compute the rigid motion parameters: the rotation matrix R and the translation vector t are computed; when the determinant of X is 1, R = X and t = P − R·Q;
(4) Whether the three-dimensional data sets are registered is judged from the error between the rigidly transformed data set R·Q + t and the reference data set P; after registration, the Euclidean distance between the input data and the three-dimensional face model data in the registry is
D(P, Q) = (1/N)·Σ_{i=1}^{N} (p_i − q_i)²
where P and Q are the feature point sets to be matched, each containing N feature points.
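The registration loop of the ICP algorithm can be sketched as follows: alternate nearest-neighbour matching with a closed-form SVD (Kabsch) solve for the rotation R and translation t. This is a generic textbook ICP rather than the patent's exact procedure; the brute-force nearest-neighbour search is only suitable for small point sets, and the determinant check mirrors the claim's condition that R be a proper rotation.

```python
import numpy as np

def icp_align(P, Q, iters=20):
    """Point-to-point ICP aligning input set Q (M, 3) to reference set P (N, 3).
    Returns R, t such that Q @ R.T + t approximates P, plus the final
    mean-squared registration error D(P, Q)."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        Qt = Q @ R.T + t
        # Nearest reference point for every transformed input point.
        idx = np.argmin(((Qt[:, None, :] - P[None, :, :]) ** 2).sum(-1), axis=1)
        matched = P[idx]
        # Closed-form rigid motion between the matched centred point sets.
        mq, mp = Qt.mean(axis=0), matched.mean(axis=0)
        H = (Qt - mq).T @ (matched - mp)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        Rd = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # enforce det(R) = +1
        R, t = Rd @ R, Rd @ t + (mp - Rd @ mq)     # compose with current pose
    Qt = Q @ R.T + t
    idx = np.argmin(((Qt[:, None, :] - P[None, :, :]) ** 2).sum(-1), axis=1)
    err = ((Qt - P[idx]) ** 2).sum(axis=1).mean()
    return R, t, err
```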
6. The method of claim 3, characterised in that step (4) comprises the following steps:
1) The three-dimensional face depth image is segmented into several local texture regions;
2) Each Gabor filter response vector is mapped, according to its position, to a word of its corresponding visual dictionary, and the visual dictionary histogram vector built on this basis serves as the feature expression of the three-dimensional face;
3) The LBP histogram feature of the grayscale image and the visual dictionary histogram feature of the depth image are concatenated to form a feature pool; a feature selection algorithm such as Adaboost chooses from the feature pool the feature combination most effective for face recognition, realizing data fusion at the feature level;
4) After the face features are obtained, a nearest-neighbour classifier performs the final face recognition, with the L1 distance selected as the distance metric.
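The nearest-word mapping in step 2) can be sketched as below. Gabor filtering and the per-position sub-dictionaries are omitted, so a single shared codebook stands in for the full position-dependent scheme; the names are invented for the illustration.

```python
import numpy as np

def visual_dict_histogram(responses, codebook):
    """Map each filter-response vector to its nearest codebook word and return
    the normalized word histogram used as the depth-image descriptor.
    responses: (N, D) response vectors, codebook: (K, D) primitive words."""
    # Squared Euclidean distance from every response to every word.
    d2 = ((responses[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    words = d2.argmin(axis=1)                 # closest primitive per vector
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```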
7. The method of claim 1, characterised in that the multimodal face recognition (3) comprises a multimodal fusion system with multiple data sources, such as two-dimensional grayscale images and three-dimensional depth images:
(1) For the two-dimensional grayscale image, feature point detection (eyes) is performed first, then registration is carried out using the obtained feature point positions; after the grayscale image is registered, the LBP histogram feature is extracted from the data using the LBP algorithm;
(2) For the depth data, feature point detection (nose) is performed first and registration is carried out using the obtained feature points; the registered three-dimensional data is then mapped to a face depth image, from which the visual dictionary histogram feature is extracted using the visual dictionary algorithm;
This multimodal system adopts a feature-level fusion strategy: after the features of each data source are obtained, all features are concatenated into a feature pool and a weak classifier is constructed for each feature in the pool; the Adaboost algorithm then selects from the feature pool the features most effective for classification; finally, based on the features obtained by multimodal feature-level fusion, a nearest-neighbour classifier computes the matching score, thereby realizing multimodal face recognition.
8. The method of claim 1, characterised in that matching by the 3D model (4) comprises the following steps:
(1) Iterative refinement of correspondences
With the collected two-dimensional grayscale information and three-dimensional depth information, the 3D shape is reconstructed from the 2D shape by minimizing the reconstruction error
argmin_{p,r,s} Σ_{i=1}^{M} ||P·x_i(p, r, s) − z_i||²
where P denotes the two-dimensional projection matrix and z is the target 2D shape; the iterative method registers the 3D model to the 2D feature points, establishing the rigid transformation (p = {s, α, β, γ, t}) and the non-rigid transformation (r and s);
Increasing the vertex count has only a weak effect on reducing the reconstruction error rate, yet it slows the regression model and the matching speed, so the vertex count takes a lower value; increasing the iteration count reduces the reconstruction error rate significantly while having little effect on model size, so the iteration count takes a higher value;
(2) Correction by matrix
Assuming semantic correspondence between the 2D and 3D feature points, the 2D feature points corresponding to the correct 3D points are selected in matrix form; the semantic correspondence here was already established in the modelling phase, and the 2D projections of the 3D landmarks are obtained by cascaded regression;
(3) Constraining the visible landmarks
Restricting the objective to the visible landmarks, the cascaded regression evaluates the landmarks defined by
argmin_{p,r,s} Σ_{i∈ξ} ||P·x_i(p, r, s) − z_i||²
where ξ = {j | v_j = 1} denotes the subset of visible landmark indices;
(4) Two-dimensional measurements
Time-synchronized two-dimensional measurements (z(1), …, z(C)) are acquired; all C measurements represent the same three-dimensional face, viewed from different angles; constraining the reconstruction over all measurements extends the formula above to
argmin_{p(1),…,p(C),r,s} Σ_{k=1}^{C} Σ_{i∈ξ(k)} ||P·x_i(p(k), r, s) − z_i(k)||²
where the superscript (k) denotes the k-th measurement, whose visibility set is ξ(k); because the same face is observed, only from different angles, the global rigid part (r) and the non-rigid part (s) are shared by all measurements;
(5) Determining the rigid and non-rigid parameters
Assuming that the rigid structure of the face varies little (parameter r) while the expression changes (parameter s), the problem is solved in the time domain:
1) Compute the rigid correction parameters:
argmin_{r_T} Σ_{t∈T} Σ_{i∈ξ(t)} ||P·x_i(p(t), r_T, 0) − z_i(t)||²
where T = {z(t) | t = 1, …, T} denotes the set of measurements over time and r_T is the rigid correction parameter computed from T; the non-rigid parameters are set to 0 in this step;
2) For each frame t ∈ [1, …, T], compute the per-frame pose and non-rigid parameters:
argmin_{p(t),s(t)} Σ_{i∈ξ(t)} ||P·x_i(p(t), r_T, s(t)) − z_i(t)||².
9. The method of claim 1, characterised in that the 3D reconstruction of the face (5) comprises: for a parameter vector
q: p(q) ∝ N(q; 0, Λ)
the prior on the parameters follows a normal distribution with mean 0 and variance Λ; principal component analysis is used to determine the d-dimensional part of the 3D basis vectors; the rigid and the non-rigid part are then modelled separately:
x_i(p, r, s) = s·R(x̄_i + θ_i·r + ψ_i·s) + t, (i = 1, …, M)
where the d-dimensional part of the 3D basis (θ = [θ1; …; θM] ∈ R^{3M×d}) describes the rigid deformation and the e-dimensional part (ψ = [ψ1; …; ψM] ∈ R^{3M×e}) describes the non-rigid deformation.
10. The method of claim 9, characterised in that building the rigid part comprises selecting an intermediate frame from each video and applying principal component analysis to determine the basis vectors (θ) and the mean x̄, providing an overall linear subspace that describes the variation of face shape; and in that the linear subspace describing the non-rigid deformation (ψ) is built as a model composed of independently trained PCA models that share soft boundaries: a part-based model is built that makes vertices highly correlated, forming dense regions, because such regions are compressed better by PCA; data-driven segmentation is used to find facial expression data, using 6000 selected frames from the data set, giving a data set D ∈ R^{6000×3072} composed of 6000 frames and 1024 three-dimensional vertices; D is divided into three subsets Dx, Dy, Dz ∈ R^{6000×1024}, each containing one spatial coordinate of the vertices, from which the correlation between vertices is measured; the correlation matrices computed from Dx, Dy and Dz are normalized and then averaged into a single correlation matrix C; vertices of the same region should also be close to each other on the face surface, so the distances between model vertices are computed to form a distance matrix G normalized to the range [0, 1]; the two matrices are combined into one matrix.
CN201610794122.1A 2016-08-31 2016-08-31 A kind of three-dimensional facial reconstruction method based on gray scale and depth information Pending CN106469465A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201610794122.1A CN106469465A (en) 2016-08-31 2016-08-31 A kind of three-dimensional facial reconstruction method based on gray scale and depth information
PCT/CN2016/098100 WO2018040099A1 (en) 2016-08-31 2016-09-05 Three-dimensional face reconstruction method based on grayscale and depth information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610794122.1A CN106469465A (en) 2016-08-31 2016-08-31 A kind of three-dimensional facial reconstruction method based on gray scale and depth information

Publications (1)

Publication Number Publication Date
CN106469465A true CN106469465A (en) 2017-03-01

Family

ID=58230456

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610794122.1A Pending CN106469465A (en) 2016-08-31 2016-08-31 A kind of three-dimensional facial reconstruction method based on gray scale and depth information

Country Status (2)

Country Link
CN (1) CN106469465A (en)
WO (1) WO2018040099A1 (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107045631A (en) * 2017-05-25 2017-08-15 北京华捷艾米科技有限公司 Facial feature points detection method, device and equipment
CN107886568A (en) * 2017-12-09 2018-04-06 东方梦幻文化产业投资有限公司 A kind of method and system that human face expression is rebuild using 3D Avatar
CN107992797A (en) * 2017-11-02 2018-05-04 中控智慧科技股份有限公司 Face identification method and relevant apparatus
CN108876708A (en) * 2018-05-31 2018-11-23 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and storage medium
CN109697749A (en) * 2017-10-20 2019-04-30 虹软科技股份有限公司 A kind of method and apparatus for three-dimensional modeling
CN109729285A (en) * 2019-01-17 2019-05-07 广州华多网络科技有限公司 Fuse lattice special efficacy generation method, device, electronic equipment and storage medium
WO2019100216A1 (en) * 2017-11-21 2019-05-31 深圳市柔宇科技有限公司 3d modeling method, electronic device, storage medium and program product
CN110032927A (en) * 2019-02-27 2019-07-19 视缘(上海)智能科技有限公司 A kind of face identification method
CN110070611A (en) * 2019-04-22 2019-07-30 清华大学 A kind of face three-dimensional rebuilding method and device based on depth image fusion
CN110163953A (en) * 2019-03-11 2019-08-23 腾讯科技(深圳)有限公司 Three-dimensional facial reconstruction method, device, storage medium and electronic device
WO2019219012A1 (en) * 2018-05-15 2019-11-21 清华大学 Three-dimensional reconstruction method and device uniting rigid motion and non-rigid deformation
CN110689609A (en) * 2019-09-27 2020-01-14 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
WO2020108304A1 (en) * 2018-11-29 2020-06-04 广州市百果园信息技术有限公司 Method for reconstructing face mesh model, device, apparatus and storage medium
CN111627092A (en) * 2020-05-07 2020-09-04 江苏原力数字科技股份有限公司 Method for constructing high-strength bending constraint from topological relation
CN112562082A (en) * 2020-08-06 2021-03-26 长春理工大学 Three-dimensional face reconstruction method and system
CN113366491A (en) * 2021-04-26 2021-09-07 华为技术有限公司 Eyeball tracking method, device and storage medium
CN114727002A (en) * 2021-01-05 2022-07-08 北京小米移动软件有限公司 Shooting method and device, terminal equipment and storage medium
US11972527B2 (en) 2018-11-29 2024-04-30 Bigo Technology Pte. Ltd. Method and apparatus for reconstructing face mesh model, and storage medium

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108717730B (en) * 2018-04-10 2023-01-10 福建天泉教育科技有限公司 3D character reconstruction method and terminal
CN109100731B (en) * 2018-07-17 2022-11-11 重庆大学 Mobile robot positioning method based on laser radar scanning matching algorithm
CN110826580B (en) * 2018-08-10 2023-04-14 浙江万里学院 Object two-dimensional shape classification method based on thermonuclear characteristics
US10885702B2 (en) * 2018-08-10 2021-01-05 Htc Corporation Facial expression modeling method, apparatus and non-transitory computer readable medium of the same
CN109325994B (en) * 2018-09-11 2023-03-24 合肥工业大学 Method for enhancing data based on three-dimensional face
CN110942479B (en) * 2018-09-25 2023-06-02 Oppo广东移动通信有限公司 Virtual object control method, storage medium and electronic device
CN111144180B (en) * 2018-11-06 2023-04-07 天地融科技股份有限公司 Risk detection method and system for monitoring video
CN109614879B (en) * 2018-11-19 2022-12-02 温州大学 Hopper particle detection method based on image recognition
CN111382626B (en) * 2018-12-28 2023-04-18 广州市百果园信息技术有限公司 Method, device and equipment for detecting illegal image in video and storage medium
CN110084259B (en) * 2019-01-10 2022-09-20 谢飞 Facial paralysis grading comprehensive evaluation system combining facial texture and optical flow characteristics
CN110046543A (en) * 2019-02-27 2019-07-23 视缘(上海)智能科技有限公司 A kind of three-dimensional face identification method based on plane parameter
CN110020613B (en) * 2019-03-19 2022-12-06 广州爱科赛尔云数据科技有限公司 Front-end face real-time detection method based on Jetson TX1 platform
CN110276408B (en) * 2019-06-27 2022-11-22 腾讯科技(深圳)有限公司 3D image classification method, device, equipment and storage medium
CN110349140B (en) * 2019-07-04 2023-04-07 五邑大学 Traditional Chinese medicine ear diagnosis image processing method and device
CN111127631B (en) * 2019-12-17 2023-07-28 深圳先进技术研究院 Three-dimensional shape and texture reconstruction method, system and storage medium based on single image
CN111402403B (en) * 2020-03-16 2023-06-20 中国科学技术大学 High-precision three-dimensional face reconstruction method
CN113673287B (en) * 2020-05-15 2023-09-12 深圳市光鉴科技有限公司 Depth reconstruction method, system, equipment and medium based on target time node
CN111754557B (en) * 2020-05-29 2023-02-17 清华大学 Target geographic area face template generation method and device
CN111681309B (en) * 2020-06-08 2023-07-25 北京师范大学 Edge computing platform for generating voxel data and edge image feature ID matrix
CN111652974B (en) * 2020-06-15 2023-08-25 腾讯科技(深圳)有限公司 Method, device, equipment and storage medium for constructing three-dimensional face model
CN111951372B (en) * 2020-06-30 2024-01-05 重庆灵翎互娱科技有限公司 Three-dimensional face model generation method and equipment
CN111968152B (en) * 2020-07-15 2023-10-17 桂林远望智能通信科技有限公司 Dynamic identity recognition method and device
CN112017230A (en) * 2020-09-07 2020-12-01 浙江光珀智能科技有限公司 Three-dimensional face model modeling method and system
CN112085117B (en) * 2020-09-16 2022-08-30 北京邮电大学 Robot motion monitoring visual information fusion method based on MTLBP-Li-KAZE-R-RANSAC
CN112257552B (en) * 2020-10-19 2023-09-05 腾讯科技(深圳)有限公司 Image processing method, device, equipment and storage medium
CN112614213B (en) * 2020-12-14 2024-01-23 杭州网易云音乐科技有限公司 Facial expression determining method, expression parameter determining model, medium and equipment
CN112882666A (en) * 2021-03-15 2021-06-01 上海电力大学 Three-dimensional modeling and model filling-based 3D printing system and method
CN113254684B (en) * 2021-06-18 2021-10-29 腾讯科技(深圳)有限公司 Content aging determination method, related device, equipment and storage medium
CN113642545B (en) * 2021-10-15 2022-01-28 北京万里红科技有限公司 Face image processing method based on multi-task learning
CN116168163B (en) * 2023-03-29 2023-11-17 湖北工业大学 Three-dimensional model construction method, device and storage medium
CN116109743B (en) * 2023-04-11 2023-06-20 广州智算信息技术有限公司 Digital person generation method and system based on AI and image synthesis technology
CN117218119B (en) * 2023-11-07 2024-01-26 苏州瑞霏光电科技有限公司 Quality detection method and system for wafer production

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080309662A1 (en) * 2005-12-14 2008-12-18 Tal Hassner Example Based 3D Reconstruction
US20110148868A1 (en) * 2009-12-21 2011-06-23 Electronics And Telecommunications Research Institute Apparatus and method for reconstructing three-dimensional face avatar through stereo vision and face detection
CN102254154A (en) * 2011-07-05 2011-11-23 南京大学 Method for authenticating human-face identity based on three-dimensional model reconstruction
CN102592309A (en) * 2011-12-26 2012-07-18 北京工业大学 Modeling method of nonlinear three-dimensional face
CN104598878A (en) * 2015-01-07 2015-05-06 深圳市唯特视科技有限公司 Multi-modal face recognition device and method based on multi-layer fusion of gray level and depth information
CN104778441A (en) * 2015-01-07 2015-07-15 深圳市唯特视科技有限公司 Multi-mode face identification device and method fusing grey information and depth information

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101404091B (en) * 2008-11-07 2011-08-31 重庆邮电大学 Three-dimensional human face reconstruction method and system based on two-step shape modeling
CN104008366A (en) * 2014-04-17 2014-08-27 深圳市唯特视科技有限公司 3D intelligent recognition method and system for biology
CN103971122B (en) * 2014-04-30 2018-04-17 深圳市唯特视科技有限公司 Three-dimensional face based on depth image describes method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LÁSZLÓ A. JENI et al.: "Dense 3D face alignment from 2D video for real-time use", IMAGE AND VISION COMPUTING *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107045631B (en) * 2017-05-25 2019-12-24 北京华捷艾米科技有限公司 Method, device and equipment for detecting human face characteristic points
CN107045631A (en) * 2017-05-25 2017-08-15 北京华捷艾米科技有限公司 Facial feature points detection method, device and equipment
CN109697749A (en) * 2017-10-20 2019-04-30 虹软科技股份有限公司 A kind of method and apparatus for three-dimensional modeling
CN107992797B (en) * 2017-11-02 2022-02-08 中控智慧科技股份有限公司 Face recognition method and related device
CN107992797A (en) * 2017-11-02 2018-05-04 中控智慧科技股份有限公司 Face identification method and relevant apparatus
WO2019100216A1 (en) * 2017-11-21 2019-05-31 深圳市柔宇科技有限公司 3d modeling method, electronic device, storage medium and program product
CN107886568A (en) * 2017-12-09 2018-04-06 东方梦幻文化产业投资有限公司 A kind of method and system that human face expression is rebuild using 3D Avatar
CN107886568B (en) * 2017-12-09 2020-03-03 东方梦幻文化产业投资有限公司 Method and system for reconstructing facial expression by using 3D Avatar
WO2019219012A1 (en) * 2018-05-15 2019-11-21 清华大学 Three-dimensional reconstruction method and device uniting rigid motion and non-rigid deformation
CN108876708A (en) * 2018-05-31 2018-11-23 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and storage medium
WO2020108304A1 (en) * 2018-11-29 2020-06-04 广州市百果园信息技术有限公司 Method for reconstructing face mesh model, device, apparatus and storage medium
US11972527B2 (en) 2018-11-29 2024-04-30 Bigo Technology Pte. Ltd. Method and apparatus for reconstructing face mesh model, and storage medium
CN109729285A (en) * 2019-01-17 2019-05-07 广州华多网络科技有限公司 Fuse lattice special efficacy generation method, device, electronic equipment and storage medium
CN110032927A (en) * 2019-02-27 2019-07-19 视缘(上海)智能科技有限公司 A kind of face identification method
CN110163953A (en) * 2019-03-11 2019-08-23 腾讯科技(深圳)有限公司 Three-dimensional facial reconstruction method, device, storage medium and electronic device
CN110163953B (en) * 2019-03-11 2023-08-25 腾讯科技(深圳)有限公司 Three-dimensional face reconstruction method and device, storage medium and electronic device
CN110070611A (en) * 2019-04-22 2019-07-30 清华大学 A kind of face three-dimensional rebuilding method and device based on depth image fusion
CN110070611B (en) * 2019-04-22 2020-12-01 清华大学 Face three-dimensional reconstruction method and device based on depth image fusion
CN110689609A (en) * 2019-09-27 2020-01-14 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111627092A (en) * 2020-05-07 2020-09-04 江苏原力数字科技股份有限公司 Method for constructing high-strength bending constraint from topological relation
CN112562082A (en) * 2020-08-06 2021-03-26 长春理工大学 Three-dimensional face reconstruction method and system
CN114727002A (en) * 2021-01-05 2022-07-08 北京小米移动软件有限公司 Shooting method and device, terminal equipment and storage medium
CN113366491A (en) * 2021-04-26 2021-09-07 华为技术有限公司 Eyeball tracking method, device and storage medium
WO2022226747A1 (en) * 2021-04-26 2022-11-03 华为技术有限公司 Eyeball tracking method and apparatus and storage medium

Also Published As

Publication number Publication date
WO2018040099A1 (en) 2018-03-08

Similar Documents

Publication Publication Date Title
CN106469465A (en) A kind of three-dimensional facial reconstruction method based on gray scale and depth information
US11501508B2 (en) Parameterized model of 2D articulated human shape
Yang et al. Fine-grained recurrent neural networks for automatic prostate segmentation in ultrasound images
Paragios et al. Non-rigid registration using distance functions
CN101398886B (en) Rapid three-dimensional face identification method based on bi-eye passiveness stereo vision
US9189855B2 (en) Three dimensional close interactions
Cai et al. Multi-modality vertebra recognition in arbitrary views using 3D deformable hierarchical model
Mori Guiding model search using segmentation
WO2017219391A1 (en) Face recognition system based on three-dimensional data
CN102880866B (en) Method for extracting face features
Azouz et al. Automatic locating of anthropometric landmarks on 3D human models
CN104598878A (en) Multi-modal face recognition device and method based on multi-layer fusion of gray level and depth information
CN106157372A (en) A kind of 3D face mesh reconstruction method based on video image
CN104778441A (en) Multi-mode face identification device and method fusing grey information and depth information
CN106991411B (en) Remote Sensing Target based on depth shape priori refines extracting method
CN112784736A (en) Multi-mode feature fusion character interaction behavior recognition method
CN110263605A (en) Pedestrian's dress ornament color identification method and device based on two-dimension human body guise estimation
CN104573722A (en) Three-dimensional face race classifying device and method based on three-dimensional point cloud
Kanaujia et al. 3D human pose and shape estimation from multi-view imagery
Ming et al. A unified 3D face authentication framework based on robust local mesh SIFT feature
Chen et al. Learning shape priors for single view reconstruction
Timoner Compact representations for fast nonrigid registration of medical images
Häne et al. An overview of recent progress in volumetric semantic 3D reconstruction
Guo et al. Photo-realistic face images synthesis for learning-based fine-scale 3D face reconstruction
CN113468923A (en) Human-object interaction behavior detection method based on fine-grained multi-modal common representation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2017-03-01