CN110363858A - Three-dimensional facial reconstruction method and system - Google Patents
- Publication number: CN110363858A (application CN201910524707.5A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
All within G (Physics), G06 (Computing; Calculating or Counting), G06T (Image Data Processing or Generation, in General):
- G06T 15/04 — Texture mapping (under G06T 15/00, 3D image rendering)
- G06T 17/20 — Finite element generation, e.g. wire-frame surface description, tessellation (under G06T 17/00, 3D modelling)
- G06T 7/33 — Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods (under G06T 7/00, image analysis)
- G06T 2207/10024 — Color image (image acquisition modality)
- G06T 2207/10028 — Range image; depth image; 3D point clouds (image acquisition modality)
- G06T 2207/20081 — Training; learning (special algorithmic details)
- G06T 2207/30201 — Face (under G06T 2207/30196, human being; person)
Landscapes
- Engineering & Computer Science; Physics & Mathematics; General Physics & Mathematics; Theoretical Computer Science; Computer Graphics; Computer Vision & Pattern Recognition; Geometry; Software Systems; Processing or Creating Images; Image Processing; Image Generation
Abstract
The present invention provides a three-dimensional facial reconstruction method and system. The method includes: acquiring color images and depth images of a target face from at least two viewing angles, and judging whether the color images and the depth images are aligned; obtaining the three-dimensional coordinates of the landmark points of the target face and the original target face three-dimensional point cloud model; filtering out the point cloud model of the target face; performing coarse point cloud matching to obtain a coarsely registered target face three-dimensional point cloud model; performing fine registration to obtain a finely registered target face three-dimensional point cloud model; performing fusion de-duplication, meshing, and mesh smoothing, then performing texture enhancement to obtain an optimized target face mesh model; and producing a texture map and applying texture mapping to the optimized target face mesh model to obtain the final target face mesh model. The reconstructed face model is of higher quality and comes closer to the appearance of the real face.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a three-dimensional facial reconstruction method and system.
Background art
With the rapid development and popularization of depth cameras, three-dimensional data of objects can be acquired ever more quickly and accurately, and depth cameras have quickly found application in fields such as three-dimensional spatial mapping, 3D object reconstruction, intelligent human-machine interaction, and VR. Among these applications, reconstructing a three-dimensional face model from facial depth data is a current research hotspot.
In the prior art, however, reconstruction generally builds a face template by machine learning, which lacks authenticity.
Summary of the invention
To solve the problems in the prior art, the present invention provides a three-dimensional facial reconstruction method and system.
The technical solution adopted by the present invention is as follows:
A three-dimensional facial reconstruction method includes the following steps. S1: acquire color images and depth images of a target face from at least two viewing angles, and judge whether the color images and the depth images are aligned. S2: if the color images and the depth images are aligned, extract the two-dimensional coordinates of the landmark points of the target face in the color images by deep learning, and obtain the three-dimensional coordinates of the landmark points of the target face according to the correspondence between the color images and the depth images. S3: estimate the position of the target face from the three-dimensional coordinates of its landmark points and set a depth-value threshold, so as to filter the point cloud model of the target face out of the original target face three-dimensional point cloud model, where the original target face three-dimensional point cloud model is obtained from the color images and the depth images. S4: perform coarse point cloud matching on the differently oriented point clouds in the point cloud model of the target face to obtain a coarsely registered target face three-dimensional point cloud model. S5: perform fine registration on the coarsely registered model to obtain a finely registered target face three-dimensional point cloud model. S6: perform fusion de-duplication, meshing, and mesh smoothing on the finely registered model, then perform texture enhancement to obtain an optimized target face mesh model. S7: produce a texture map from the optimized target face mesh model and the correspondence between the color images and the depth images, and apply texture mapping to the optimized mesh model according to the texture map, obtaining the final target face mesh model.
Preferably, in step S1, acquiring the color images and depth images of the target face from at least two viewing angles includes: using an RGB-D camera to acquire the color images and the depth images of the target face from at least two viewing angles at equal angular intervals.
Preferably, in step S2, obtaining the three-dimensional coordinates of the landmark points of the target face according to the correspondence between the color images and the depth images includes: let the intrinsic parameters of the depth camera in the RGB-D camera be the pixel focal lengths fx, fy and the principal point offsets u0, v0; then the transformation that reconstructs the three-dimensional coordinates [xc, yc, zc] in the depth camera coordinate system from an image point [u, v] of the depth image is: xc = zc · (u − u0) / fx; yc = zc · (v − v0) / fy; zc = zc; where zc is the depth value at the pixel of the depth image.
Preferably, step S3 further includes: S31: perform normal computation on the point cloud data in the point cloud model of the target face: generate a K-D tree from the point cloud of the model and establish the topology of the point cloud; search the neighborhood of each point, reduce the neighborhood points to a two-dimensional plane by principal component analysis, and take the normal of that plane as the normal of the point; then orient the normal according to the position of the depth camera, completing the normal computation. S32: for each point in the point cloud model of the target face, compute the mean of its distances to the points in its neighborhood; if the mean exceeds a preset threshold, the point is judged to be noise and removed.
Preferably, step S4 includes the following steps. S41: take the point cloud formed by the three-dimensional coordinates of the landmark points of the target face in the previous frame as the reference point cloud Pr, and the point cloud formed by the three-dimensional coordinates of the landmark points of the target face in the following frame as the target point cloud Pt; compute the rotation matrix R and translation matrix T from the landmark points shared by Pr and Pt. S42: register all point clouds in the point cloud model of the target face onto one designated point cloud according to the registration equation Pr = R·Pt + T, obtaining the coarsely registered target face three-dimensional point cloud model, where the designated point cloud is the frontal face point cloud.
Preferably, step S5 includes the following steps. S51: take the target face three-dimensional point cloud model of the previous frame in the coarsely registered model as the reference point cloud and that of the following frame as the target point cloud; for each point in the target point cloud, match the closest point in the reference point cloud, and solve for the rigid transformation that minimizes the root-mean-square distance of the matched point pairs, obtaining an initial translation matrix and initial rotation vector; transform the target point cloud with the initial translation matrix and initial rotation vector, and optimize iteratively until the iteration error falls below a set threshold, obtaining the registered rotation matrix and translation matrix. S52: using the registered rotation and translation matrices together with the registration equation, register the coarsely registered target face three-dimensional point cloud model to obtain the finely registered target face three-dimensional point cloud model.
Preferably, step S6 includes the following steps. S61: perform fusion de-duplication on the finely registered target face three-dimensional point cloud model, forming a fused target face three-dimensional point cloud model. S62: perform Poisson reconstruction on the fused model to generate a target face mesh model. S63: perform mesh smoothing on the target face mesh model. S64: optimize the depth image of the first frontal frame based on the acquired first-frame frontal color image, and perform texture enhancement on the smoothed target face mesh model, obtaining the optimized target face mesh model.
Preferably, producing the texture map from the optimized target face mesh model and the correspondence between the color images and the depth images includes: S71: obtain the correspondence between the mesh vertices of the optimized target face mesh model and the pixels of the color images. S72: on the basis of this correspondence, estimate from the neighborhood information of each mesh vertex its positional relationship to each depth camera at acquisition time, so as to determine the pixel corresponding to the vertex. S73: produce an initial texture map from these pixels. S74: remove the color differences at image seams on the initial texture map by color blending correction, obtaining the final texture map.
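The color-blend correction of step S74 is not specified further here; one common approach is to cross-fade the two texture patches across a narrow overlap so the joined strip has no hard color step. The sketch below is a minimal, hypothetical illustration of that idea, assuming a simple linear ramp; the `blend_seam` helper and the strip sizes are invented for the example, not taken from the patent.

```python
import numpy as np

def blend_seam(left, right, width=4):
    """Linearly cross-fade the last `width` columns of the left patch into
    the first `width` columns of the right patch (a seam-blending sketch)."""
    w = np.linspace(0.0, 1.0, width)[None, :, None]  # blending ramp across the seam
    overlap = (1 - w) * left[:, -width:].astype(float) + w * right[:, :width].astype(float)
    return np.concatenate([left[:, :-width].astype(float), overlap,
                           right[:, width:].astype(float)], axis=1)

# Two 1x8 RGB strips of constant but different color: the joined strip
# ramps smoothly from 100 to 200 over the 4-column overlap.
left = np.full((1, 8, 3), 100.0)
right = np.full((1, 8, 3), 200.0)
strip = blend_seam(left, right)
```

A real pipeline would blend along the actual seam path in texture space rather than a straight column boundary, but the weighting principle is the same.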
The present invention also provides a three-dimensional facial reconstruction system, comprising: a first unit, which acquires color images and depth images of a target face from at least two viewing angles and judges whether the color images and the depth images are aligned; a second unit, which, when the color images and the depth images are aligned, extracts the two-dimensional coordinates of the landmark points of the target face in the color images by deep learning and obtains the three-dimensional coordinates of the landmark points according to the correspondence between the color images and the depth images; a third unit, which estimates the position of the target face from the three-dimensional coordinates of its landmark points and sets a depth-value threshold, so as to filter the point cloud model of the target face out of the original target face three-dimensional point cloud model, the original model being obtained from the color images and the depth images; a fourth unit, which performs coarse point cloud matching on the differently oriented point clouds in the point cloud model of the target face to obtain a coarsely registered target face three-dimensional point cloud model; a fifth unit, which performs fine registration on the coarsely registered model to obtain a finely registered target face three-dimensional point cloud model; a sixth unit, which performs fusion de-duplication, meshing, and mesh smoothing on the finely registered model and then texture enhancement, obtaining an optimized target face mesh model; and a seventh unit, which produces a texture map from the optimized target face mesh model and the correspondence between the color images and the depth images, and applies texture mapping to the optimized mesh model according to the texture map, obtaining the final target face mesh model.
The present invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the methods above.
The beneficial effects of the invention are as follows: a three-dimensional facial reconstruction method and system are provided that are based on real face image information, rather than performing reconstruction by matching a machine-learned face template; alignment of faces in different poses is achieved by extracting and registering three-dimensional face landmark points; the errors of the finely registered point clouds are averaged out, improving the accuracy of the face three-dimensional point cloud model; an illumination model is estimated from the color images, and the depth is optimized directly through the link between depth and normals, significantly improving the texture detail of the smoothed mesh; and, based on the image correspondence, the neighborhood information of each mesh vertex and its positional relationship to the depth camera accurately determine the mapping between the mesh and the color images, so that the texture-mapped face model comes closer to the appearance of the real face, overcoming the distortion caused in the prior art by building face templates through machine learning.
Brief description of the drawings
Fig. 1 is a schematic diagram of a three-dimensional facial reconstruction method in an embodiment of the present invention.
Fig. 2(a)-Fig. 2(d) are schematic diagrams of the rotation of the target face in an embodiment of the present invention.
Fig. 3 is a schematic diagram of the landmark points of the target face in an embodiment of the present invention.
Fig. 4 is a schematic diagram of the method for optimizing the original target face three-dimensional point cloud model in an embodiment of the present invention.
Fig. 5 is a schematic diagram of the coarse point cloud matching of the point cloud model of the target face in an embodiment of the present invention.
Fig. 6 is a schematic diagram of the method for fine registration of the coarsely registered target face three-dimensional point cloud model in an embodiment of the present invention.
Fig. 7 is a schematic diagram of the method for meshing, mesh optimization, and texture enhancement of the finely registered target face three-dimensional point cloud model in an embodiment of the present invention.
Fig. 8 is a schematic diagram of the method for producing the texture map from the optimized target face mesh model and the correspondence between the color images and the depth images in an embodiment of the present invention.
Fig. 9 is a schematic diagram of the target face mesh model in an embodiment of the present invention.
Fig. 10 is a schematic diagram of a three-dimensional facial reconstruction system in an embodiment of the present invention.
Specific embodiments
To make the technical problems to be solved, the technical solutions, and the beneficial effects of the embodiments of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it.
It should be noted that when an element is referred to as being "fixed to" or "arranged on" another element, it can be directly or indirectly on that other element. When an element is referred to as being "connected to" another element, it can be directly or indirectly connected to that other element. The connection may serve for fixation or for circuit communication.
It is to be understood that terms such as "length", "width", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", and "outer" indicate orientations or positional relationships based on the drawings, are used only for convenience and simplicity of description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they are therefore not to be construed as limiting the invention.
In addition, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more such features. In the description of the embodiments of the present invention, "plurality" means two or more, unless specifically defined otherwise.
Explanation of terms:
Depth image: in 3D computer graphics, a depth map is an image or image channel containing information about the distance from a viewpoint to the surfaces of scene objects. A depth map is similar to a grayscale image, except that each pixel value is the actual distance from the sensor to the object. The RGB image and the depth image acquired by an RGB-D camera are usually registered, so there is a one-to-one correspondence between their pixels.
Point cloud registration: point cloud registration finds the coordinate transformation between point clouds. In practical projects, the point clouds reconstructed for an object are often incomplete or misaligned; to obtain a complete data model, the individual point clouds must be merged into the same coordinate system to produce a complete point cloud model, a process realized by point cloud registration. Point cloud registration unifies point cloud data from different viewing angles into a specified coordinate system through a rigid transformation of rotation and translation. The process outputs a rotation matrix R and a translation matrix T, and the RT transformation brings the source point cloud into coincidence with the target point cloud.
Principal component analysis (PCA): a statistical method. Through an orthogonal transformation, a set of possibly correlated variables is converted into a set of linearly uncorrelated variables called principal components. In practice, many related variables (or factors) are often proposed for a comprehensive analysis, since each variable reflects some information about the subject to a varying degree. Principal component analysis was first introduced by Karl Pearson for non-random variables; H. Hotelling later generalized the method to random vectors. The amount of information is usually measured by the sum of squared deviations or the variance.
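The variance-maximizing directions that PCA finds can be illustrated in a few lines of NumPy. The `principal_components` helper below is a generic textbook sketch (eigen-decomposition of the sample covariance), not code from the patent:

```python
import numpy as np

def principal_components(points):
    """Return the eigenvalues and eigenvectors of the covariance of a point
    set, sorted by decreasing variance (a minimal PCA sketch)."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / (len(points) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]
    return eigvals[order], eigvecs[:, order]

# Points scattered mostly along the x-axis: the first principal axis
# should be close to [1, 0] (or [-1, 0]; the sign is arbitrary).
pts = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, -0.1], [3.0, 0.0]])
vals, vecs = principal_components(pts)
```

In the normal-computation step of this patent, the same decomposition applied to a point's 3D neighborhood yields the eigenvector of smallest eigenvalue as the surface normal.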
Texture mapping: in computer graphics, the process of mapping an image (a texture) onto the surface of a 3D object is called texture mapping. Texture mapping provides rich detail for the object and simulates a complex appearance.
Embodiment 1
As shown in Fig. 1, the present invention provides a three-dimensional facial reconstruction method, including the following steps:
S1: acquire color images and depth images of a target face from at least two viewing angles, and judge whether the color images and the depth images are aligned;
S2: if the color images and the depth images are aligned, extract the two-dimensional coordinates of the landmark points of the target face in the color images by deep learning, and obtain the three-dimensional coordinates of the landmark points according to the correspondence between the color images and the depth images;
S3: estimate the position of the target face from the three-dimensional coordinates of its landmark points and set a depth-value threshold, so as to filter the point cloud model of the target face out of the original target face three-dimensional point cloud model, where the original model is obtained from the color images and the depth images;
S4: perform coarse point cloud matching on the differently oriented point clouds in the point cloud model of the target face to obtain a coarsely registered target face three-dimensional point cloud model;
S5: perform fine registration on the coarsely registered model to obtain a finely registered target face three-dimensional point cloud model;
S6: perform fusion de-duplication, meshing, and mesh smoothing on the finely registered model, then perform texture enhancement to obtain an optimized target face mesh model;
S7: produce a texture map from the optimized target face mesh model and the correspondence between the color images and the depth images, and apply texture mapping to the optimized mesh model according to the texture map, obtaining the final target face mesh model.
As shown in Fig. 2(a)-Fig. 2(d), in an embodiment of the present invention, an RGB-D camera is used to acquire the color images and depth images of the target face from at least two viewing angles at equal angular intervals. In this embodiment the depth images and color images are acquired synchronously by the RGB-D camera, where the depth camera may be based on structured light, time-of-flight (TOF), or a similar measurement principle. The RGB-D camera is placed directly in front of the face, with the distance adjusted to lie within the optimal acquisition range of the depth camera. The target face initially faces the camera, then rotates in one direction by no more than 60°, returns to frontal, and then rotates through the same angular range in the other direction. During rotation, one group of images is acquired roughly every 15°, about 12 groups in total, while ensuring that every region of the face is clearly captured. It will be appreciated that, in one embodiment, the more groups of images are acquired, the more accurate the reconstructed three-dimensional face model.
It will be appreciated that what matters in the present invention is the relative angle between the camera and the target face: either the target face or the camera may rotate, as long as there is relative rotation between them.
Several groups of images from the rotation are chosen at equal angular intervals, and the color images are mapped onto the depth images according to the extrinsic parameters of the RGB-D camera to judge their alignment. If they are not aligned, the images are re-acquired; after repeated failures, the camera extrinsic parameters must be verified. If they are aligned, the next step proceeds.
Obtaining the three-dimensional coordinates of the landmark points of the target face according to the correspondence between the color images and the depth images proceeds as follows. Let the intrinsic parameters of the depth camera in the RGB-D camera be the pixel focal lengths fx, fy and the principal point offsets u0, v0. Then the transformation that reconstructs the three-dimensional coordinates [xc, yc, zc] in the depth camera coordinate system from an image point [u, v] of the depth image is:
xc = zc · (u − u0) / fx
yc = zc · (v − v0) / fy
zc = zc
where zc is the depth value at the pixel of the depth image.
As shown in Fig. 3, the coordinates of 68 landmark points of the target face are extracted from the color image by the deep learning algorithm of dlib, forming the landmark point set of the target face. The pixel coordinates of the landmark points and the correspondence between the depth image and the color image are then obtained; according to the mapping between the color image and the depth image, the landmark pixel coordinates are converted into pixel coordinates on the depth image, and the transformation above is applied to compute the three-dimensional coordinates of the landmark points of the target face.
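The back-projection formulas above translate directly into code. The following is a minimal sketch; the intrinsic values and the landmark pixel used here are hypothetical placeholders, since the patent does not fix concrete camera parameters:

```python
import numpy as np

def backproject(u, v, z, fx, fy, u0, v0):
    """Back-project a depth-image pixel (u, v) with depth z into the depth
    camera coordinate system, per xc = z*(u-u0)/fx, yc = z*(v-v0)/fy, zc = z."""
    xc = z * (u - u0) / fx
    yc = z * (v - v0) / fy
    return np.array([xc, yc, z])

# Hypothetical intrinsics and one landmark pixel, purely for illustration.
fx, fy, u0, v0 = 500.0, 500.0, 320.0, 240.0
p = backproject(420.0, 240.0, 1000.0, fx, fy, u0, v0)  # depth in mm
```

Applying this to each of the 68 landmark pixels (after mapping them onto the depth image) yields the 3D landmark coordinates used by the coarse registration step.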
As shown in Fig. 4, the original target face three-dimensional point cloud model needs to be optimized, because it contains point cloud information that does not belong to the target face, such as background. In this embodiment these points are first rejected in step S3; at the same time, to guarantee the progress of the subsequent steps and the quality of the point cloud, normal computation and denoising are performed on the point cloud, specifically:
S31: perform normal computation on the point cloud data in the point cloud model of the target face: generate a K-D tree from the point cloud of the model and establish the topology of the point cloud; search the neighborhood of each point, reduce the neighborhood points to a two-dimensional plane by principal component analysis, and take the normal of that plane as the normal of the point; then orient the normal according to the position of the depth camera, completing the normal computation;
S32: for each point in the point cloud model of the target face, compute the mean of its distances to the points in its neighborhood; if the mean exceeds a preset threshold, the point is judged to be noise and removed.
At this point, a filtered point cloud model of the target face containing only target face information has been obtained.
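The distance-based denoising of step S32 can be sketched as follows. This is a brute-force illustration (in practice the K-D tree from step S31 would supply the neighborhoods), and the `k` and `threshold` values are invented for the example:

```python
import numpy as np

def remove_outliers(points, k=3, threshold=1.0):
    """Statistical outlier removal sketch: for each point, average the
    distances to its k nearest neighbours; drop points whose mean
    neighbour distance exceeds the threshold."""
    # Full pairwise distance matrix (brute force, fine for a small cloud).
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    # After sorting each row, column 0 is the self-distance (0); skip it.
    mean_knn = np.sort(dists, axis=1)[:, 1:k + 1].mean(axis=1)
    return points[mean_knn <= threshold]

# A tight cluster plus one far-away noise point; the noise point's mean
# neighbour distance is large, so it is removed.
cloud = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0],
                  [0.1, 0.1, 0], [5, 5, 5]], dtype=float)
clean = remove_outliers(cloud, k=3, threshold=0.5)
```

A real implementation would pick the threshold from the statistics of the whole cloud (e.g. mean plus a multiple of the standard deviation) rather than a fixed constant.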
As shown in Fig. 5, the application next computes a rotation matrix R and a translation matrix T from the three-dimensional coordinates of the landmark points of the point cloud model of the target face, and then performs a first registration of the point cloud model; this is the coarse point cloud matching of the application, and it specifically includes the following steps:
S41: take the point cloud formed by the three-dimensional coordinates of the landmark points of the target face in the previous frame as the reference point cloud Pr, and the point cloud formed by the three-dimensional coordinates of the landmark points of the target face in the following frame as the target point cloud Pt; compute the rotation matrix R and translation matrix T from the landmark points shared by Pr and Pt.
In an embodiment of the present invention, the coarse registration determines, by SVD decomposition, the rotation matrix R and translation matrix T that register the point cloud of the following frame to that of the previous frame.
S42: register all point clouds in the point cloud model of the target face onto one designated point cloud according to the registration equation Pr = R·Pt + T, obtaining the coarsely registered target face three-dimensional point cloud model, where the designated point cloud is the frontal face point cloud.
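Solving Pr = R·Pt + T from shared landmark correspondences by SVD is the classical Kabsch construction; the sketch below assumes that is the decomposition intended, and the sample landmark coordinates are invented for illustration:

```python
import numpy as np

def rigid_transform(pt, pr):
    """Least-squares R, T with pr ≈ R @ pt + T from corresponding points
    (one point per row), via SVD of the cross-covariance matrix."""
    ct, cr = pt.mean(axis=0), pr.mean(axis=0)
    H = (pt - ct).T @ (pr - cr)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    T = cr - R @ ct
    return R, T

# A known rotation (90° about z) and translation applied to sample
# "landmarks"; the estimator should recover them exactly.
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t = np.array([1.0, 2.0, 3.0])
pt = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
pr = pt @ Rz.T + t
R, T = rigid_transform(pt, pr)
```

With exact landmark correspondences the recovery is exact; with noisy landmarks it is the least-squares optimum, which is what makes it suitable as a coarse initialization.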
As shown in fig. 6, on the basis of the target face three-dimensional point cloud model of rough registration, then pass through target face three-dimensional point
The all the points cloud computing spin matrix and translation matrix of cloud model, then the point cloud model of the target face of rough registration carries out primary
Registration obtains the target face three-dimensional point cloud model of accuracy registration, specifically comprises the following steps:
S51: taking the target face three-dimensional point cloud model of the earlier frame in the coarsely registered target face three-dimensional point cloud model as the reference point cloud, and that of the later frame as the target point cloud; for each point in the target point cloud, matching the closest point in the reference point cloud, and solving the rigid transformation that minimizes the root-mean-square distance of the matched corresponding points, to obtain an initial translation matrix and an initial rotation vector; transforming the target point cloud using the initial translation matrix and the initial rotation vector, and iterating the optimization until the iteration error is less than a set threshold, to obtain the registered rotation matrix and translation matrix;
S52: registering the coarsely registered target face three-dimensional point cloud model according to the registered rotation matrix and translation matrix, in combination with the registration equation, to obtain the finely registered target face three-dimensional point cloud model.
In an embodiment of the present invention, for each point in the target point cloud, the closest point in the reference point cloud is matched, the rigid transformation minimizing the root-mean-square distance of these corresponding points is solved, and the translation and rotation parameters are obtained. The target point cloud is transformed with the obtained transformation matrix, and the optimization iterates until the iteration error falls below the set threshold, whereupon the iteration ends and the final accurate rotation matrix and translation matrix are obtained.
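The fine registration of S51–S52 is, in essence, point-to-point ICP. A compact sketch assuming NumPy and SciPy's `cKDTree` for closest-point matching (the function name, iteration cap, and convergence test are illustrative, not from the application):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(reference, target, iters=50, tol=1e-8):
    """Minimal point-to-point ICP registering `target` onto `reference`.
    Returns the accumulated rotation matrix and translation vector."""
    tree = cKDTree(reference)
    R_acc, T_acc = np.eye(3), np.zeros(3)
    moved = target.copy()
    prev_err = np.inf
    for _ in range(iters):
        _, idx = tree.query(moved)                  # closest-point matching
        matched = reference[idx]
        cm, cr = moved.mean(0), matched.mean(0)
        # rigid transform minimizing the RMS distance of the matched pairs
        U, _, Vt = np.linalg.svd((moved - cm).T @ (matched - cr))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        T = cr - R @ cm
        moved = moved @ R.T + T
        R_acc, T_acc = R @ R_acc, R @ T_acc + T     # accumulate the transform
        err = np.mean(np.linalg.norm(moved - matched, axis=1))
        if abs(prev_err - err) < tol:               # iteration-error threshold
            break
        prev_err = err
    return R_acc, T_acc
```

A production implementation would add outlier rejection and correspondence weighting; the sketch shows only the iterate-match-solve loop the embodiment describes.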
When registering frame by frame, since each later frame's point cloud is registered to the previous frame, every pair of registered point clouds introduces some deviation, so the deviation between the last point cloud and the first frame accumulates. The accumulated error can be dispersed using a point-cloud fitting method, yielding a more accurate registration result.
The accumulated error between the registered point clouds is distributed over every point cloud to obtain a more accurate registration result; in addition, the normals of the coarsely registered point clouds need to be recomputed before the registration of step S07.2.
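Recomputing the point normals after coarse registration can follow the PCA scheme this application uses for normal computation: fit a plane to each point's neighborhood and orient the plane normal toward the depth camera. A sketch assuming a k-nearest-neighbor neighborhood (the function name and the parameter `k` are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, camera_pos, k=10):
    """Per-point normals via PCA of the k-nearest neighborhood,
    oriented toward the depth-camera position `camera_pos`."""
    tree = cKDTree(points)                    # topology via a K-D tree
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points, dtype=float)
    for i, nbrs in enumerate(idx):
        centered = points[nbrs] - points[nbrs].mean(axis=0)
        # eigenvector of the smallest singular value = fitted-plane normal
        _, _, Vt = np.linalg.svd(centered, full_matrices=False)
        n = Vt[-1]
        if np.dot(n, camera_pos - points[i]) < 0:
            n = -n                            # orient toward the camera
        normals[i] = n
    return normals
```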
As shown in Fig. 7, performing meshing, mesh optimization and texture enhancement on the finely registered target face three-dimensional point cloud model comprises the following steps:
S61: performing fusion de-duplication on the finely registered target face three-dimensional point cloud model, to form a fused target face three-dimensional point cloud model;
In a specific embodiment, a suitable distance threshold can be set according to the error distance of the fine registration: the Euclidean distances between points of the finely registered point clouds are compared, and a point is deleted as redundant if its distance is less than the threshold.
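The fusion de-duplication of S61 can be sketched as a greedy radius filter, with the radius taken from the fine-registration error (function and parameter names are illustrative assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

def fuse_deduplicate(points, threshold):
    """Greedy fusion de-duplication: scan the cloud once and drop any
    point lying within `threshold` of an earlier kept point."""
    tree = cKDTree(points)
    keep = np.ones(len(points), dtype=bool)
    for i in range(len(points)):
        if not keep[i]:
            continue                          # already merged away
        for j in tree.query_ball_point(points[i], threshold):
            if j > i:
                keep[j] = False               # redundant duplicate of point i
    return points[keep]
```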
S62: performing Poisson reconstruction on the fused target face three-dimensional point cloud model, to generate a target face mesh model;
In an embodiment of the present invention, the point set is specifically stored in an octree structure: the octree is defined according to the positions of the sample point set, and is then subdivided so that each sample point falls in a leaf node of depth D; a space function F is attached to each octree node, a linear combination of all node functions F can represent the vector field V, and the basis function F uses the n-fold convolution of a box filter; under uniform sampling, assuming the divided blocks are constant, the gradient of the indicator function is approximated by the vector field V using cubic spline interpolation; the solution of the Poisson equation is found by iterating with the Laplacian matrix; finally, the positions of the sample points are estimated, their average value is used for iso-surface extraction, and the iso-surface is obtained with the marching cubes algorithm.
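The Poisson step above can be summarized compactly (the notation here follows the standard Poisson surface reconstruction formulation, not the application's wording): the oriented samples define a vector field $\vec V$ approximating the gradient of the indicator function $\chi$, and the reconstruction solves

```latex
% least-squares fit of the indicator gradient to the normal field,
% equivalent to a Poisson equation:
\min_{\chi}\,\bigl\lVert \nabla\chi - \vec V \bigr\rVert^{2}
\;\Longleftrightarrow\;
\Delta\chi = \nabla\cdot\vec V,
\qquad
\text{surface} = \{\, q : \chi(q) = \bar\gamma \,\},\quad
\bar\gamma = \tfrac{1}{N}\sum_{i}\chi(p_i)
```

i.e. the iso-value $\bar\gamma$ is the average of $\chi$ at the sample positions, and the iso-surface is then extracted with marching cubes, exactly as the embodiment describes.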
S63: performing mesh fairing on the target face mesh model;
In an embodiment of the invention, specifically, the modulus of the Laplacian coordinate δ of the reconstructed mesh model is shrunk while its direction is kept unchanged, and Laplacian mesh reconstruction is performed to fair the mesh.
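The fairing of S63 can be illustrated with the simple umbrella-operator form of Laplacian smoothing, where each vertex moves a fraction of its Laplacian coordinate δ toward the neighborhood centroid (a simplification of the δ-shrinking scheme described above; names and parameters are illustrative):

```python
import numpy as np

def laplacian_smooth(vertices, faces, lam=0.5, iters=10):
    """Umbrella-operator Laplacian smoothing of a triangle mesh.

    vertices: (n, 3) array; faces: iterable of (a, b, c) index triples.
    Each iteration moves every vertex by lam * delta, where delta is the
    Laplacian coordinate (neighborhood centroid minus the vertex).
    """
    n = len(vertices)
    nbrs = [set() for _ in range(n)]          # adjacency from the faces
    for a, b, c in faces:
        nbrs[a] |= {b, c}
        nbrs[b] |= {a, c}
        nbrs[c] |= {a, b}
    v = np.asarray(vertices, dtype=float).copy()
    for _ in range(iters):
        centroids = np.array([v[list(nb)].mean(axis=0) if nb else v[i]
                              for i, nb in enumerate(nbrs)])
        v += lam * (centroids - v)            # shrink along delta
    return v
```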
S64: optimizing the depth image of the first-frame frontal face based on the acquired first-frame frontal color image, and performing texture enhancement on the faired target face mesh model, to obtain the optimized target face mesh model.
In an embodiment of the present invention, on the premises that the object surface is Lambertian and the incident light is modeled by spherical harmonics, illumination estimation is carried out to compute the illumination coefficients and the surface reflectance; then, for the shading-based refinement of each pixel's depth value, an error equation is established from a gradient constraint, a smoothness constraint and a depth constraint, and the depth is optimized directly. The optimized depth map corrects the vertex coordinates of the frontal mesh, giving the target face mesh model after final detail optimization.
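The depth optimization of S64 combines a gradient constraint, a smoothness constraint and a depth constraint in one least-squares system. A one-dimensional toy version of that system follows (the weights, names and the 1-D restriction are illustrative assumptions; the embodiment solves the 2-D, per-pixel analogue with shading-derived gradients):

```python
import numpy as np

def refine_depth(depth, shading_grad, w_grad=1.0, w_smooth=0.5, w_depth=0.1):
    """Least-squares depth refinement on a 1-D scanline.

    depth: (n,) measured depths; shading_grad: (n-1,) target gradients
    (in the application, derived from the shading of the color image).
    Stacks three constraint families into one linear system A z = b.
    """
    n = len(depth)
    rows, cols, vals, rhs = [], [], [], []
    eq = 0
    for i in range(n - 1):        # gradient constraint: z[i+1]-z[i] = g[i]
        rows += [eq, eq]; cols += [i + 1, i]; vals += [w_grad, -w_grad]
        rhs.append(w_grad * shading_grad[i]); eq += 1
    for i in range(1, n - 1):     # smoothness: z[i-1]-2 z[i]+z[i+1] = 0
        rows += [eq, eq, eq]; cols += [i - 1, i, i + 1]
        vals += [w_smooth, -2 * w_smooth, w_smooth]
        rhs.append(0.0); eq += 1
    for i in range(n):            # depth constraint: stay near measured depth
        rows += [eq]; cols += [i]; vals += [w_depth]
        rhs.append(w_depth * depth[i]); eq += 1
    A = np.zeros((eq, n))
    A[rows, cols] = vals
    z, *_ = np.linalg.lstsq(A, np.array(rhs), rcond=None)
    return z
```

With a strong gradient term and a weak depth term, the refined depth follows the shading-derived gradients while the measurements only anchor the overall level, which is the intent of the constraint weighting described above.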
In an embodiment of the present invention, the colors of the different color images are fused to obtain the texture map. The point cloud corresponding to the mesh is aligned with the color images, so the correspondence between mesh vertices and color-image pixels, i.e. the point-to-pixel correspondence, can be obtained. When image data is acquired from multiple views, there are generally overlapping regions between frames; a mesh vertex in an overlapping region corresponds to several images, and its corresponding pixel is determined from the neighborhood information of the mesh vertex and the estimated position of the vertex relative to each camera, removing the color difference at image seams. Where color differences remain between different images, color-blending correction is performed directly on the generated texture map to obtain the final texture map.
As shown in Fig. 8, making the texture map according to the correspondence between the optimized target face mesh model and the color images and the depth images comprises:
S71: obtaining the correspondence between mesh vertices of the optimized target face mesh model and color-image pixels;
S72: on the basis of the correspondence, estimating, from the neighborhood information of the mesh vertex, the position of the mesh vertex relative to each depth camera at acquisition time, to determine the pixel corresponding to the mesh vertex;
S73: making an initial texture map according to the pixels;
S74: removing the color difference at image seams on the initial texture map and performing color-blending correction, to obtain the final texture map.
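The vertex-to-pixel correspondence of S71–S72 follows from the pinhole model whose intrinsics are given in claim 3: projecting a mesh vertex back into the image is the inverse of the depth back-projection. A sketch (the function name is illustrative):

```python
def project_vertex(vertex, fx, fy, u0, v0):
    """Project a 3-D mesh vertex (in camera coordinates) to its color-image
    pixel, using pixel focal lengths fx, fy and principal point u0, v0.
    Inverse of xc = zc*(u-u0)/fx, yc = zc*(v-v0)/fy from claim 3."""
    x, y, z = vertex
    u = fx * x / z + u0
    v = fy * y / z + v0
    return u, v
```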
As shown in Fig. 9, the final texture map is wrapped onto the surface mesh, giving the final target face mesh model.
As shown in Figure 10, the present invention also provides a three-dimensional face reconstruction system, comprising:
First unit: obtaining color images and depth images of a target face from at least two views, and judging whether the color images and the depth images are aligned;
Second unit: when the color image and the depth image are aligned, extracting the two-dimensional coordinates of the index points of the target face in the color image by deep learning, and obtaining the three-dimensional coordinates of the index points of the target face according to the correspondence between the color image and the depth image;
Third unit: estimating the position of the target face according to the three-dimensional coordinates of the index points of the target face and setting a depth-value threshold, so as to filter the point cloud model of the target face out of the original target face three-dimensional point cloud model, wherein the original target face three-dimensional point cloud model is the model obtained from the color images and the depth images;
Fourth unit: performing coarse point cloud registration on the point cloud models of different orientations in the point cloud model of the target face, to obtain the coarsely registered target face three-dimensional point cloud model;
Fifth unit: performing fine registration on the coarsely registered target face three-dimensional point cloud model, to obtain the finely registered target face three-dimensional point cloud model;
Sixth unit: performing fusion de-duplication, meshing and mesh fairing on the finely registered target face three-dimensional point cloud model, and performing texture enhancement to obtain the optimized target face mesh model;
Seventh unit: making a texture map according to the correspondence between the optimized target face mesh model and the color images and the depth images, and performing texture mapping on the optimized target face mesh model according to the texture map, to obtain the final target face mesh model.
The three-dimensional face reconstruction system of this embodiment comprises a processor, a memory, and a computer program stored in the memory and runnable on the processor, for example a program that obtains color images and depth images of the target face from at least two views and judges whether the color images and the depth images are aligned. When executing the computer program, the processor implements the steps of each three-dimensional face reconstruction method embodiment above, such as steps S1–S7 shown in Fig. 1. Alternatively, when executing the computer program, the processor implements the functions of each unit in each apparatus embodiment above, for example the first unit: obtaining color images and depth images of the target face from at least two views, and judging whether the color images and the depth images are aligned.
The division of the computer program into the seven units above is merely exemplary. In practice it may be divided into one or more units, which are stored in the memory and executed by the processor to carry out the present invention. The one or more units may be series of computer program instruction segments capable of completing specific functions, the instruction segments describing the execution of the computer program in the three-dimensional face reconstruction system.
The three-dimensional face reconstruction system may also be a computing device such as a desktop computer, a notebook, a palmtop computer or a cloud server, and may include, but is not limited to, a processor and a memory. Those skilled in the art will understand that the diagram is merely an example of a three-dimensional face reconstruction system and does not constitute a limitation on it; the system may include more or fewer components than illustrated, combine certain components, or use different components; for example, the three-dimensional face reconstruction system may also include input/output devices, network access devices, buses, and the like.
The processor may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, etc. The processor is the control center of the three-dimensional face reconstruction system, and uses various interfaces and lines to connect the various parts of the whole system.
The memory may be used to store the computer program and/or modules; the processor implements the various functions of the three-dimensional face reconstruction system by running or executing the computer program and/or modules stored in the memory and by calling the data stored in the memory. The memory may mainly comprise a program storage area and a data storage area, wherein the program storage area may store an operating system and the application programs required for at least one function (for example a sound-playing function, an image-playing function, etc.), and the data storage area may store data created according to the use of the device (such as audio data, a phone book, etc.). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, internal memory, a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, a flash card (Flash Card), at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.
The present invention may also implement all or part of the processes in the methods of the above embodiments by instructing relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, can implement the steps of each method embodiment above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, or certain intermediate forms, etc. The computer-readable medium may include any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electric carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in certain jurisdictions, according to legislation and patent practice, computer-readable media do not include electric carrier signals and telecommunication signals.
It should be understood that the present application performs model reconstruction with multi-frame point clouds, which differs from prior-art methods that build a face template by machine learning, and gives a more realistic effect; obtaining three-dimensional face index points from two-dimensional face index points realizes the coarse registration of face models from different views, and the combination of coarse registration and fine registration is better suited to multi-frame point cloud registration; detail enhancement of the mesh model based on the color images differs from prior-art models reconstructed directly from depth maps and face models, and yields a better detail effect.
Unlike most face three-dimensional point cloud models reconstructed with a depth camera alone, the present invention relies on an RGB-D camera and acquires face data from multiple views; on the basis of the reconstructed face three-dimensional point cloud model, the point cloud is meshed and texture-mapped to obtain a face model close to the real face. Meanwhile, unlike face models currently reconstructed by deep-learning training, the model reconstructed by the present invention is of higher quality and closer to the true face.
The above content is a further detailed description of the present invention in conjunction with specific preferred embodiments, and the specific implementation of the present invention shall not be considered limited to these descriptions. For those of ordinary skill in the art to which the present invention belongs, several equivalent substitutions or obvious variations of identical performance or use may be made without departing from the concept of the present invention, and all of them shall be considered to fall within the protection scope of the present invention.
Claims (10)
1. A three-dimensional face reconstruction method, characterized by comprising the following steps:
S1: obtaining color images and depth images of a target face from at least two views, and judging whether the color images and the depth images are aligned;
S2: when the color image and the depth image are aligned, extracting the two-dimensional coordinates of the index points of the target face in the color image by deep learning, and obtaining the three-dimensional coordinates of the index points of the target face according to the correspondence between the color image and the depth image;
S3: estimating the position of the target face according to the three-dimensional coordinates of the index points of the target face and setting a depth-value threshold, so as to filter the point cloud model of the target face out of an original target face three-dimensional point cloud model, wherein the original target face three-dimensional point cloud model is the model obtained from the color images and the depth images;
S4: performing coarse point cloud registration on the point cloud models of different orientations in the point cloud model of the target face, to obtain a coarsely registered target face three-dimensional point cloud model;
S5: performing fine registration on the coarsely registered target face three-dimensional point cloud model, to obtain a finely registered target face three-dimensional point cloud model;
S6: performing fusion de-duplication, meshing and mesh fairing on the finely registered target face three-dimensional point cloud model, and performing texture enhancement to obtain an optimized target face mesh model;
S7: making a texture map according to the correspondence between the optimized target face mesh model and the color images and the depth images, and performing texture mapping on the optimized target face mesh model according to the texture map, to obtain a final target face mesh model.
2. The three-dimensional face reconstruction method of claim 1, characterized in that obtaining the color images and depth images of the target face from at least two views in step S1 comprises:
using an RGB-D camera to acquire, at equal angular intervals, the color images and the depth images of the target face from at least two views.
3. The three-dimensional face reconstruction method of claim 1, characterized in that obtaining the three-dimensional coordinates of the index points of the target face according to the correspondence between the color image and the depth image in step S2 comprises:
the intrinsic parameters of the depth camera in the RGB-D camera are: pixel focal lengths fx, fy and principal point offsets u0, v0; then the transformation between an image point [u, v] of the depth image and the three-dimensional coordinates [xc, yc, zc] reconstructed in the depth camera coordinate system is:
xc = zc * (u - u0) / fx
yc = zc * (v - v0) / fy
zc = zc
wherein zc is the depth value at that pixel of the depth image.
4. The three-dimensional face reconstruction method of claim 1, characterized in that step S3 further comprises:
S31: performing normal computation on the point cloud data in the point cloud model of the target face: generating a K-D tree from the point clouds in the point cloud model of the target face, and establishing the topological relation of the point clouds; searching the neighborhood of any point, reducing the neighborhood points of that point to a two-dimensional plane by principal component analysis, and setting the normal of the two-dimensional plane as the normal of that point; then orienting the normal according to the position of the depth camera, completing the normal computation;
S32: traversing the points of the point cloud model of the target face, computing the mean distance from each point to the points in its neighborhood, and judging whether the mean exceeds a preset threshold; if so, the point is determined to be noise and removed.
5. The three-dimensional face reconstruction method of claim 1, characterized in that step S4 comprises the following steps:
S41: taking the point cloud formed by the three-dimensional coordinates of the index points of the target face of the earlier frame in the point cloud model of the target face as a reference point cloud Pr, and the point cloud formed by the three-dimensional coordinates of the index points of the target face of the later frame as a target point cloud Pt, and calculating a rotation matrix R and a translation matrix T from the index points shared by the reference point cloud Pr and the target point cloud Pt;
S42: registering all point clouds in the point cloud model of the target face onto one specified point cloud according to the registration equation Pr = R*Pt + T, to obtain the coarsely registered target face three-dimensional point cloud model, wherein the specified point cloud is the frontal face point cloud.
6. The three-dimensional face reconstruction method of claim 1, characterized in that step S5 comprises the following steps:
S51: taking the target face three-dimensional point cloud model of the earlier frame in the coarsely registered target face three-dimensional point cloud model as a reference point cloud, and that of the later frame as a target point cloud; for each point in the target point cloud, matching the closest point in the reference point cloud, and solving the rigid transformation that minimizes the root-mean-square distance of the matched corresponding points, to obtain an initial translation matrix and an initial rotation vector; transforming the target point cloud using the initial translation matrix and the initial rotation vector, and iterating the optimization until the iteration error is less than a set threshold, to obtain the registered rotation matrix and translation matrix;
S52: registering the coarsely registered target face three-dimensional point cloud model according to the registered rotation matrix and translation matrix, in combination with the registration equation, to obtain the finely registered target face three-dimensional point cloud model.
7. The three-dimensional face reconstruction method of claim 1, characterized in that step S6 comprises the following steps:
S61: performing fusion de-duplication on the finely registered target face three-dimensional point cloud model, to form a fused target face three-dimensional point cloud model;
S62: performing Poisson reconstruction on the fused target face three-dimensional point cloud model, to generate a target face mesh model;
S63: performing mesh fairing on the target face mesh model;
S64: optimizing the depth image of the first-frame frontal face based on the acquired first-frame frontal color image, and performing texture enhancement on the faired target face mesh model, to obtain the optimized target face mesh model.
8. The three-dimensional face reconstruction method of claim 1, characterized in that making the texture map according to the correspondence between the optimized target face mesh model and the color images and the depth images comprises:
S71: obtaining the correspondence between mesh vertices of the optimized target face mesh model and color-image pixels;
S72: on the basis of the correspondence, estimating, from the neighborhood information of the mesh vertex, the position of the mesh vertex relative to each depth camera at acquisition time, to determine the pixel corresponding to the mesh vertex;
S73: making an initial texture map according to the pixels;
S74: removing the color difference at image seams on the initial texture map and performing color-blending correction, to obtain the final texture map.
9. A three-dimensional face reconstruction system, characterized by comprising:
a first unit: obtaining color images and depth images of a target face from at least two views, and judging whether the color images and the depth images are aligned;
a second unit: when the color image and the depth image are aligned, extracting the two-dimensional coordinates of the index points of the target face in the color image by deep learning, and obtaining the three-dimensional coordinates of the index points of the target face according to the correspondence between the color image and the depth image;
a third unit: estimating the position of the target face according to the three-dimensional coordinates of the index points of the target face and setting a depth-value threshold, so as to filter the point cloud model of the target face out of an original target face three-dimensional point cloud model, wherein the original target face three-dimensional point cloud model is the model obtained from the color images and the depth images;
a fourth unit: performing coarse point cloud registration on the point cloud models of different orientations in the point cloud model of the target face, to obtain a coarsely registered target face three-dimensional point cloud model;
a fifth unit: performing fine registration on the coarsely registered target face three-dimensional point cloud model, to obtain a finely registered target face three-dimensional point cloud model;
a sixth unit: performing fusion de-duplication, meshing and mesh fairing on the finely registered target face three-dimensional point cloud model, and performing texture enhancement to obtain an optimized target face mesh model;
a seventh unit: making a texture map according to the correspondence between the optimized target face mesh model and the color images and the depth images, and performing texture mapping on the optimized target face mesh model according to the texture map, to obtain a final target face mesh model.
10. A computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the steps of the method of any one of claims 1 to 8 are implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910524707.5A CN110363858B (en) | 2019-06-18 | 2019-06-18 | Three-dimensional face reconstruction method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910524707.5A CN110363858B (en) | 2019-06-18 | 2019-06-18 | Three-dimensional face reconstruction method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110363858A true CN110363858A (en) | 2019-10-22 |
CN110363858B CN110363858B (en) | 2022-07-01 |
Family
ID=68216329
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910524707.5A Active CN110363858B (en) | 2019-06-18 | 2019-06-18 | Three-dimensional face reconstruction method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110363858B (en) |
Cited By (47)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110827408A (en) * | 2019-10-31 | 2020-02-21 | 上海师范大学 | Real-time three-dimensional reconstruction method based on depth sensor |
CN111063016A (en) * | 2019-12-31 | 2020-04-24 | 螳螂慧视科技有限公司 | Multi-depth lens face modeling method and system, storage medium and terminal |
CN111160208A (en) * | 2019-12-24 | 2020-05-15 | 河南中原大数据研究院有限公司 | Three-dimensional face super-resolution method based on multi-frame point cloud fusion and variable model |
CN111192201A (en) * | 2020-04-08 | 2020-05-22 | 腾讯科技(深圳)有限公司 | Method and device for generating face image and training model thereof, and electronic equipment |
CN111199579A (en) * | 2020-01-02 | 2020-05-26 | 腾讯科技(深圳)有限公司 | Method, device, equipment and medium for building three-dimensional model of target object |
CN111210510A (en) * | 2020-01-16 | 2020-05-29 | 腾讯科技(深圳)有限公司 | Three-dimensional face model generation method and device, computer equipment and storage medium |
CN111243093A (en) * | 2020-01-07 | 2020-06-05 | 腾讯科技(深圳)有限公司 | Three-dimensional face grid generation method, device, equipment and storage medium |
CN111402394A (en) * | 2020-02-13 | 2020-07-10 | 清华大学 | Three-dimensional exaggerated cartoon face generation method and device |
CN111507340A (en) * | 2020-04-16 | 2020-08-07 | 北京深测科技有限公司 | Target point cloud data extraction method based on three-dimensional point cloud data |
CN111612912A (en) * | 2020-05-26 | 2020-09-01 | 广州纳丽生物科技有限公司 | Rapid three-dimensional reconstruction and optimization method based on Kinect2 camera face contour point cloud model |
CN111612920A (en) * | 2020-06-28 | 2020-09-01 | 广州欧科信息技术股份有限公司 | Method and equipment for generating point cloud three-dimensional space image |
CN111710035A (en) * | 2020-07-16 | 2020-09-25 | 腾讯科技(深圳)有限公司 | Face reconstruction method and device, computer equipment and storage medium |
CN111739167A (en) * | 2020-06-16 | 2020-10-02 | 北京百度网讯科技有限公司 | 3D human head reconstruction method, device, equipment and medium |
CN111753712A (en) * | 2020-06-22 | 2020-10-09 | 中国电力科学研究院有限公司 | Method, system and equipment for monitoring safety of power production personnel |
CN112002014A (en) * | 2020-08-31 | 2020-11-27 | 中国科学院自动化研究所 | Three-dimensional face reconstruction method, system and device for fine structure |
CN112085835A (en) * | 2020-08-31 | 2020-12-15 | 腾讯科技(深圳)有限公司 | Three-dimensional cartoon face generation method and device, electronic equipment and storage medium |
CN112200905A (en) * | 2020-10-15 | 2021-01-08 | 革点科技(深圳)有限公司 | Three-dimensional face completion method |
2019-06-18 - Application CN201910524707.5A filed; granted as patent CN110363858B (legal status: Active)
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20130089464A (en) * | 2012-02-02 | 2013-08-12 | 한국과학기술연구원 | Method for reconstructing three dimensional facial shape |
CN103927747A (en) * | 2014-04-03 | 2014-07-16 | 北京航空航天大学 | Face matching space registration method based on human face biological characteristics |
CN106327571A (en) * | 2016-08-23 | 2017-01-11 | 北京的卢深视科技有限公司 | Three-dimensional face modeling method and three-dimensional face modeling device |
CN106709947A (en) * | 2016-12-20 | 2017-05-24 | 西安交通大学 | RGBD camera-based three-dimensional human body rapid modeling system |
CN106920274A (en) * | 2017-01-20 | 2017-07-04 | 南京开为网络科技有限公司 | Mobile terminal 2D key points rapid translating is the human face model building of 3D fusion deformations |
CN108154550A (en) * | 2017-11-29 | 2018-06-12 | 深圳奥比中光科技有限公司 | Face real-time three-dimensional method for reconstructing based on RGBD cameras |
CN109087386A (en) * | 2018-06-04 | 2018-12-25 | 成都通甲优博科技有限责任公司 | A kind of face three-dimensional rebuilding method and system comprising dimensional information |
CN109087388A (en) * | 2018-07-12 | 2018-12-25 | 南京邮电大学 | Object dimensional modeling method based on depth transducer |
CN109377551A (en) * | 2018-10-16 | 2019-02-22 | 北京旷视科技有限公司 | A kind of three-dimensional facial reconstruction method, device and its storage medium |
CN109472820A (en) * | 2018-10-19 | 2019-03-15 | 清华大学 | Monocular RGB-D camera real-time face method for reconstructing and device |
Non-Patent Citations (4)
Title |
---|
LETICIA LÓPEZ ET AL.: "Comparative morphometry of facial surface models obtained from a stereo vision system in a healthy population", 《AIP CONFERENCE PROCEEDINGS》, 17 January 2015 (2015-01-17), pages 77 - 81 * |
WU ZIYANG: "Expression-Robust Three-Dimensional Face Reconstruction and Recognition", China Masters' Theses Full-Text Database (Master), Information Science and Technology, 15 January 2015 (2015-01-15), pages 138 - 1494 *
QI LEYANG: "Research on Key Technologies of Three-Dimensional Face Reconstruction Based on Binocular Stereo Vision", China Masters' Theses Full-Text Database (Master), Information Science and Technology, 15 February 2018 (2018-02-15), pages 138 - 1428 *
MA LI: "Research on Three-Dimensional Reconstruction Technology Based on Binocular Vision", China Masters' Theses Full-Text Database (Master), Information Science and Technology, 15 August 2012 (2012-08-15), pages 138 - 1067 *
Cited By (78)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110827408B (en) * | 2019-10-31 | 2023-03-28 | 上海师范大学 | Real-time three-dimensional reconstruction method based on depth sensor |
CN110827408A (en) * | 2019-10-31 | 2020-02-21 | 上海师范大学 | Real-time three-dimensional reconstruction method based on depth sensor |
CN112220444B (en) * | 2019-11-20 | 2021-06-29 | 北京健康有益科技有限公司 | Pupil distance measuring method and device based on depth camera |
CN112220444A (en) * | 2019-11-20 | 2021-01-15 | 北京健康有益科技有限公司 | Pupil distance measuring method and device based on depth camera |
CN111160208A (en) * | 2019-12-24 | 2020-05-15 | 河南中原大数据研究院有限公司 | Three-dimensional face super-resolution method based on multi-frame point cloud fusion and variable model |
CN111160208B (en) * | 2019-12-24 | 2023-04-07 | 陕西西图数联科技有限公司 | Three-dimensional face super-resolution method based on multi-frame point cloud fusion and variable model |
CN111063016A (en) * | 2019-12-31 | 2020-04-24 | 螳螂慧视科技有限公司 | Multi-depth lens face modeling method and system, storage medium and terminal |
US20220165031A1 (en) * | 2020-01-02 | 2022-05-26 | Tencent Technology (Shenzhen) Company Limited | Method for constructing three-dimensional model of target object and related apparatus |
CN111199579B (en) * | 2020-01-02 | 2023-01-24 | 腾讯科技(深圳)有限公司 | Method, device, equipment and medium for building three-dimensional model of target object |
CN111199579A (en) * | 2020-01-02 | 2020-05-26 | 腾讯科技(深圳)有限公司 | Method, device, equipment and medium for building three-dimensional model of target object |
US12014461B2 (en) * | 2020-01-02 | 2024-06-18 | Tencent Technology (Shenzhen) Company Limited | Method for constructing three-dimensional model of target object and related apparatus |
CN111243093A (en) * | 2020-01-07 | 2020-06-05 | 腾讯科技(深圳)有限公司 | Three-dimensional face grid generation method, device, equipment and storage medium |
CN111210510A (en) * | 2020-01-16 | 2020-05-29 | 腾讯科技(深圳)有限公司 | Three-dimensional face model generation method and device, computer equipment and storage medium |
CN111210510B (en) * | 2020-01-16 | 2021-08-06 | 腾讯科技(深圳)有限公司 | Three-dimensional face model generation method and device, computer equipment and storage medium |
WO2021143282A1 (en) * | 2020-01-16 | 2021-07-22 | 腾讯科技(深圳)有限公司 | Three-dimensional facial model generation method and apparatus, computer device and storage medium |
CN111402394B (en) * | 2020-02-13 | 2022-09-20 | 清华大学 | Three-dimensional exaggerated cartoon face generation method and device |
CN111402394A (en) * | 2020-02-13 | 2020-07-10 | 清华大学 | Three-dimensional exaggerated cartoon face generation method and device |
CN111192201A (en) * | 2020-04-08 | 2020-05-22 | 腾讯科技(深圳)有限公司 | Method and device for generating face image and training model thereof, and electronic equipment |
CN111507340B (en) * | 2020-04-16 | 2023-09-01 | 北京深测科技有限公司 | Target point cloud data extraction method based on three-dimensional point cloud data |
CN111507340A (en) * | 2020-04-16 | 2020-08-07 | 北京深测科技有限公司 | Target point cloud data extraction method based on three-dimensional point cloud data |
CN111612912A (en) * | 2020-05-26 | 2020-09-01 | 广州纳丽生物科技有限公司 | Rapid three-dimensional reconstruction and optimization method based on Kinect2 camera face contour point cloud model |
CN111612912B (en) * | 2020-05-26 | 2024-01-30 | 广州纳丽生物科技有限公司 | Kinect2 camera face contour point cloud model-based rapid three-dimensional reconstruction and optimization method |
CN111739167B (en) * | 2020-06-16 | 2023-10-03 | 北京百度网讯科技有限公司 | 3D human head reconstruction method, device, equipment and medium |
CN111739167A (en) * | 2020-06-16 | 2020-10-02 | 北京百度网讯科技有限公司 | 3D human head reconstruction method, device, equipment and medium |
CN111753712A (en) * | 2020-06-22 | 2020-10-09 | 中国电力科学研究院有限公司 | Method, system and equipment for monitoring safety of power production personnel |
CN111612920A (en) * | 2020-06-28 | 2020-09-01 | 广州欧科信息技术股份有限公司 | Method and equipment for generating point cloud three-dimensional space image |
CN111612920B (en) * | 2020-06-28 | 2023-05-05 | 广州欧科信息技术股份有限公司 | Method and equipment for generating point cloud three-dimensional space image |
US11475624B2 (en) | 2020-06-30 | 2022-10-18 | Beijing Sensetime Technology Development Co., Ltd. | Method and apparatus for generating three-dimensional model, computer device and storage medium |
KR20220006653A (en) * | 2020-06-30 | 2022-01-17 | 베이징 센스타임 테크놀로지 디벨롭먼트 컴퍼니 리미티드 | 3D model creation method, apparatus, computer device and storage medium |
JP2022533464A (en) * | 2020-06-30 | 2022-07-22 | ベイジン・センスタイム・テクノロジー・デベロップメント・カンパニー・リミテッド | Three-dimensional model generation method and apparatus, computer equipment, and storage medium |
WO2022001236A1 (en) * | 2020-06-30 | 2022-01-06 | 北京市商汤科技开发有限公司 | Three-dimensional model generation method and apparatus, and computer device and storage medium |
KR102442486B1 (en) * | 2020-06-30 | 2022-09-13 | 베이징 센스타임 테크놀로지 디벨롭먼트 컴퍼니 리미티드 | 3D model creation method, apparatus, computer device and storage medium |
CN111710035B (en) * | 2020-07-16 | 2023-11-07 | 腾讯科技(深圳)有限公司 | Face reconstruction method, device, computer equipment and storage medium |
CN111710035A (en) * | 2020-07-16 | 2020-09-25 | 腾讯科技(深圳)有限公司 | Face reconstruction method and device, computer equipment and storage medium |
CN112562082A (en) * | 2020-08-06 | 2021-03-26 | 长春理工大学 | Three-dimensional face reconstruction method and system |
CN112085835B (en) * | 2020-08-31 | 2024-03-22 | 腾讯科技(深圳)有限公司 | Three-dimensional cartoon face generation method and device, electronic equipment and storage medium |
CN112002014B (en) * | 2020-08-31 | 2023-12-15 | 中国科学院自动化研究所 | Fine structure-oriented three-dimensional face reconstruction method, system and device |
CN112002014A (en) * | 2020-08-31 | 2020-11-27 | 中国科学院自动化研究所 | Three-dimensional face reconstruction method, system and device for fine structure |
CN112085835A (en) * | 2020-08-31 | 2020-12-15 | 腾讯科技(深圳)有限公司 | Three-dimensional cartoon face generation method and device, electronic equipment and storage medium |
CN112233142A (en) * | 2020-09-29 | 2021-01-15 | 深圳宏芯宇电子股份有限公司 | Target tracking method, device and computer readable storage medium |
CN112200905A (en) * | 2020-10-15 | 2021-01-08 | 革点科技(深圳)有限公司 | Three-dimensional face completion method |
CN112200905B (en) * | 2020-10-15 | 2023-08-22 | 革点科技(深圳)有限公司 | Three-dimensional face complement method |
CN112284291A (en) * | 2020-10-22 | 2021-01-29 | 华中科技大学鄂州工业技术研究院 | Three-dimensional scanning method and device capable of obtaining physical texture |
CN112308963A (en) * | 2020-11-13 | 2021-02-02 | 四川川大智胜软件股份有限公司 | Non-inductive three-dimensional face reconstruction method and acquisition reconstruction system |
CN112308963B (en) * | 2020-11-13 | 2022-11-08 | 四川川大智胜软件股份有限公司 | Non-inductive three-dimensional face reconstruction method and acquisition reconstruction system |
CN112435326A (en) * | 2020-11-20 | 2021-03-02 | 深圳市慧鲤科技有限公司 | Printable model file generation method and related product |
CN112561784B (en) * | 2020-12-17 | 2024-04-09 | 咪咕文化科技有限公司 | Image synthesis method, device, electronic equipment and storage medium |
CN112561784A (en) * | 2020-12-17 | 2021-03-26 | 咪咕文化科技有限公司 | Image synthesis method, image synthesis device, electronic equipment and storage medium |
CN112797916A (en) * | 2020-12-31 | 2021-05-14 | 新拓三维技术(深圳)有限公司 | Tracking-based automatic scanning detection system, method and readable storage medium |
CN112802071B (en) * | 2021-01-22 | 2024-06-11 | 北京农业智能装备技术研究中心 | Three-dimensional reconstruction effect evaluation method and system |
CN112802071A (en) * | 2021-01-22 | 2021-05-14 | 北京农业智能装备技术研究中心 | Three-dimensional reconstruction effect evaluation method and system |
CN113034385A (en) * | 2021-03-01 | 2021-06-25 | 嘉兴丰鸟科技有限公司 | Grid generating and rendering method based on blocks |
CN113240720A (en) * | 2021-05-25 | 2021-08-10 | 中德(珠海)人工智能研究院有限公司 | Three-dimensional surface reconstruction method and device, server and readable storage medium |
WO2023273093A1 (en) * | 2021-06-30 | 2023-01-05 | 奥比中光科技集团股份有限公司 | Human body three-dimensional model acquisition method and apparatus, intelligent terminal, and storage medium |
CN113674161A (en) * | 2021-07-01 | 2021-11-19 | 清华大学 | Face deformity scanning completion method and device based on deep learning |
CN113343925A (en) * | 2021-07-02 | 2021-09-03 | 厦门美图之家科技有限公司 | Face three-dimensional reconstruction method and device, electronic equipment and storage medium |
CN113343925B (en) * | 2021-07-02 | 2023-08-29 | 厦门美图宜肤科技有限公司 | Face three-dimensional reconstruction method and device, electronic equipment and storage medium |
CN113538694A (en) * | 2021-07-06 | 2021-10-22 | 海信视像科技股份有限公司 | Plane reconstruction method and display device |
CN113596432B (en) * | 2021-07-30 | 2024-04-30 | 成都市谛视科技有限公司 | Visual angle variable 3D video production method, visual angle variable 3D video production device, visual angle variable 3D video production equipment and storage medium |
CN113596432A (en) * | 2021-07-30 | 2021-11-02 | 成都市谛视科技有限公司 | 3D video production method, device and equipment with variable visual angle and storage medium |
CN113610971A (en) * | 2021-09-13 | 2021-11-05 | 杭州海康威视数字技术股份有限公司 | Fine-grained three-dimensional model construction method and device and electronic equipment |
CN113838176A (en) * | 2021-09-16 | 2021-12-24 | 网易(杭州)网络有限公司 | Model training method, three-dimensional face image generation method and equipment |
CN113838176B (en) * | 2021-09-16 | 2023-09-15 | 网易(杭州)网络有限公司 | Model training method, three-dimensional face image generation method and three-dimensional face image generation equipment |
CN113592994B (en) * | 2021-09-27 | 2021-12-07 | 贝壳技术有限公司 | Method, apparatus and storage medium for texture mapping |
CN113592994A (en) * | 2021-09-27 | 2021-11-02 | 贝壳技术有限公司 | Method, apparatus and storage medium for texture mapping |
CN113902854A (en) * | 2021-10-18 | 2022-01-07 | 深圳追一科技有限公司 | Three-dimensional face model reconstruction method and device, electronic equipment and storage medium |
CN114169022A (en) * | 2021-10-29 | 2022-03-11 | 深圳精匠云创科技有限公司 | Method and system for engraving 3D surface of engraving target on blank |
CN114219920A (en) * | 2021-12-14 | 2022-03-22 | 魔珐(上海)信息科技有限公司 | Three-dimensional face model construction method and device, storage medium and terminal |
CN114858086A (en) * | 2022-03-25 | 2022-08-05 | 先临三维科技股份有限公司 | Three-dimensional scanning system, method and device |
CN114863030B (en) * | 2022-05-23 | 2023-05-23 | 广州数舜数字化科技有限公司 | Method for generating custom 3D model based on face recognition and image processing technology |
CN114863030A (en) * | 2022-05-23 | 2022-08-05 | 广州数舜数字化科技有限公司 | Method for generating user-defined 3D model based on face recognition and image processing technology |
CN115937546A (en) * | 2022-11-30 | 2023-04-07 | 北京百度网讯科技有限公司 | Image matching method, three-dimensional image reconstruction method, image matching device, three-dimensional image reconstruction device, electronic apparatus, and medium |
CN116664796A (en) * | 2023-04-25 | 2023-08-29 | 北京天翔睿翼科技有限公司 | Lightweight head modeling system and method |
CN116664796B (en) * | 2023-04-25 | 2024-04-02 | 北京天翔睿翼科技有限公司 | Lightweight head modeling system and method |
CN116912402A (en) * | 2023-06-30 | 2023-10-20 | 北京百度网讯科技有限公司 | Face reconstruction method, device, electronic equipment and storage medium |
CN116778095B (en) * | 2023-08-22 | 2023-10-27 | 苏州海赛人工智能有限公司 | Three-dimensional reconstruction method based on artificial intelligence |
CN116778095A (en) * | 2023-08-22 | 2023-09-19 | 苏州海赛人工智能有限公司 | Three-dimensional reconstruction method based on artificial intelligence |
CN117422847A (en) * | 2023-10-27 | 2024-01-19 | 神力视界(深圳)文化科技有限公司 | Model repairing method, device, electronic equipment and computer storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN110363858B (en) | 2022-07-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110363858A (en) | A kind of three-dimensional facial reconstruction method and system | |
CN112002014B (en) | Fine structure-oriented three-dimensional face reconstruction method, system and device | |
CN106709947B (en) | Three-dimensional human body rapid modeling system based on RGBD camera | |
CN111063021B (en) | Method and device for establishing three-dimensional reconstruction model of space moving target | |
Li et al. | Global correspondence optimization for non‐rigid registration of depth scans | |
CN109754459B (en) | Method and system for constructing human body three-dimensional model | |
CN113111861A (en) | Face texture feature extraction method, 3D face reconstruction method, device and storage medium | |
US9147279B1 (en) | Systems and methods for merging textures | |
CN103927742A (en) | Global automatic registering and modeling method based on depth images | |
CN110910433A (en) | Point cloud matching method based on deep learning | |
CN109766866A (en) | A kind of human face characteristic point real-time detection method and detection system based on three-dimensional reconstruction | |
Song et al. | Volumetric stereo and silhouette fusion for image-based modeling | |
CN110909778A (en) | Image semantic feature matching method based on geometric consistency | |
Kang et al. | Competitive learning of facial fitting and synthesis using uv energy | |
CN113593001A (en) | Target object three-dimensional reconstruction method and device, computer equipment and storage medium | |
Pacheco et al. | Reconstruction of high resolution 3D objects from incomplete images and 3D information | |
Hung et al. | Multipass hierarchical stereo matching for generation of digital terrain models from aerial images | |
US8948498B1 (en) | Systems and methods to transform a colored point cloud to a 3D textured mesh | |
CN115761116A (en) | Monocular camera-based three-dimensional face reconstruction method under perspective projection | |
Tylecek et al. | Depth map fusion with camera position refinement | |
Liang et al. | Better together: shading cues and multi-view stereo for reconstruction depth optimization | |
CN112146647B (en) | Binocular vision positioning method and chip for ground texture | |
CN115049764A (en) | Training method, device, equipment and medium for SMPL parameter prediction model | |
CN113379890A (en) | Character bas-relief model generation method based on single photo | |
He et al. | 3D reconstruction of Chinese hickory trees for mechanical harvest |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||