CN112308963B - Non-inductive three-dimensional face reconstruction method and acquisition reconstruction system - Google Patents


Info

Publication number
CN112308963B
CN112308963B (application CN202011267696.6A)
Authority
CN
China
Prior art keywords
texture
dimensional
point
triangular
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011267696.6A
Other languages
Chinese (zh)
Other versions
CN112308963A (en)
Inventor
吕坤
郭燕琼
荆海龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wisesoft Co Ltd
Original Assignee
Wisesoft Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wisesoft Co Ltd
Priority to CN202011267696.6A
Publication of CN112308963A
Application granted
Publication of CN112308963B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/005 Tree description, e.g. octree, quadtree
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images using feature-based methods
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/08 Indexing scheme involving all processing steps from image acquisition to 3D model generation
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face
    • G06T2207/30204 Marker
    • G06T2207/30208 Marker matrix

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to the field of face image processing, in particular to a non-inductive (imperceptible) three-dimensional face reconstruction method and an acquisition and reconstruction system. The reconstruction method uses a circular ring type double-circle center calibration plate for calibration, performs global fusion and curved surface reconstruction by an implicit function method, and smooths and optimizes the three-dimensional curved surface model, which greatly improves the precision of the final output and reduces the requirements on equipment. The invention also adopts a three-column instrument structure, which makes installation convenient, reduces the instrument size and increases stability; the modularized design makes the structure simpler, facilitates rapid production and greatly reduces cost. The target face is acquired by three acquisition units, realizing fine matching of homonymous points, increasing the resolution of the three-dimensional object, improving the reconstruction precision of the three-dimensional data, and also improving the speed of face acquisition and three-dimensional reconstruction.

Description

Non-inductive three-dimensional face reconstruction method and acquisition reconstruction system
Technical Field
The invention relates to the field of face image processing, in particular to a non-inductive three-dimensional face reconstruction method and an acquisition reconstruction system.
Background
In the existing three-dimensional face acquisition and recognition technology, optical reconstruction is divided into passive and active optical reconstruction. Passive optical reconstruction comprises binocular and multi-view stereoscopic vision reconstruction; active optical reconstruction is divided into TOF schemes and structured-light schemes. The TOF scheme obtains the three-dimensional surface directly by measuring face depth information at a certain distance using the time-of-flight method. The three-dimensional reconstruction technology based on structured light projects coded light with a known pattern onto the object, performs synchronized scanning and imaging of the projected structured light, and reconstructs the three-dimensional surface through phase analysis; that is, the spatial and temporal modulation of the projected light is used to realize homonymous point matching and phase resolving in stereo vision, thereby obtaining a high-precision three-dimensional model.
Specifically, the principle of using the geometric information of the structured-light projection to obtain the three-dimensional information of the object surface is as follows: coded stripes (the stripes are coded so that the ordinal number of each stripe projected on the object surface can be distinguished) are projected in sequence onto the surface of the object, forming patterns on the object; a camera photographs these patterns and collects the deformed fringe images, and the three-dimensional coordinates of the object surface are then calculated by triangulation using the structural parameters of the equipment.
If the position and direction of the projector and the camera are known, the position of the same coding stripe in the projection light and the position of the same coding stripe in the image are substituted into the spatial triangular relation, and the position and the depth information of any point on the object plane can be calculated. Conventional systems are configured with single projector + monocular imaging, and also with dual projector + binocular imaging.
Abroad, the team of S. Zhang in the United States has studied fast phase unwrapping algorithms, local specular reflection suppression, phase compensation, high-speed projection and real-time three-dimensional display for three-dimensional measurement. Another research group has made a breakthrough in binary-coded structured-light illumination three-dimensional measurement; the D. Y. Lau group at the University of Kentucky in the United States uses dual-frequency fringe structured light to achieve high-speed dynamic three-dimensional measurement.
In China, structured-light three-dimensional measurement systems have also been developed: the handheld Kolor3D infrared-grating-illumination human-body three-dimensional scanner developed by Beijing Congyuan Image has a scanning precision of 1-5 mm (depending on the working distance); the JRCB-D human-body scanner uses halogen structured-light illumination, with a single-measurement precision below 0.1 mm and a measurement completed within 10 s.
At present, the basic three-dimensional measurement, acquisition and reconstruction technologies at home and abroad are similar: the improvement in precision is limited, the required instruments and equipment must themselves be highly precise, and the systems are complex and expensive, so they are not suitable for wide deployment and research.
Based on this, the invention performs full-face measurement and acquisition based on structured-light technology, develops a new three-dimensional reconstruction method, improves the reconstruction precision of the face model, and provides an acquisition and reconstruction system with higher precision, a simple structure, a stable system and low cost.
Disclosure of Invention
The invention aims to solve the problems of the prior art, namely limited precision improvement, high equipment requirements, complex structure and high cost, and provides a non-inductive three-dimensional face reconstruction method and an acquisition and reconstruction system.
In order to achieve the above purpose, the invention provides the following technical scheme:
a method of non-perceptual three-dimensional face reconstruction, the method comprising the steps of:
s1: calibrating the target image by using a circular ring type double-circle center calibration plate, and calibrating camera parameters and image surface space parameters;
s2: projecting a structured light image to a target face according to the camera parameters and the image plane space parameters and collecting reflected structured light image data and texture images;
s3: performing dephasing on the structured light image data to obtain depth information of a target face;
s4: generating a point cloud by adopting an ICP point cloud algorithm to obtain a three-dimensional point cloud model;
s5: carrying out global fusion and curved surface reconstruction on the three-dimensional point cloud model by adopting an implicit function method to obtain a three-dimensional curved surface model;
s6: performing smoothing treatment and optimization on the three-dimensional curved surface model by adopting a least square method, filling the cavity and eliminating abnormal points;
s7: and performing texture fusion on the texture image and the three-dimensional curved surface model, and outputting a three-dimensional portrait model.
The invention uses the circular ring type double-circle center calibration plate for calibration, thereby greatly improving the calibration precision. The invention also carries out global fusion and curved surface reconstruction by an implicit function method, and smooths and optimizes the three-dimensional curved surface model, thereby greatly improving the precision of the final output image and reducing the requirements on equipment.
As a preferred embodiment of the present invention, the step S1 includes the following steps:
s11: calibrating internal and external parameters of the camera through a circular ring type double-circle-center calibration plate;
s12: calibrating phase fitting parameters of six planes in space through six space attitudes of the calibration plate in a measurement range of a measured object;
s13: and (4) carrying out inter-unit calibration of the full-face camera, applying a standard three-dimensional object, simultaneously shooting through a left unit and a right unit, carrying out point cloud matching calculation on three-dimensional corresponding points, and reversely solving a baseline, a camera and image plane space parameters according to the single camera system parameters obtained in the step (S12).
As a preferred embodiment of the present invention, in step S11, the relationship between the coordinates (X_W, Y_W, Z_W) of a point P on the circular ring type double-circle center calibration plate in the world coordinate system and the coordinates (u, v) of its projection point P_W on the camera imaging plane is:
s·[u, v, 1]^T = K·[R | T]·[X_W, Y_W, Z_W, 1]^T
wherein s is a scale factor, K is the camera intrinsic parameter matrix, and, in the camera extrinsic parameter matrix [R | T]: R is the rotation matrix and T is the translation vector.
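For illustration, the projection relation above can be evaluated numerically as in the following sketch; the intrinsic matrix K, the extrinsic values and the test point are placeholder assumptions, not calibration results of the invention.

```python
import numpy as np

# Assumed (placeholder) intrinsics: focal lengths and principal point in pixels.
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 480.0],
              [0.0,    0.0,   1.0]])
R = np.eye(3)                            # extrinsic rotation (placeholder)
T = np.array([[0.0], [0.0], [800.0]])    # extrinsic translation in mm (placeholder)

def project(world_point):
    """Map a world point (X_W, Y_W, Z_W) to pixels via s*[u, v, 1]^T = K [R|T] [X, Y, Z, 1]^T."""
    Xw = np.asarray(world_point, dtype=float).reshape(3, 1)
    cam = R @ Xw + T                     # world -> camera coordinates
    uvw = K @ cam                        # camera -> homogeneous pixel coordinates
    return (uvw[:2] / uvw[2]).ravel()    # divide out the scale factor s

print(project([10.0, -5.0, 0.0]))        # e.g. a point on the calibration plate plane
```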
As a preferable embodiment of the present invention, the structured light image in step S2 is an infrared non-sensitive high-speed structured light stripe encoded image. According to the invention, by adopting the infrared non-inductive high-speed structural light stripe coded image, the influence on a measured person can be reduced, and meanwhile, the measurement precision is greatly improved.
As a preferred scheme of the invention, in step S2, a projector projects structured light stripes of three frequencies to the left and right sides respectively, the cameras acquire synchronously, and N pictures differing in phase by 2π/N are acquired in sequence at each frequency, where N ≥ 3. The invention collects with a three-frequency, N-step phase-shift method; using structured light stripes of three frequencies and N pictures per frequency improves the speed of subsequent calculation and effectively avoids error propagation between pixels during spatial phase unwrapping.
As a preferred embodiment of the present invention, the phase function φ(x, y) of each set of deformed fringe images in the photographs is:
φ(x, y) = −arctan[ Σ_{n=1..N} I_n(x, y)·sin(2π(n−1)/N) / Σ_{n=1..N} I_n(x, y)·cos(2π(n−1)/N) ]
I_n(x, y) = R(x, y)·[A(x, y) + B(x, y)·cos(φ(x, y) + 2π(n−1)/N)]
wherein I_n(x, y) is the intensity function of the n-th fringe image, R(x, y) is the distribution of the object surface reflectivity, A(x, y) is the background light intensity, φ(x, y) is the phase function of the fringe deformation, and B(x, y)/A(x, y) represents the fringe contrast. The invention realizes phase unwrapping by a three-frequency, N-step phase-shift method, and the unwrapped phase assists the disparity matching of the binocular stereo pair, thereby improving the binocular matching precision, reducing the mismatching rate, ensuring that the phase unwrapping results of the coded images obtained by the left and right cameras are valid and reliable, and providing a data basis for high-precision homonymous point matching.
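The N-step phase-shift relation can be checked with synthetic data as in the minimal sketch below; the image size, fringe parameters and N = 4 are assumptions for the example only.

```python
import numpy as np

N = 4                                        # number of phase steps (N >= 3)
h, w = 64, 64
phi_true = np.tile(np.linspace(0, 4 * np.pi, w), (h, 1))   # synthetic deformation phase
A, B, Refl = 0.5, 0.4, 1.0                   # background, modulation, reflectivity (assumed)

# Simulated captured images I_n = R [A + B cos(phi + 2*pi*(n-1)/N)], n = 1..N.
deltas = [2 * np.pi * (n - 1) / N for n in range(1, N + 1)]
frames = [Refl * (A + B * np.cos(phi_true + d)) for d in deltas]

# Wrapped phase: phi = atan2(-sum I_n sin(d_n), sum I_n cos(d_n)).
num = sum(I * np.sin(d) for I, d in zip(frames, deltas))
den = sum(I * np.cos(d) for I, d in zip(frames, deltas))
phi_wrapped = np.arctan2(-num, den)          # truncated into (-pi, pi]

# The wrapped result agrees with the true phase modulo 2*pi.
err = np.angle(np.exp(1j * (phi_wrapped - phi_true)))
print(float(np.abs(err).max()))              # close to 0
```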
As a preferred embodiment of the present invention, the step S3 includes the following steps:
S31: calculating the truncated phase map φ_w(m, n, t) of each set of deformed fringe images, wherein m, n index the spatial extent of the stripes and t = 1, 2 represents the time variable;
S32: calculating the truncated phase difference of two adjacent groups of truncated phase maps at the same pixel point and the corresponding number of 2π discontinuities;
S33: unwrapping the phase of the truncated phase maps according to the formula φ_u(v_k) = U[φ_w(v_k), v·φ_u(v_{k-1})] to obtain the depth information of the target face, wherein v_k is the fringe number of the k-th frequency, k = 1, 2, 3, s represents the maximum period of the fringe template, the starting phase value of the unwrapping is φ_u(1) = φ_w(1), and the slope of the unwrapped phase is obtained by a least-squares fit against the fringe numbers.
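A minimal sketch of the unwrapping operator U and the three-frequency chain of step S33 is given below; the fringe-number sequence (1, √s, s) and the test phase are assumed values for illustration.

```python
import numpy as np

def wrap(phi):
    """Wrap a phase value into (-pi, pi]."""
    return np.angle(np.exp(1j * phi))

def unwrap_against(phi_w, phi_ref):
    """U[phi_w, phi_ref]: shift phi_w by the multiple of 2*pi that brings it closest to phi_ref."""
    return phi_w + 2 * np.pi * np.round((phi_ref - phi_w) / (2 * np.pi))

# Assumed three-frequency sequence: 1, sqrt(s) and s fringes across the field (s = 16 here).
s = 16
fringe_numbers = np.array([1.0, np.sqrt(s), s])

phi_unit_true = 1.9                                  # true phase of one pixel at the single-fringe frequency
true_phases = phi_unit_true * fringe_numbers         # absolute phase scales with the fringe number
wrapped = wrap(true_phases)                          # what the camera measurement yields

# Chain of unwrapping: the single-fringe phase is already absolute; each denser frequency is
# unwrapped against the scaled result of the previous one.
phi_u = [wrapped[0]]
for k in (1, 2):
    scale = fringe_numbers[k] / fringe_numbers[k - 1]
    phi_u.append(unwrap_against(wrapped[k], scale * phi_u[k - 1]))

print(np.allclose(phi_u, true_phases))               # True: the dense-fringe phase is recovered
```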
As a preferred embodiment of the present invention, the step S4 includes the following steps:
s41: performing coarse registration by adopting a closest point iterative algorithm according to the depth information to obtain a point cloud transformation initial value;
s42: performing fine registration by adopting an ICP (inductively coupled plasma) algorithm according to the initial transformation value to obtain a point set of a plurality of point clouds;
s43: optimizing the point set by adopting a global optimization algorithm;
s44: and performing point cloud fusion on the point sets by adopting voxel filtering to obtain a complete three-dimensional point cloud model with uniform density. According to the invention, the point cloud is roughly spliced, and the rough splicing result is used as the initialization of the ICP algorithm, so that the registration success rate is greatly improved, and the registration error is reduced. Meanwhile, aiming at the problem of unclosed loop in the registration process of multiple pieces of point clouds, the accumulated error is reasonably distributed to each piece of point cloud by using a global optimization algorithm, so that the influence of the registration error is counteracted, and a finished three-dimensional point cloud model with uniform density is output.
As a preferred aspect of the present invention, the ICP algorithm of step S42 adopts the following registration condition:
f(R, T) = (1/n)·Σ_{i=1..n} || Q_i − (R·P_i + T) ||² → min
wherein {P_i | i = 1, …, n} represents the first point set in space, {Q_i | i = 1, …, n} represents the second point set in space, R is the rotation matrix and T is the translation vector.
As a preferable embodiment of the present invention, the step S5 includes:
s51: constructing an integral relation of a sampling point and an indicating function through a gradient relation of a point set in the three-dimensional point cloud model, generating a vector field of the point set through the integral relation by using a blocking method, meanwhile, calculating an approximate indicating function gradient field to construct a Poisson equation, and solving an approximate solution of the Poisson equation by using matrix iteration to obtain a closed curved surface;
s52: triangularization is carried out by adopting a moving cube algorithm to complete isosurface extraction, and the closed curved surface is converted into an isosurface model consisting of triangular surface patches;
s53: and calculating the density of each point in the model, marking the point as an invalid point when the density of a certain point is smaller than a density threshold value, and deleting a triangular surface connected with the invalid point to obtain the optimized three-dimensional curved surface model.
The Poisson surface reconstruction algorithm based on the implicit function method not only can consider global factors, but also considers local point cloud characteristics, meanwhile, in the reconstruction process, the closed surface is converted into an isosurface model formed by triangular surface patches through isosurface extraction, invalid points are deleted, and the accuracy and reliability of the reconstructed three-dimensional surface model are greatly improved.
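For the iso-surface extraction of step S52 and the density-based trimming of step S53, a minimal sketch using the marching cubes implementation of scikit-image is given below; the spherical test volume and the density threshold are assumptions for illustration, and the triangle count per vertex is used only as a stand-in for the point density of the invention.

```python
import numpy as np
from skimage import measure

# A simple implicit volume (a sphere) standing in for the solved indicator function.
grid = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
volume = np.sqrt((grid ** 2).sum(axis=0))          # distance from the centre

# S52: marching cubes converts the closed surface into triangular patches at iso-level 0.8.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.8)

# S53: mark vertices whose local "density" is below a threshold and drop triangles using them.
tri_count = np.bincount(faces.ravel(), minlength=len(verts))
invalid = tri_count < 4                            # assumed density threshold
kept_faces = faces[~invalid[faces].any(axis=1)]
print(len(faces), len(kept_faces))                 # number of faces before and after trimming
```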
As a preferable embodiment of the present invention, the step S51 includes:
S511: inputting the three-dimensional point cloud model, wherein each point of the model comprises three-dimensional coordinates and an internal normal direction;
S512: establishing an octree according to the three-dimensional point cloud model, wherein a node function is added to each node of the octree, and the normal field formed by the internal normals can be represented as a linear summation of the node functions;
S513: constructing the Poisson equation according to the octree; its discretized form is the linear system
L·x = b
wherein b (with components b_o) is the divergence vector, L is the Laplacian matrix, whose order is consistent with the number of leaf nodes of the octree, x is the vector of node coefficients, F_o is the description function of the octree point-cloud subspace, o is a leaf node of the octree, p is a child point of the leaf node o, and v_o is the normal vector of the node in the octree node o;
S514: solving the approximate solution of the Poisson equation by matrix iteration to obtain the reconstructed closed curved surface. By adopting the octree structure, the accuracy and the reliability of the model are greatly improved.
As a preferable embodiment of the present invention, the least square method in step S6 includes the following steps:
s61: inputting all point sets to be smoothed and smoothing parameter radiuses;
s62: traversing each point in each point set to be smoothed, and searching the nearest point set in the range of the radius of the smoothing parameter;
s63: performing surface fitting on each point in the point set to be smoothed and the corresponding nearest point set, projecting the point onto the curved surface, and replacing the coordinates of the point in the point set to be smoothed with the coordinates of the corresponding projection point;
s64: and outputting the smoothed point set after the point set to be smoothed is smoothed.
As a preferable embodiment of the present invention, the step S7 includes:
s71: calibrating internal parameters and external parameters of a texture camera by adopting a plane calibration method to obtain the mapping relation between each triangular patch in the three-dimensional curved surface model and the texture image acquired by the texture camera from multiple visual angles;
wherein the internal parameters comprise principal point, focal length, distortion coefficient, and the external parameters comprise translation matrix and rotation matrix;
s72: carrying out color white balance and brightness normalization pretreatment on the texture image;
s73: performing visibility judgment on the triangular surface patches, and selecting the corresponding texture images to obtain a plurality of corresponding texture triangular surface patches;
s74: calculating a fusion weight coefficient according to the normal direction of the triangular surface patch, and performing texture fusion on the texture triangular surface patches according to the fusion weight coefficient;
s75: replacing the corresponding triangular patch in the three-dimensional curved surface model with the fused texture triangular patch;
s76: and outputting the three-dimensional portrait model after all the triangular patches of the three-dimensional curved surface model are subjected to texture fusion.
As a preferable embodiment of the present invention, the selecting of the texture image in step S73 includes the following conditions:
1) the angle between the vector from the center of the triangular patch to the optical center of the texture camera and the normal direction of the triangular patch is not more than 90 degrees;
2) there is no occlusion between the triangular patch and the mapped texture triangle;
3) the triangular patch must project into the texture triangle after affine transformation.
As a preferable aspect of the present invention, the visibility determination in step S73 includes the steps of:
s731: constructing a two-dimensional recording matrix with the same size as the texture image, wherein the initial value of all matrix elements is + ∞;
s732: traversing each triangular patch, acquiring a two-dimensional projection triangle of the triangular patch in the texture camera, calculating the distance d from the center of the triangular patch to the optical center of the texture camera, and updating elements in the matrix, which are positioned in the projection triangle;
wherein, the updating mode is as follows: when the matrix element is positioned in the projection triangle and the value of the matrix element is greater than d, replacing the value of the matrix element with d;
s733: traversing all the triangular patches, acquiring a two-dimensional projection triangle of each triangular patch in the texture camera, calculating the distance d from the center of the triangular patch to the optical center of the texture camera, and judging the visibility of the triangular patch;
wherein the visibility determination condition is: if there exists a matrix element in the recording matrix that lies inside the projection triangle and whose value is smaller than d, the triangular patch is considered visible in the camera; otherwise it is invisible. The invention regulates the number of texture triangular patches participating in texture fusion through this visibility judgment, so that the texture fusion result is more realistic.
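A minimal sketch of the two-pass visibility test of steps S731-S733 is given below. The patch projections are approximated by axis-aligned boxes to keep the example short, and the visibility rule applied is the usual depth-buffer convention (a patch is visible when no strictly smaller distance is recorded inside its projection), which is an interpretation rather than a literal transcription of the condition above.

```python
import numpy as np

H, W = 32, 32                                  # size of the (assumed) texture image / record matrix
record = np.full((H, W), np.inf)               # S731: all elements initialised to +infinity

# Each "patch" is an axis-aligned projection box (r0, r1, c0, c1) plus its distance d to the optical centre.
patches = [
    {"box": (4, 12, 4, 12), "d": 10.0},        # closer patch
    {"box": (6, 14, 6, 14), "d": 15.0},        # partly hidden behind the first one
    {"box": (20, 28, 20, 28), "d": 12.0},      # unobstructed patch
]

# S732: first pass, keep the smallest distance seen in every covered element.
for p in patches:
    r0, r1, c0, c1 = p["box"]
    region = record[r0:r1, c0:c1]
    np.minimum(region, p["d"], out=region)     # in-place update of the record matrix

# S733: second pass, a patch is visible if nothing strictly closer was recorded inside its projection.
for i, p in enumerate(patches):
    r0, r1, c0, c1 = p["box"]
    visible = not np.any(record[r0:r1, c0:c1] < p["d"] - 1e-9)
    print(f"patch {i}: visible = {visible}")   # True, False, True
```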
As a preferable embodiment of the present invention, the step S74 includes:
S741: calculating a fusion weight w_i^j for each texture triangular patch:
w_i^j = cos θ_i^j
wherein θ_i^j is the included angle between the vector from the center of the texture triangular patch to the optical center of the texture camera and the normal of the texture triangular patch, i is the serial number of the texture camera, and j is the serial number of the texture triangular patch;
S742: traversing each texture triangular patch, acquiring the set X of texture triangular patches adjacent to it in three-dimensional space, and smoothing the fusion weight according to the following formula:
w'_i^j = ( w_i^j + Σ_{k∈X} w_i^k ) / ( |X| + 1 )
wherein |X| is the cardinality of the set, namely the number of adjacent triangular patches;
S743: traversing each texture triangular patch and normalizing the fusion weight according to the following formula:
ŵ_i^j = w'_i^j / Σ_i w'_i^j
S744: traversing each texture triangular patch, performing the affine transformation on its texture triangle under each texture camera view angle, fusing the texture triangular patches by weighted summation, and mapping the fused texture triangular patch onto the corresponding triangular patch;
the formula of the weighted sum is:
T^j = Σ_i ŵ_i^j · T_i^j
wherein T^j is the fused texture triangle and T_i^j is the affine-transformed texture triangle from camera i.
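The weight computation, smoothing, normalization and weighted blending of steps S741-S744 can be sketched as follows; the cosine weight, the neighbour-averaging scheme and the toy texture data follow the reconstruction above and are assumptions where the original formula images are not reproduced.

```python
import numpy as np

n_cams, n_patches = 3, 5
rng = np.random.default_rng(0)

# S741: raw weight per (camera, patch): cosine of the angle between viewing direction and patch
# normal, clamped at zero so back-facing views contribute nothing.
angles = rng.uniform(0.0, np.pi / 2, size=(n_cams, n_patches))
w = np.clip(np.cos(angles), 0.0, None)

# S742: each patch's weight is averaged with the weights of its neighbours (a fixed neighbour list
# stands in for adjacency of the texture triangles in 3-D space).
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
w_s = w.copy()
for j, nbrs in neighbours.items():
    w_s[:, j] = (w[:, j] + w[:, nbrs].sum(axis=1)) / (len(nbrs) + 1)

# S743: normalisation over cameras, so the weights of each patch sum to one.
w_n = w_s / w_s.sum(axis=0, keepdims=True)

# S744: weighted blending of the (affine-rectified) texture triangles from each camera.
tex = rng.uniform(0, 255, size=(n_cams, n_patches, 8, 8, 3))   # toy 8x8 RGB texture per patch per camera
fused = np.einsum("cp,cpxyz->pxyz", w_n, tex)
print(w_n.sum(axis=0), fused.shape)                            # weights sum to 1; one fused texture per patch
```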
A non-inductive three-dimensional face acquisition and reconstruction system comprises a cloud server and an acquisition device, wherein the cloud server can execute the non-inductive three-dimensional face reconstruction method;
the collecting device comprises a front collecting unit, a side collecting unit, a communication module, a supporting upright post and a base;
a controller and a heat dissipation module are arranged in the base, and the controller is electrically connected with the front acquisition unit and the side acquisition unit;
the communication module is in communication connection with the cloud server and is used for sending the acquired image data to the cloud server for processing;
the number of the side surface acquisition units is two, the two side surface acquisition units are symmetrically distributed on two sides of the front surface acquisition unit, and the optical axes of the front surface acquisition unit and the two side surface acquisition units are intersected in front of the front surface acquisition unit;
the number of the supporting upright posts is 3; the front collecting unit and the side collecting unit are respectively arranged on the base through the supporting upright posts;
the front collecting unit is connected with the side collecting unit through a reinforcing beam connecting rod; during installation, the reinforcing beam connecting rod can rotate around the supporting upright column connected with the front acquisition unit. The invention adopts a three-column instrument structure, which is convenient to install, reduces the instrument size and increases stability; the modularized design makes the structure simpler and more convenient, facilitates rapid production and greatly reduces cost. The target face is acquired by the three acquisition units, realizing fine matching of homonymous points, increasing the resolution of the three-dimensional object, improving the reconstruction precision of the three-dimensional data, and also improving the speed of face acquisition and three-dimensional reconstruction.
As a preferable aspect of the present invention, the side surface collecting unit includes an infrared structured light collecting camera and a texture camera.
As a preferable scheme of the present invention, the infrared structured light collection camera projects an infrared non-sensitive high-speed structured light stripe encoded image.
Compared with the prior art, the invention has the following beneficial effects:
1. the invention uses the circular ring type double-circle center calibration plate for calibration, thereby greatly improving the calibration precision. The invention also carries out global fusion and curved surface reconstruction by an implicit function method, and smooths and optimizes the three-dimensional curved surface model, thereby greatly improving the precision of the final output image and reducing the requirements on equipment.
2. According to the invention, by adopting the infrared noninductive high-speed structured light stripe coding image, the influence on a measured person can be reduced, and meanwhile, the measurement precision is greatly improved.
3. According to the invention, a three-frequency method of N-step phase shift is used for collection, and through the structured light stripes of three frequencies and N pictures of each frequency, the speed of subsequent calculation is improved, and error transmission among pixel points in space phase expansion is effectively avoided.
4. The invention realizes phase expansion by a three-frequency method of N-step phase shift, and the expanded phase assists the parallax matching process of the binocular stereo, thereby improving the matching precision of the binocular stereo, reducing the mismatching rate, ensuring the effectiveness and reliability of the phase expansion results of the left camera and the right camera for obtaining the coded images, and providing a data base for realizing high-precision homonymy point matching.
5. according to the invention, the point clouds are roughly spliced, and the rough splicing result is used to initialize the ICP algorithm, which greatly improves the registration success rate and reduces the registration error. Meanwhile, for the non-closed-loop problem in the registration of multiple point clouds, a global optimization algorithm distributes the accumulated error reasonably over each point cloud, thereby offsetting the influence of the registration error and outputting a complete three-dimensional point cloud model with uniform density.
6. The Poisson surface reconstruction algorithm based on the implicit function method not only can consider global factors, but also considers local point cloud characteristics, meanwhile, in the reconstruction process, the closed surface is converted into an isosurface model formed by triangular surface patches through isosurface extraction, invalid points are deleted, and the accuracy and reliability of the reconstructed three-dimensional surface model are greatly improved.
7. The invention regulates and controls the number of the texture triangular surface patches participating in the texture fusion through the visibility judgment, so that the effect of reflecting the texture fusion is more real.
8. the invention adopts a three-column instrument structure, which is convenient to install, reduces the instrument size and increases stability; the modularized design makes the structure simpler and more convenient, facilitates rapid production and greatly reduces cost. The target face is acquired by the three acquisition units, realizing fine matching of homonymous points, increasing the resolution of the three-dimensional object, improving the reconstruction precision of the three-dimensional data, and also improving the speed of face acquisition and three-dimensional reconstruction.
Drawings
Fig. 1 is a schematic flow chart of a method for reconstructing a non-sensory three-dimensional face according to embodiment 1 of the present invention;
fig. 2 is a schematic diagram of a high-precision circular double-circle-center calibration plate in a non-inductive three-dimensional face reconstruction method according to embodiment 2 of the present invention;
fig. 3 is an enlarged schematic view of a high-precision circular double-circle center calibration plate in the non-inductive three-dimensional face reconstruction method according to embodiment 2 of the present invention;
fig. 4 is a truncated phase diagram when the fringe changes from 1 to 8 in the method for reconstructing a non-inductive three-dimensional face according to embodiment 2 of the present invention;
fig. 5 is a time-based expansion diagram of a certain pixel in a truncated phase diagram in the method for reconstructing a three-dimensional face according to embodiment 2 of the present invention;
fig. 6 is a schematic view of a texture fusion processing flow in a non-sensory three-dimensional face reconstruction method according to embodiment 2 of the present invention;
fig. 7 is a schematic diagram of a normal included angle in a non-sensory three-dimensional face reconstruction method according to embodiment 2 of the present invention;
fig. 8 is a schematic structural diagram of an acquisition device in a non-inductive three-dimensional face acquisition and reconstruction system according to embodiment 3 of the present invention;
fig. 9 is a schematic structural diagram of a side surface acquisition unit in a non-inductive three-dimensional face acquisition and reconstruction system according to embodiment 3 of the present invention;
fig. 10 is a schematic view of a ventilation convection hole of a non-inductive three-dimensional face acquisition and reconstruction system according to embodiment 3 of the present invention;
fig. 11 is a schematic diagram of a system architecture of a non-inductive three-dimensional face acquisition and reconstruction system according to embodiment 3 of the present invention;
fig. 12 is a schematic diagram of a logical layered architecture of a non-sensory three-dimensional face acquisition and reconstruction system according to embodiment 3 of the present invention;
fig. 13 is a schematic diagram of analyzing precision test data of a non-inductive three-dimensional face acquisition and reconstruction system according to embodiment 3 of the present invention;
the mark in the figure is: the method comprises the following steps of 1-front image acquisition unit, 2-side image acquisition unit, 3-base, 4-support upright post, 5-switch, 6-infrared camera, 7-infrared projection galvanometer, 8-texture camera, 9-metal mounting rack, 10-reinforced cross beam connecting rod, 11-foot pad and 12-glass window.
Detailed Description
The present invention will be described in further detail with reference to test examples and specific embodiments. It should be understood that the scope of the above-described subject matter is not limited to the following examples, and any techniques implemented based on the disclosure of the present invention are within the scope of the present invention.
Example 1
As shown in fig. 1, a method for reconstructing a three-dimensional human face without sensation includes the following steps:
s1: calibrating the target image by using a circular ring type double-circle center calibration plate, and calibrating camera parameters and image surface space parameters;
s2: projecting a structured light image to a target face according to the camera parameters and the image plane space parameters and collecting reflected structured light image data and texture images;
s3: performing dephasing on the structured light image data to obtain depth information of a target face;
s4: generating a point cloud by adopting an ICP point cloud algorithm to obtain a three-dimensional point cloud model;
s5: performing global fusion and curved surface reconstruction on the three-dimensional point cloud model by adopting a hidden function method to obtain a three-dimensional curved surface model;
s6: performing smoothing treatment and optimization on the three-dimensional curved surface model by adopting a least square method, filling the cavity and eliminating abnormal points;
s7: and performing texture fusion on the texture image and the three-dimensional curved surface model, and outputting a three-dimensional portrait model.
Wherein, the step S1 includes the following steps:
s11: calibrating internal and external parameters of the camera through a circular ring type double-circle center calibration plate;
s12: calibrating phase fitting parameters of six planes in space through six space postures of a calibration plate in a measurement range of a measured object;
s13: and (4) calibrating the units of the full-face camera, applying a standard three-dimensional object, shooting simultaneously through the left unit and the right unit, performing point cloud matching calculation on the three-dimensional corresponding points, and reversely solving the baseline, the camera and the image plane space parameters according to the single camera system parameters obtained in the step (S12).
In step S2, the projector projects structured light stripes of three frequencies to the left and right sides respectively, the camera acquires synchronously, and N pictures differing in phase by 2π/N are acquired in sequence at each frequency, where N ≥ 3. The phase function φ(x, y) of each group of deformed fringe images in the pictures is:
φ(x, y) = −arctan[ Σ_{n=1..N} I_n(x, y)·sin(2π(n−1)/N) / Σ_{n=1..N} I_n(x, y)·cos(2π(n−1)/N) ]
I_n(x, y) = R(x, y)·[A(x, y) + B(x, y)·cos(φ(x, y) + 2π(n−1)/N)]
wherein I_n(x, y) is the intensity function of the n-th fringe image, R(x, y) is the distribution of the object surface reflectivity, A(x, y) is the background light intensity, φ(x, y) is the phase function of the fringe deformation, and B(x, y)/A(x, y) represents the fringe contrast.
The step S3 includes the following steps:
S31: calculating the truncated phase map φ_w(m, n, t) of each set of deformed fringe images, wherein m, n index the spatial extent of the stripes and t = 1, 2 represents the time variable;
S32: calculating the truncated phase difference of two adjacent groups of truncated phase maps at the same pixel point and the corresponding number of 2π discontinuities;
S33: unwrapping the phase of the truncated phase maps according to the formula φ_u(v_k) = U[φ_w(v_k), v·φ_u(v_{k-1})] to obtain the depth information of the target face, wherein v_k is the fringe number of the k-th frequency, k = 1, 2, 3, s denotes the maximum period of the fringe template, the starting phase value of the unwrapping is φ_u(1) = φ_w(1), and the slope of the unwrapped phase is obtained by a least-squares fit against the fringe numbers.
The step S4 includes the following steps:
s41: performing coarse registration by adopting a closest point iterative algorithm according to the depth information to obtain a point cloud transformation initial value;
s42: performing fine registration by adopting an ICP (inductively coupled plasma) algorithm according to the initial transformation value to obtain a point set of a plurality of point clouds;
s43: optimizing the point set by adopting a global optimization algorithm;
s44: and performing point cloud fusion on the point sets by adopting voxel filtering to obtain a complete three-dimensional point cloud model with uniform density.
The step S5 includes:
s51: constructing an integral relation between a sampling point and an indicating function through a gradient relation of a point set in the three-dimensional point cloud model, generating a vector field of the point set through the integral relation by using a blocking method, meanwhile, calculating an approximate indicating function gradient field to construct a poisson equation, and solving a poisson equation approximate solution by using matrix iteration to obtain a closed curved surface;
s52: triangularization is carried out by adopting a mobile cube algorithm to complete isosurface extraction, and the closed curved surface is converted into an isosurface model consisting of triangular surface patches;
s53: and calculating the density of each point in the model, marking the point as an invalid point when the density of a certain point is smaller than a density threshold value, and deleting a triangular surface connected with the invalid point to obtain the optimized three-dimensional curved surface model.
The step S7 includes:
s71: calibrating internal parameters and external parameters of a texture camera by adopting a plane calibration method to obtain a mapping relation between each triangular patch in the three-dimensional curved surface model and the texture image acquired by the texture camera in multiple visual angles;
wherein the internal parameters comprise a principal point, a focal length and a distortion coefficient, and the external parameters comprise a translation matrix and a rotation matrix;
s72: carrying out color white balance and brightness normalization pretreatment on the texture image;
s73: performing visibility judgment on the triangular surface patch, and selecting the corresponding texture image to obtain a plurality of corresponding texture triangular surface patches;
s74: calculating a fusion weight coefficient according to the normal direction of the triangular patch, and performing texture fusion on the texture triangular patches according to the fusion weight coefficient;
s75: replacing the corresponding triangular patch in the three-dimensional curved surface model with the fused texture triangular patch;
s76: and outputting the three-dimensional portrait model after all the triangular patches of the three-dimensional curved surface model are subjected to texture fusion.
Example 2
This example is a further description of example 1.
S1: calibrating the target image by adopting a circular ring type double-circle center calibration plate, and calibrating camera parameters and image surface space parameters;
as shown in fig. 2 and 3, the present invention uses a high-precision circular double-circle center calibration plate target image under a set posture in a measurement space to realize the precise calculation of camera parameters and system parameters. The method improves calibration accuracy. The calibration parameters are stored in a hardware memory such as a flash, and the application software starts and establishes a communication connection relation with the camera and then reads the calibration parameters to a memory.
The calibration process of the fringe structure light three-dimensional modeling system is mainly completed by the following 3 steps:
the first step is as follows: and calibrating the internal and external parameters of the camera through a high-precision calibration plate.
The relationship between the coordinates (X_W, Y_W, Z_W) of an object point P of the three-dimensional scene in the world coordinate system and the coordinates (u, v) of its projection point P_W on the camera imaging plane is as follows:
s·[u, v, 1]^T = K·[R | T]·[X_W, Y_W, Z_W, 1]^T
wherein s is a scale factor, K is the camera intrinsic parameter matrix, and, in the camera extrinsic parameter matrix [R | T]: R is the rotation matrix and T is the translation vector.
The second step: and calibrating phase fitting parameters of six planes in the space through six space postures of the calibration plate in the measurement range of the measured object.
The third step: and calibrating the whole-face camera among units, applying a standard three-dimensional object, simultaneously shooting through the left unit and the right unit, performing point cloud matching calculation on three-dimensional corresponding points, and reversely solving the parameters of the baseline, the camera, the image plane space and the like according to the system parameters of the single camera obtained in the second step, so that the calibration is completed.
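A hedged sketch of the camera calibration step using OpenCV is given below. OpenCV's standard symmetric circle grid is used as a stand-in for the circular ring type double-circle center calibration plate of the invention, and the grid size, spacing and image file names are assumptions for the example.

```python
import numpy as np
import cv2

pattern_size = (7, 6)                         # circles per row, per column (assumed)
spacing = 15.0                                # circle spacing in mm (assumed)
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * spacing

obj_points, img_points, image_size = [], [], None
for path in ["calib_00.png", "calib_01.png", "calib_02.png"]:   # assumed capture file names
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    image_size = gray.shape[::-1]
    found, centers = cv2.findCirclesGrid(gray, pattern_size, flags=cv2.CALIB_CB_SYMMETRIC_GRID)
    if found:
        obj_points.append(objp)
        img_points.append(centers)

if obj_points:
    # Intrinsics (K, distortion) and per-view extrinsics (rvecs/tvecs play the role of R and T).
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, image_size, None, None)
    print("reprojection RMS:", rms)
    print("K =\n", K)
```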
S2: projecting a structured light image to a target face according to the camera parameters and the image plane space parameters and collecting reflected structured light image data and texture images;
the structured light image is an infrared non-sensitive high-speed structured light stripe coded image. The projector respectively projects structural light stripes with three frequencies on the left side and the right side, the camera carries out synchronous acquisition, and each frequency acquires N pictures with phases different by 2 pi/N in sequence, wherein N is more than or equal to 3.
The phase function φ(x, y) of each group of deformed stripe images in the pictures is:
φ(x, y) = −arctan[ Σ_{n=1..N} I_n(x, y)·sin(2π(n−1)/N) / Σ_{n=1..N} I_n(x, y)·cos(2π(n−1)/N) ]
I_n(x, y) = R(x, y)·[A(x, y) + B(x, y)·cos(φ(x, y) + 2π(n−1)/N)]
wherein I_n(x, y) is the intensity function of the n-th fringe image, R(x, y) is the distribution of the object surface reflectivity, A(x, y) is the background light intensity, and B(x, y)/A(x, y) represents the fringe contrast.
When N = 4, i.e. four phase shifts of π/2 each are performed, the resulting four fringe patterns can be expressed as:
I_1(x, y) = R(x, y)·[A(x, y) + B(x, y)·cos φ(x, y)]
I_2(x, y) = R(x, y)·[A(x, y) − B(x, y)·sin φ(x, y)]
I_3(x, y) = R(x, y)·[A(x, y) − B(x, y)·cos φ(x, y)]
I_4(x, y) = R(x, y)·[A(x, y) + B(x, y)·sin φ(x, y)]
the phase function is then:
φ(x, y) = arctan[ (I_4(x, y) − I_2(x, y)) / (I_1(x, y) − I_3(x, y)) ]
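The four-step relation can be verified numerically with assumed values:

```python
import numpy as np

phi, A, B, R = 1.2, 0.5, 0.4, 1.0                  # assumed test values
I1 = R * (A + B * np.cos(phi))
I2 = R * (A - B * np.sin(phi))
I3 = R * (A - B * np.cos(phi))
I4 = R * (A + B * np.sin(phi))
print(np.arctan2(I4 - I2, I1 - I3))                # recovers phi = 1.2
```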
s3: performing dephasing on the structured light image data to obtain depth information of a target face;
the phase calculated by the arctangent function is between-pi and pi, so that the phase is truncated within (-pi, pi) and is not continuous, and in order to obtain the real surface shape of the measured object, the phase of the truncated phase needs to be expanded, and the truncated phase when the fringe changes from 1 to 8 is shown in figure 4, and figure 5 is an expansion graph of a certain pixel along time.
The truncated phase φ_w(m, n, t) of each set of measured fringes is solved as follows.
Step 1: first, calculate the truncated phase map φ_w(m, n, t) of each group of fringe images, where m, n index the spatial extent of the stripes and t = 1, 2 represents the time variable.
Solve the difference of the truncated phases of two adjacent fringe patterns at the same point and its nearest integer multiple of 2π:
Δφ(m, n, t) = φ_w(m, n, t) − φ_w(m, n, t−1)
d(m, n, t) = NINT(Δφ_w(m, n, t) / 2π)
Step 2: calculate, at the same pixel point, the truncated phase difference of two adjacent truncated phases along the time axis and the number of 2π discontinuities. Here NINT(·) is the nearest-integer operator, and the total integer multiple of 2π accumulated up to time t is
v(m, n, t) = Σ_{k=1..t} d(m, n, k)
wherein s represents the maximum period of the fringe template; the total unwrapped phase difference is
φ_u(m, n, s) − φ_u(m, n, 0) = φ_w(m, n, s) − φ_w(m, n, 0) − 2π·v(m, n, t)
First, the following operators are defined:
① Δφ_w denotes the difference between two truncated phases. Let A and B be the numerator and denominator combinations of the light intensities I(N), N = 1, 2, 3, 4, used in the arctangent for the first fringe set, and C and D the corresponding combinations for the second set; the difference Δφ_w of the two truncated phases can then be evaluated directly from A, B, C and D with a single arctangent, without first computing the two truncated phases separately.
② U[φ_1, φ_2] is the unwrapping operator, defined as:
U[φ_1, φ_2] = φ_1 + 2π·NINT[(φ_2 − φ_1) / 2π]
wherein NINT(·) is a rounding operation.
For the three-frequency expansion, fringe patterns with 1, √s and s fringes are projected and the truncated phase map corresponding to each frequency is obtained; the phase is then unwrapped using the following formula:
φ_u(v_k) = U[φ_w(v_k), v·φ_u(v_{k-1})]
wherein v_k is the fringe number of the k-th frequency, k = 1, 2, 3, and the starting phase value of the unwrapping is φ_u(1) = φ_w(1). In the whole unwrapping process each point needs only two simple evaluations, which greatly simplifies the computation and shortens the processing time. Finally, the unwrapped phases at the three fringe numbers are fitted by least squares against the fringe number, and the slope of this fit gives the final phase value.
according to the theory related to the three-dimensional phase unwrapping method, the time phase unwrapping is a special case when time is taken as the third dimension, and the time axis can be automatically and accurately controlled according to the requirement, so that the method is much simpler than the space phase unwrapping method in principle.
S4: generating a point cloud by adopting an ICP point cloud algorithm to obtain a three-dimensional point cloud model;
The Iterative Closest Point (ICP) algorithm is sensitive to the initial relative position of the point clouds and easily falls into a local optimum; therefore, the point clouds are first coarsely spliced, and the coarse splicing result is used to initialize the ICP algorithm, which improves the registration success rate and reduces the registration error. Then, for the non-closed-loop problem in the registration of multiple point clouds, a global optimization algorithm distributes the accumulated error reasonably over each point cloud to offset the influence of the registration error. Finally, voxel filtering is used for point cloud fusion to obtain a complete three-dimensional model with uniform density.
The specific algorithm implementation of point cloud generation is as follows:
1) After the initial registration of the two sets of point set data, the distance between the two sets of point sets is reduced, and an initial iteration position is provided for the subsequent fine registration, but a subsequent fine registration process is required to achieve a good complete registration effect. The method is based on and improved by a classical ICP algorithm and is used as an implementation method of point cloud precise registration.
Let {P_i | i = 1, …, n} represent the first point set in space and {Q_i | i = 1, …, n} the second point set in space; the objective of ICP is to find the rotation matrix R and translation vector T satisfying the condition:
f(R, T) = (1/n)·Σ_{i=1..n} || Q_i − (R·P_i + T) ||² → min
The essence of the ICP algorithm is to repeatedly determine the optimal rigid transformation of the corresponding point pairs, using the least square method to optimize the matching, and to iterate until a convergence criterion of correct matching is met. The ICP point cloud algorithm is realized by the following steps:
Step1: set the initial condition k = 0 and the convergence threshold τ.
Step2: select in turn each point P_i in the target point set P and search for its closest point Q_i in the reference point set Q.
Step3: establish the registration point pairs of P and Q by the quaternion method, obtain the geometric relation between the multi-view three-dimensional cameras after free registration of the point clouds, and determine the parameters of the transformation matrices R and T from the transformed point pairs.
Step4: geometrically transform the target point set according to the R and T obtained in the previous step, P' = R·P_i + T, obtaining the point set P' after the geometric transformation.
Step5: judge whether the error between two iterations satisfies f_k − f_{k+1} < τ, wherein f_k represents the error of the transform found at the k-th iteration; if the convergence condition is met, terminate the iteration; otherwise set k = k + 1 and go to Step2 for the next iteration.
In the prior art, various methods are used to establish an objective function meeting the convergence condition; here the quaternion method proposed by Besl is used to solve the rigid transformation matrices R and T. First a 4×4 symmetric matrix Q(Σ_px) is constructed:
Q(Σ_px) = [ tr(Σ_px)      Δ^T
                 Δ      Σ_px + Σ_px^T − tr(Σ_px)·I_3 ]
where tr denotes the trace of the matrix, I_3 represents the 3-order identity matrix, Δ = [A_23  A_31  A_12]^T, and A_ij = (Σ_px − Σ_px^T)_ij. Σ_px is the covariance matrix of the point sets P and X:
Σ_px = (1/n)·Σ_{i=1..n} (p_i − μ_p)(x_i − μ_x)^T
in which μ_p and μ_x are the centroids of point sets P and X, respectively. The unit eigenvector q_R = [q_0, q_1, q_2, q_3]^T corresponding to the maximum eigenvalue of the matrix Q(Σ_px) is the optimal rotation quaternion; the rotation matrix R is obtained from q_R by the standard quaternion-to-rotation formula, and the translation vector is T = μ_x − R·μ_p.
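A minimal sketch of the quaternion step described above: the 4×4 matrix Q(Σ_px) is built from the cross-covariance of two corresponding point sets, the eigenvector of its largest eigenvalue is taken as the rotation quaternion, and R and T are recovered. The quaternion-to-rotation conversion follows the standard formula, and the random test data are assumptions for the example.

```python
import numpy as np

def rigid_transform_quaternion(P, X):
    """Estimate R, T such that X_i ~ R @ P_i + T from corresponding point sets P, X (n x 3 each)."""
    mu_p, mu_x = P.mean(axis=0), X.mean(axis=0)
    sigma = (P - mu_p).T @ (X - mu_x) / len(P)              # cross-covariance of the two sets
    A = sigma - sigma.T
    delta = np.array([A[1, 2], A[2, 0], A[0, 1]])           # [A_23, A_31, A_12]
    Q = np.zeros((4, 4))
    Q[0, 0] = np.trace(sigma)
    Q[0, 1:] = delta
    Q[1:, 0] = delta
    Q[1:, 1:] = sigma + sigma.T - np.trace(sigma) * np.eye(3)
    eigvals, eigvecs = np.linalg.eigh(Q)
    q0, q1, q2, q3 = eigvecs[:, np.argmax(eigvals)]          # unit quaternion of the optimal rotation
    R = np.array([
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 - q0*q3),             2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),             q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),             2*(q2*q3 + q0*q1),             q0*q0 - q1*q1 - q2*q2 + q3*q3],
    ])
    T = mu_x - R @ mu_p
    return R, T

# Self-check with a known rigid motion.
rng = np.random.default_rng(1)
P = rng.normal(size=(100, 3))
angle = 0.4
R_true = np.array([[np.cos(angle), -np.sin(angle), 0], [np.sin(angle), np.cos(angle), 0], [0, 0, 1.0]])
T_true = np.array([5.0, -2.0, 1.0])
X = P @ R_true.T + T_true
R_est, T_est = rigid_transform_quaternion(P, X)
print(np.allclose(R_est, R_true, atol=1e-6), np.allclose(T_est, T_true, atol=1e-6))
```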
The method takes, for any point p_i in the point set P, the intersection of its normal vector with the point set Q as the matching corresponding point p_i', and uses the distance from the point p_i' to the tangent plane S_i as the error measure:
E = Σ_i d(p_i', S_i)²
where d(p, S) represents the distance of point p from plane S.
2) Multi-view point cloud ICP processing process
In the free registration of two groups of point cloud data, several candidate transformation matrices can be obtained through the affine-invariant property of four-point sets; each candidate matrix is then applied as a rigid transformation and ICP iteration is carried out to obtain the optimal pose for that candidate. The optimal solution after ICP iteration is selected by calculating the respective overlap rates, and the free registration of the two groups of point clouds is completed with this optimal solution.
3) Depth information calculation of three-dimensional entity measurement:
the depth information is abstracted in the phase information. Mainly "solving wrapped phases" and "phase unwrapping". The effect of solving the wrapped phase is to truncate the depth information to-pi to pi using an inverse trigonometric functionWithin the phase of (c). The wrapping phases at this time are actually distributed according to the period of the projected sinusoidal grating, and under the condition that the relative positions of the camera, the structured light projector and the measured object are not changed, the relationship between corresponding absolute phases in two periods can be obtained according to the fact that the positions of the same point on the phase encoding image on the image are the same:
Figure GDA0003859679200000234
the data of the superposed phase re-whole line covers the whole field of view, and depth information can be obtained according to the corresponding relation between the absolute phase and the height.
S5: carrying out global fusion and curved surface reconstruction on the three-dimensional point cloud model by adopting an implicit function method to obtain a three-dimensional curved surface model;
after the multi-view point cloud data is completely registered, global fusion needs to be performed on all point clouds, and the fusion process is also a generation process of curved surface reconstruction. The curved surface reconstruction methods based on point cloud data are mainly divided into two types, one is a method based on a geometric principle to carry out curved surface reconstruction, such as a triangulation method, and the second type is a method based on a curved surface function, including a hidden function method, a parameter interpolation method and the like. The hidden function method curved surface reconstruction is to find the mapping relation of data by utilizing function fitting and realize the curved surface reconstruction by extracting an isosurface, and the main methods comprise a moving least square method, a radial basis function method, poisson curved surface reconstruction and the like. The Poisson surface reconstruction algorithm based on the implicit function method not only can consider global factors, but also gives consideration to local point cloud characteristics, and the implicit function method has obvious advantages in rapid reconstruction.
1. Poisson reconstruction: the Poisson equation is a type of partial differential equation; because it preserves the differential characteristics of the data well, it is applied to the reconstruction and editing of three-dimensional models.
The process of constructing a three-dimensional curved surface with the Poisson equation can be summarized as follows: from the measured data point set, an integral relation between the sampling points and the indicator function is constructed through the gradient relation; the vector field of the point set is generated from this integral relation by a partitioning method, and an approximate gradient field of the indicator function is computed to construct the Poisson equation. The approximate solution of the Poisson equation is obtained by matrix iteration, the isosurface is extracted with marching cubes, and the model of the target face is finally reconstructed from the data point set.
The specific algorithm for reconstructing the Poisson surface is as follows:
Input point cloud data S = {s_1, …, s_n}, each point containing its three-dimensional coordinate s.p and its inward normal s.N. Assuming that the point set lies on (or close to) the surface ∂M of an unknown model M, the goal is to reconstruct a seamless triangular approximation of the model surface by estimating the indicator function of the model and extracting its isosurface. The input point cloud thus provides, for each point, three-dimensional coordinates and an inward normal, and the processing steps are as follows:
Establishing an octree: an adaptive octree is used to represent the implicit function of the reconstructed surface. The octree is defined from the input point cloud, and a node function F_o is attached to each node o of the octree, chosen so that the normal field formed by the inward normals can be accurately and efficiently expressed as a linear sum of the node functions. Assuming a leaf node of the octree contains eight sample points p, each represented by F(p), the indicator function χ_0 of that octree subspace is expressed in terms of these node functions (the original equation appears only as an image in the source). Accordingly, the implicit function χ of the curved surface can be represented in the function space spanned by {F_o}:

χ(q) = Σ_o x_o · F_o(q)

where o represents a leaf node in the octree and x_o are the coefficients to be solved for.
The basis function F is set to be the convolution of a square wave (box) function B with itself. The square wave function B(t) is defined as:

B(t) = 1 if |t| < 0.5, and B(t) = 0 otherwise

so that the basis function F can be expressed as the n-fold convolution

F(x, y, z) = ( B(x) · B(y) · B(z) )^{*n}
The larger the number of convolutions n, the closer F is to a Gaussian distribution function. The indicator function χ in the Poisson equation can be obtained through the description function F_o of each octree point-cloud subspace, and F_o in turn is obtained from the basis function F by a geometric and scale transformation:

F_o(q) = F( (q - o.c) / o.w ) · 1 / o.w³
where o.c represents the center of the bounding box corresponding to node o and o.w represents the width of that bounding box. To achieve sub-node accuracy, the position of a sample is not fixed to the center of the leaf node containing it; instead, the sample point is distributed to its eight nearest-neighbour nodes by trilinear interpolation. The vector field V can therefore be defined as:

V(q) = Σ_{p∈S} Σ_{o∈Ngbr(p)} a_{o,p} · F_o(q) · v_o

where a_{o,p} is the weight coefficient (trilinear interpolation weight) of the normal vector of point p with respect to octree node o, and v_o is the normal vector associated with octree node o.
From the indicator-function representation of the surface reconstruction, the problem can be simplified to solving the Poisson equation for the indicator function χ under the premise that the vector field V is known. Because Δχ and the divergence of V may not lie in the same function space, the equation Δχ = ∇·V is enforced in a projected sense: the projection of the function Δχ onto the space spanned by {F_o} and the projection of the divergence of V onto that space are required to have the shortest distance, i.e.

min_χ Σ_o ⟨ Δχ - ∇·V, F_o ⟩²

Accordingly, let the normal vector of the data in octree node o be v_o, and let v be the vector constructed from the projections of the Laplacian Δχ of the indicator function χ onto the functions F_o. The problem then translates into the discrete linear system derived below.
The surface implicit function χ can be represented in the constructed function space. A matrix L is defined to represent the Laplacian Δχ of the indicator function χ in that function space. Assuming the number of octree leaf nodes is m, L is an m × m matrix whose entry L_{o,o'} is the dot product of the Laplacian of F_o with the function F_{o'}:

L_{o,o'} = ⟨ ΔF_o, F_{o'} ⟩
The divergence of the vector field V of the octree node data is collected into an m-dimensional vector b with components

b_o = ⟨ ∇·V, F_o ⟩
According to the above, the solution of the Poisson equation can be converted into the solution of a linear system:

LΦ = b

where Φ is the vector of unknown coefficients to be solved, L is the Laplacian matrix, a large-scale sparse matrix, and b is the divergence vector.
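For illustration, a minimal sketch of solving the sparse system LΦ = b is given below (Python with SciPy assumed; the choice of a conjugate-gradient solver is an assumption, since the description only states that matrix iteration is used, and the function name is illustrative).

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import cg

def solve_poisson_system(rows, cols, vals, b, m):
    """rows/cols/vals: COO entries of the m-by-m sparse Laplacian matrix L."""
    L = csr_matrix((vals, (rows, cols)), shape=(m, m))
    phi, info = cg(L, b, maxiter=2000)   # iterative solve of L * phi = b
    if info != 0:
        raise RuntimeError("conjugate gradient did not converge")
    return phi                           # coefficients of the indicator function chi
```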
2. Isosurface extraction
Let the curved surface obtained after Poisson reconstruction be denoted ∂M'. Because the sampling points do not lie strictly on the zero isosurface of the indicator function, the average value of the indicator function over the sampling points is used as the isovalue r:

r = (1 / |S|) Σ_{s∈S} χ(s.p)

so that the reconstructed surface is expressed as:

∂M' = { q ∈ R³ : χ(q) = r }
Triangulation is then carried out with the marching cubes algorithm to complete the isosurface extraction. The basic idea of marching cubes extraction is to find, through the octree, the voxels intersected by the isosurface, determine the intersection faces within those voxels, and connect the intersection faces to construct the curved surface. Because the surface reconstructed by Poisson is a closed curved surface, the invalid part needs to be deleted: a threshold is set according to the density of each point counted during the computation, points whose density is smaller than the threshold are marked as invalid points, and all triangular patches connected to invalid points are deleted.
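For illustration, the Poisson reconstruction and density-based trimming can be sketched with Open3D as follows (an assumption; the file names, octree depth and density quantile are illustrative and not prescribed by the method).

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("face_points.ply")   # fused point cloud (hypothetical file)
pcd.estimate_normals()
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)
densities = np.asarray(densities)
# Mark low-density vertices as invalid and delete the triangles attached to them.
invalid = densities < np.quantile(densities, 0.05)
mesh.remove_vertices_by_mask(invalid)
o3d.io.write_triangle_mesh("face_surface.ply", mesh)
```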
S6: performing smoothing and optimization on the three-dimensional curved surface model by adopting a least square method, filling a cavity and eliminating abnormal points;
After Poisson reconstruction produces the closed curved surface, the surface is trimmed: the mesh density is evaluated and triangular patches whose density is below the threshold are removed. Where many patches are removed, holes appear, so hole-filling processing is required.
After the Poisson reconstruction is completed, the point cloud data from the multiple viewing angles have already been fused; however, because the original data contain abnormal points such as noise points and outliers, subsequent processing is needed to obtain a smoother surface. Here, the surface is smoothed using moving least squares.
Smoothing with the moving least squares method:
First, the region containing the point cloud is subdivided into a mesh, appropriate node basis functions and weight functions are selected, the compact support set of each node is determined, and the node function values are computed; connecting the nodes yields the fitted curved surface. For each point, the least-squares fit is carried out over its compact support. MLS point cloud smoothing projects the original points onto this fitted surface, and the projected points are the smoothed points. The steps of surface smoothing with moving least squares are as follows:
Step1: the input consists of the point set {P} to be smoothed and a smoothing parameter radius r.
Step2: nearest-neighbour search (KD-Tree): for each point p in the point set {P}, its nearest-neighbour set δ_p is searched within the radius r.
Step3: surface fitting: for each point p in the point set {P}, a quadric surface or a plane Π is fitted over its nearest-neighbour set δ_p, the point p is projected onto Π, the projection is recorded as p', and the original point p is replaced by p', which achieves the smoothing. This process requires first computing a matrix inverse for the fit.
Step4: the new smoothed point set P', consisting of all points p', is output.
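For illustration, a minimal sketch of Step1 to Step4 is given below (Python with SciPy assumed; a local plane fit is used instead of a full quadric, which is an assumption made to keep the example short).

```python
import numpy as np
from scipy.spatial import cKDTree

def mls_smooth(points, radius=2.0):
    """points: (N, 3) float array; returns the smoothed point set P'."""
    tree = cKDTree(points)
    smoothed = points.copy()
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, r=radius)    # nearest-neighbour set delta_p
        if len(idx) < 3:
            continue
        nbrs = points[idx]
        centroid = nbrs.mean(axis=0)
        # Least-squares plane: normal = singular vector of the smallest singular value.
        _, _, vt = np.linalg.svd(nbrs - centroid)
        normal = vt[-1]
        # Project p onto the fitted plane; the projection p' replaces p.
        smoothed[i] = p - np.dot(p - centroid, normal) * normal
    return smoothed
```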
S7: and performing texture fusion on the texture image and the three-dimensional curved surface model, and outputting a three-dimensional portrait model.
In multi-view three-dimensional model fusion, texture fragments from multiple cameras are mapped onto the same three-dimensional model patch and their information is superimposed; since these fragments are likely to differ in illumination, shadow, reflection and other characteristics, a corresponding strategy is needed to fuse the texture fragments from the different views.
Unit calibration and system calibration are required before multi-view texture fusion. For a single unit, the intrinsic parameters (principal point, focal length, distortion coefficients, etc.) and extrinsic parameters (translation vector and rotation matrix) of the three-dimensional texture camera are calibrated with a planar calibration method, the procedure being the same as the calibration of a single binocular stereo camera. Inter-unit calibration is obtained by acquiring a three-dimensional model of a calibration object and performing point cloud registration. From these calibrations, the mapping relation between the three-dimensional face model and the multi-view units, i.e. between the three-dimensional model and the texture cameras, is computed.
Suppose the constructed three-dimensional portrait model M contains n geometric triangular patches Δ_j (j = 1, 2, 3, …, n), and that the mapping relations P_i (i = 1, 2, 3, …, m) from the three-dimensional model M to the m texture cameras have been constructed by calibration. Using these mapping relations, a one-to-one correspondence is obtained between each geometric triangular patch Δ_j and the triangular faces T_j^i on the texture images T_i (i = 1, 2, 3, …, m) captured from the different view angles, where T_j^i denotes the j-th texture patch on the i-th texture camera.
As shown in fig. 6, the texture fusion process flow includes:
After the mapping relationship between the three-dimensional model M and the multi-view texture images T_i has been determined, the texture fusion process can be carried out. The concrete implementation steps are as follows:
Step1: the multi-view texture images are preprocessed to reduce the interference of ambient light; the main steps are color white balance and brightness normalization (an illustrative preprocessing sketch follows this step list). The visibility of each geometric triangular patch to the texture cameras at the different view angles is then judged according to the occlusion relations in the three-dimensional model, so as to select valid texture patches.
Step2: and determining the fusion weight of each texture camera according to the normal direction of the geometric triangular patch, and correcting the weight coefficient.
Step3: and performing weighted image fusion by the calculated weight coefficient. Obtaining a corresponding texture triangular surface patch on each texture graph, cutting out each texture triangular surface patch, performing affine transformation to the same shape, performing texture fusion on each texture triangular surface patch through the determined fusion weight of each texture triangular surface patch, and pasting the fused texture surface patches onto the corresponding geometric model triangular surface patches after the geometric transformation.
Step4: and recording the corresponding relation between the texture triangular patch with the maximum composite weight and the triangular patch of the three-dimensional model and the fused texture image as data of the complete portrait model.
Wherein, the flow also comprises the following contents:
(1) Selection of valid texture patches:
For a geometric triangular patch Δ_j ∈ M, let T_j^i denote the image block onto which it is mapped on each texture image. T_j^i is considered valid candidate data if the following conditions are satisfied:
1) The angle between the vector formed by the connecting line of the center of the geometric triangular patch and the optical center of the camera and the normal direction of the patch is not more than pi/2.
2) There is no occlusion relationship between the triangular patch Δ_j and the texture triangle T_j^i onto which it is mapped.
3) The geometric triangular patch needs to be projected into the texture triangle after affine transformation.
After the mapping-validity judgment, the set of valid three-dimensional texture mapping relations {T_j^i} of the geometric triangular patch Δ_j is obtained.
(2) Visibility determination for geometric triangular patches
When selecting the valid texture mapping relations {T_j^i}, the decision is based on the visibility of each triangular patch in each texture camera. The visibility of a triangular patch in a given texture camera is estimated according to the following steps:
step1: and constructing a two-dimensional recording matrix R with the same size as the camera image, and initializing all matrix elements to be + ∞.
Step2: traversing each triangular patch, and calculating the two-dimensional projection triangle delta of each triangular patch in the camera j Calculating the center of the triangular patch to the optical center O of the camera i And will lie in the matrix at the projection triangle
Figure GDA0003859679200000304
The elements in the list are updated according to the following criteria:
if l is j ∈Δ j And d < R j (j =1,2,3, \8230;, n), then R j =d
l j Two-dimensional coordinates representing the jth element of the matrix R, R j Denotes the value of the jth element in R, l j ∈Δ j Indicating that the elements of the texture triangle in the plane of the matrix lie in the projection triangle
Figure GDA0003859679200000305
And (4) the following steps.
Step3: traversing all the triangular patches again, judging the visibility of the triangular patches, and calculating the two-dimensional projection triangle of each triangular patch in the camera
Figure GDA0003859679200000306
Calculating the center of the triangular patch to the optical center O of the camera i Is measured by the distance d. If matrix elements R with subscript j exist in the matrix R j Satisfy l j ∈Δ j And such that d > R i ,l j Representing the two-dimensional spatial position of the jth element in the matrix, then the triangular patch is visible in the camera; otherwise, it is invisible.
The visibility judgment directly affects how many texture-image triangles participate in the texture fusion, and a correct judgment makes the texture fusion result more faithful.
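For illustration, a sketch of the two-pass visibility test over the recording matrix R is given below (Python with NumPy and OpenCV assumed). It uses the conventional z-buffer criterion, namely that a patch is visible where its distance equals the recorded minimum; this reading of the comparison direction, and the use of cv2.fillConvexPoly to rasterize each projected triangle, are assumptions for this sketch.

```python
import numpy as np
import cv2

def visible_patches(proj_tris, dists, image_shape, eps=1e-3):
    """proj_tris: (n, 3, 2) projected triangles in pixels; dists: (n,) distances
    from each patch center to the camera optical center; image_shape: (H, W)."""
    record = np.full(image_shape, np.inf, dtype=np.float64)   # recording matrix R
    # Pass 1: keep the smallest distance seen at every pixel.
    for tri, d in zip(proj_tris, dists):
        mask = np.zeros(image_shape, dtype=np.uint8)
        cv2.fillConvexPoly(mask, np.round(tri).astype(np.int32), 1)
        sel = mask.astype(bool)
        record[sel] = np.minimum(record[sel], d)
    # Pass 2: a patch is visible if it is (within eps) the closest patch
    # somewhere inside its own projection.
    visible = np.zeros(len(proj_tris), dtype=bool)
    for j, (tri, d) in enumerate(zip(proj_tris, dists)):
        mask = np.zeros(image_shape, dtype=np.uint8)
        cv2.fillConvexPoly(mask, np.round(tri).astype(np.int32), 1)
        sel = mask.astype(bool)
        visible[j] = np.any(d <= record[sel] + eps)
    return visible
```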
(3) Multi-view fused texture weights and fusion calculations
After the valid mappings {T_j^i} of each geometric triangular patch onto the texture images have been obtained, the fusion weight of each texture image block needs to be calculated. The angle between the vector from the center of the geometric triangular patch to the camera optical center and the normal direction of the patch is used as the basis for the weight calculation, as follows (a condensed code sketch is given at the end of this subsection):
step1: calculating each texture triangular patch
Figure GDA0003859679200000312
Fusion weight of
Figure GDA0003859679200000313
The following formula is calculated
Figure GDA0003859679200000314
Wherein
Figure GDA0003859679200000315
Is a triangular patch Δ j Center to texture camera cam i The vector formed by the optical center coordinates and the triangular patch delta j The angle formed by the normal of (a) can be obtained by the following formula
Figure GDA0003859679200000316
Wherein,
Figure GDA0003859679200000317
representing a triangular patch Δ j The center of (a) to the texture camera optical center,
Figure GDA0003859679200000318
representing a triangular patch Δ j The normal vector of (a). The normal included angle of the triangular patch is shown in fig. 7.
Step2: traversing each triangular patch, and weighting each camera
Figure GDA0003859679200000319
Smoothing and normalization are performed. Specifically, a patch set X of a certain patch neighboring in a three-dimensional space is found, and a weight value is updated
Figure GDA00038596792000003110
Figure GDA00038596792000003111
The subscript j and the superscript i respectively denote the jth triangular patch and the ith camera, X denotes an index set of neighboring triangular patches, | X | is the potential of the set, that is, the number of neighboring triangular patches. Then, the triangular patch is traversed again, and the weight is compared
Figure GDA00038596792000003112
Carrying out normalization treatment:
Figure GDA0003859679200000321
step3: and traversing each triangular patch, and performing texture fusion according to the weight. Performing affine transformation on the texture triangle of the triangular patch under each camera view angle, and then performing weighted summation according to the formula:
Figure GDA0003859679200000322
Figure GDA0003859679200000323
representing a texture triangle that has undergone an affine transformation,
Figure GDA0003859679200000324
representing the merged texture triangles. Finally, the merged texture triangles
Figure GDA0003859679200000325
And mapping the three-dimensional texture to a triangular face to obtain the real three-dimensional texture of the face. And finishing the three-dimensional data and the texture collage of the three-dimensional reconstruction full face.
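For illustration, a condensed Python/NumPy sketch of Step1 to Step3 follows: the per-camera weight is taken as the clamped cosine of the angle between the patch normal and the patch-center-to-optical-center vector, the weights are smoothed over neighbouring patches and normalized, and the affine-warped texture triangles are blended by weighted summation. The averaging used for smoothing, and all array shapes and names, are illustrative assumptions rather than the exact formulas of the source.

```python
import numpy as np

def fusion_weights(centers, normals, cam_centers, neighbors):
    """centers, normals: (n, 3); cam_centers: (m, 3); neighbors: list of index lists."""
    view = cam_centers[None, :, :] - centers[:, None, :]          # (n, m, 3)
    view /= np.linalg.norm(view, axis=-1, keepdims=True)
    nrm = normals / np.linalg.norm(normals, axis=-1, keepdims=True)
    w = np.clip(np.einsum('nmk,nk->nm', view, nrm), 0.0, None)    # cos(theta), clamped
    # Smooth each patch's weights over its neighbouring patches (Step2, assumed averaging).
    w_s = np.array([(w[j] + w[nbrs].sum(axis=0)) / (1 + len(nbrs))
                    for j, nbrs in enumerate(neighbors)])
    # Normalize over cameras so that the weights of each patch sum to one.
    return w_s / np.maximum(w_s.sum(axis=1, keepdims=True), 1e-12)

def fuse_textures(warped_tris, weights):
    """warped_tris: (m, h, w, 3) affine-warped texture triangles of one patch;
    weights: (m,) normalized weights of that patch."""
    return np.tensordot(weights, warped_tris, axes=1)             # weighted sum
```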
Example 3
A non-inductive three-dimensional face acquisition and reconstruction system comprises a cloud server and an acquisition device.
The cloud server can execute the non-inductive three-dimensional face reconstruction method in the embodiments 1 and 2;
as shown in fig. 8, the collecting device includes a front collecting unit 1, a side collecting unit 2, a communication module, a supporting column 4, a base 3, and a switch 5.
The communication module is in communication connection with the cloud server and is used for sending the acquired image data to the cloud server for processing.
A controller, a heat dissipation module and a communication module are arranged in the base 3; the heat dissipation module is used for cooling the controller; the communication module is used for exchanging data with external equipment. And simultaneously, at least 3 adjustable rubber foot pads 11 are arranged below the base 3 and used for supporting the system.
The switch 5 is arranged on the base 3 and is electrically connected with the controller. The supporting column 4 is a hollow circular tube; the front collection unit 1 and the side collection units 2 are mounted on the base 3 through the supporting columns 4 and are electrically connected to the controller through the hollow interior of the columns, realizing a concealed, protected cable routing.
The front acquisition unit 1 and the side acquisition unit 2 are connected through a reinforcing beam connecting rod 10; during installation, the reinforcing beam connecting rod 10 can rotate around the supporting upright 4 connected with the front face acquisition unit 1.
The number of the side surface acquisition units 2 is two, the two side surface acquisition units are symmetrically distributed on two sides of the front surface acquisition unit 1, the optical axes of the front surface acquisition unit 1 and the two side surface acquisition units 2 are intersected in the right front of the front surface acquisition unit 1, and the included angle range of the optical axes is [25 degrees, 35 degrees ].
As shown in fig. 9, the side surface collecting unit 2 includes an infrared camera 6, an infrared projection galvanometer 7, a texture camera 8, a window glass 12, and a metal mounting bracket 9.
The two infrared cameras 6 are respectively arranged at the top and the bottom of the side collecting unit 2 and project infrared noninductive high-speed structured light stripe coded images. The texture camera 8 is arranged in the middle of the two infrared cameras 6; the optical axis of the texture camera 8 is parallel to the horizontal direction, and the optical axis of the texture camera 8 intersects the optical axes of the two infrared cameras 6 at a point. The window glass 12 is for preventing dust from entering the camera lens.
As shown in fig. 10, the side surface acquisition unit 2 is a metal casing with ventilation convection holes, and the infrared camera 6 and the texture camera 8 are fixed in the metal casing through a metal mounting frame 9.
As shown in fig. 11, the angles of the acquisition-device system architecture are optimized. After practical verification under the basic design configuration, orthogonal experiments were carried out on three parameters, namely the baseline S, the target distance L and the angle of the left and right side acquisition units relative to the baseline, followed by spatial-layout debugging tests; the quality and quantity of the point cloud modelled after calibration were evaluated, taking into account the number of point-cloud triangular faces and coverage of the full face (continuously from the left ear to the right ear). The optimal system architecture obtained is: target distance L = 560 mm; baseline S = 650 mm; angle between the optical axes of the left and right cameras and the baseline, i.e. angle α = 59 ± 1° between the optical axis of a side collection unit and the optical axis of the front collection unit; the upper and lower infrared cameras 6 within a side collection unit 2, with a baseline of about 135 mm, inclined at β = 14° relative to the horizontal; the structured light is projected from the centre with its principal optical axis emerging horizontally.
Example 4
This embodiment is a specific workflow of the system for acquiring and reconstructing a three-dimensional non-sensory face described in embodiment 3, and a logical layered architecture of the system is shown in fig. 12.
The acquisition device is responsible for gathering image data; after acquisition is completed, the data are uploaded through the communication module to the cloud server, which performs the cloud computing for image processing and three-dimensional reconstruction, and the results are then sent back over the network for display on other local display devices and apps. This forms a remote distributed processing mode: the local side is responsible for acquisition and display while the big-data processing is completed on the cloud server, achieving high-speed real-time performance with greatly increased computing power. One cloud server can be connected to multiple acquisition and display devices at multiple sites, optimizing resource allocation and greatly reducing cost.
Using 3D Studio12 together with a standard sphere gauge, multiple measurement tests were carried out with the non-inductive three-dimensional face acquisition and reconstruction system to obtain several groups of test data, and an error statistical analysis was performed on the measurements. As shown in figure 13, the average test accuracy obtained is 0.15 ± 0.05 mm and the error follows a Gaussian distribution, showing that the method can effectively complete three-dimensional face data acquisition.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (11)

1. A method for reconstructing a non-sensory three-dimensional face is characterized by comprising the following steps:
s1: calibrating the target image by adopting a circular ring type double-circle center calibration plate, and calibrating camera parameters and image surface space parameters;
s2: projecting a structured light image to a target face according to the camera parameters and the image plane space parameters and collecting reflected structured light image data and texture images; the structured light image is an infrared non-inductive structured light stripe coding image;
s3: performing dephasing on the structured light image data to obtain depth information of a target face;
s4: generating a point cloud by adopting an ICP point cloud algorithm to obtain a three-dimensional point cloud model;
s5: performing global fusion and curved surface reconstruction on the three-dimensional point cloud model by adopting a hidden function method to obtain a three-dimensional curved surface model;
s6: performing smoothing treatment and optimization on the three-dimensional curved surface model by adopting a least square method, filling the cavity and eliminating abnormal points;
s7: performing texture fusion on the texture image and the three-dimensional curved surface model, and outputting a three-dimensional portrait model;
specifically, the step S1 includes the following steps:
s11: calibrating internal and external parameters of the camera through a circular ring type double-circle-center calibration plate;
s12: calibrating phase fitting parameters of six planes in space through six space postures of a calibration plate in a measurement range of a measured object;
s13: performing inter-unit calibration of the full-face camera, applying a standard three-dimensional object, simultaneously shooting through a left unit and a right unit, performing point cloud matching calculation on three-dimensional corresponding points, and reversely solving a baseline, camera and image plane space parameters according to the single camera system parameters obtained in the step S12;
specifically, the step S2 is to project structured light stripes with three frequencies on the left and right sides, the camera performs synchronous acquisition, and each frequency acquires N pictures with a phase difference of 2 pi/N in sequence, where N is greater than or equal to 3, and a phase function phi (x, y) of each group of deformed stripe images in the pictures is:
φ(x, y) = arctan [ Σ_{n=1}^{N} I_n(x, y) · sin(2πn/N) / Σ_{n=1}^{N} I_n(x, y) · cos(2πn/N) ]
I(x, y) = R(x, y) · [ A(x, y) + B(x, y) · cos φ(x, y) ]
I_n(x, y) is the intensity function of the n-th fringe image, R(x, y) is the distribution of the object surface reflectivity, A(x, y) is the background light intensity, φ(x, y) is the phase function of the fringe deformation, and B(x, y)/A(x, y) represents the contrast of the fringes;
specifically, the step S3 includes the following steps:
s31: calculating a truncated phase map φ_w(m, n, t) of each set of the deformed fringe images, where m, n represent the spatial extent of the stripes and t = 1, 2 represents the time variable;
s32: calculating the truncation phase difference of the two adjacent groups of the truncation phase diagrams at the same pixel point and the discontinuity number based on 2 pi;
s33: according to the formula φ_u(v_k) = U[φ_w(v_k), v·φ_u(v_{k-1})], unwrapping the phase of the truncated phase map to obtain the depth information of the target face, wherein the slope v of the least-squares fit of the unwrapped phase is given by a formula that appears only as an equation image in the source, with k = 1, 2, 3, …, s, where s represents the maximum period of the fringe template, and the starting phase value of the unwrapped truncated phase map is φ_u(1) = φ_w(1);
Specifically, the step S4 includes the following steps:
s41: performing coarse registration by adopting a closest point iterative algorithm according to the depth information to obtain a point cloud transformation initial value;
s42: performing fine registration by adopting an ICP (iterative closest point) algorithm according to the initial transformation value to obtain a point set of a plurality of point clouds;
s43: optimizing the point set by adopting a global optimization algorithm;
s44: performing point cloud fusion on the point sets by adopting voxel filtering to obtain a complete three-dimensional point cloud model with uniform density;
specifically, the step S5 includes:
s51: constructing an integral relation between a sampling point and an indicating function through a gradient relation of a point set in the three-dimensional point cloud model, generating a vector field of the point set through the integral relation by using a blocking method, meanwhile, calculating an approximate indicating function gradient field to construct a poisson equation, and solving a poisson equation approximate solution by using matrix iteration to obtain a closed curved surface;
s52: triangularization is carried out by adopting a mobile cube algorithm to complete isosurface extraction, and the closed curved surface is converted into an isosurface model consisting of triangular surface patches;
s53: calculating the density of each point in the model, marking a certain point as an invalid point when the density of the point is smaller than a density threshold value, and deleting a triangular surface connected with the invalid point to obtain an optimized three-dimensional curved surface model;
specifically, the step S7 includes:
s71: calibrating internal parameters and external parameters of a texture camera by adopting a plane calibration method to obtain a mapping relation between each triangular patch in the three-dimensional curved surface model and the texture image acquired by the texture camera in multiple visual angles;
wherein the internal parameters comprise principal point, focal length, distortion coefficient, and the external parameters comprise translation matrix and rotation matrix;
s72: carrying out color white balance and brightness normalization pretreatment on the texture image;
s73: performing visibility judgment on the triangular surface patches, and selecting the corresponding texture images to obtain a plurality of corresponding texture triangular surface patches;
s74: calculating a fusion weight coefficient according to the normal direction of the triangular patch, and performing texture fusion on the texture triangular patches according to the fusion weight coefficient;
s75: replacing the corresponding triangular patch in the three-dimensional curved surface model with the fused texture triangular patch;
s76: and outputting the three-dimensional portrait model after all the triangular patches of the three-dimensional curved surface model are subjected to texture fusion.
2. The method for reconstructing a non-sensory three-dimensional face according to claim 1, wherein the relationship between the coordinates (X_W, Y_W, Z_W) of a point P of the circular dual-center calibration plate in the world coordinate system and the coordinates (u, v) of its projected point on the camera imaging plane in step S11 is:
Z_c · [u, v, 1]^T = K · [R | T] · [X_W, Y_W, Z_W, 1]^T
wherein K is the camera intrinsic parameter matrix and, in the camera extrinsic parameter matrix, R is the rotation matrix and T is the translation vector.
3. The method for reconstructing the three-dimensional human face according to the claim 1, wherein the ICP algorithm of the step S42 adopts the following registration conditions:
E(R, T) = min Σ_{i=1}^{n} ‖ Q_i - (R·P_i + T) ‖²
wherein {P_i | i = 1, …, n} represents the first set of points in space, {Q_i | i = 1, …, n} represents the second set of points in space, R is the rotation matrix and T is the translation vector.
4. The method for reconstructing the three-dimensional human face according to the claim 1, wherein the step S51 comprises:
s511: inputting the three-dimensional point cloud model, wherein each point of the model comprises three-dimensional coordinates and an internal normal direction;
s512: establishing an octree according to the three-dimensional point cloud model, wherein each node of the octree is added with a node function, and a normal field formed by the internal normal can be represented as linear summation of the node functions;
s513: constructing a Poisson equation according to the octree, wherein the Poisson equation for Φ is:
LΦ = b
wherein b, with components b_o, is the divergence vector, L is the Laplacian matrix whose order is consistent with the number of leaf nodes of the octree, F_o is the description function of the point-cloud subspace of octree node o, o is a leaf node of the octree, p is a child leaf node of o, and v_o is the normal vector of the node in octree node o;
s514: and (5) solving the approximate solution of the Poisson equation by using matrix iteration to obtain the reconstructed closed curved surface.
5. The method for reconstructing the three-dimensional human face without sense of claim 1, wherein the least square method in the step S6 comprises the following steps:
s61: inputting all point sets to be smoothed and smoothing parameter radiuses;
s62: traversing each point in each point set to be smoothed, and searching the nearest point set in the range of the radius of the smoothing parameter;
s63: performing surface fitting on each point in the point set to be smoothed and the corresponding nearest point set, projecting the point onto the curved surface, and replacing the coordinates of the point in the point set to be smoothed with the coordinates of the corresponding projection point;
s64: and outputting the smoothed point set of the point set to be smoothed.
6. The method according to claim 1, wherein the selection of the texture image in step S73 comprises the following conditions:
1) An included angle between a vector formed by a connecting line of the center of the triangular patch and the optical center of the texture camera and the normal direction of the triangular patch is not more than 90 degrees;
2) The triangular surface patch and the mapped texture triangular surface have no shielding relation;
3) The triangular patch needs to be projected into the texture triangle after affine transformation.
7. The method of claim 1, wherein the visibility determination in step S73 comprises the following steps:
s731: constructing a two-dimensional recording matrix with the same size as the texture image, wherein the initial value of all matrix elements is + ∞;
s732: traversing each triangular patch, acquiring a two-dimensional projection triangle of the triangular patch in the texture camera, calculating the distance d from the center of the triangular patch to the optical center of the texture camera, and updating elements in the matrix, which are positioned in the projection triangle;
wherein, the updating mode is as follows: when the matrix element is positioned in the projection triangle and the value of the matrix element is greater than d, replacing the value of the matrix element with d;
s733: traversing all the triangular patches, acquiring a two-dimensional projection triangle of each triangular patch in the texture camera, calculating the distance d from the center of the triangular patch to the optical center of the texture camera, and judging the visibility of the triangular patch;
wherein the visibility determination condition is: if matrix elements exist in the matrix R and meet the condition that the matrix elements are positioned in the projection triangle and the value of the matrix elements is smaller than d, the triangular patch is visible in the camera; otherwise, it is invisible.
8. The method for reconstructing the three-dimensional human face according to the claim 1, wherein the step S74 comprises:
s741: calculating a fusion weight w_j^i for each texture triangular patch; the fusion weight is computed from the included angle θ_j^i between the vector formed by the coordinates from the center of the texture triangular patch to the optical center of the texture camera and the normal of the texture triangular patch (the weight formula appears only as an equation image in the source), wherein i is the serial number of the texture camera and j is the serial number of the texture triangular patch;
s742: traversing each texture triangular patch, acquiring the set X of texture triangular patches adjacent to it in three-dimensional space, and smoothing the fusion weight over this set according to the formula given in the source (as an equation image), wherein |X| is the cardinality of the set, namely the number of adjacent triangular patches;
s743: traversing each texture triangular patch and normalizing the fusion weights (the normalization formula appears only as an equation image in the source);
s744: traversing each texture triangular patch, performing affine transformation on the texture triangle of the texture triangular patch under each texture camera view angle, performing weighted summation to fuse the texture triangular patches, and mapping the fused texture triangular patches to the corresponding triangular patches;
the formula of the weighted summation is:
T*_j = Σ_{i=1}^{m} w_j^i · T'_j^i
wherein T*_j is the fused texture triangle and T'_j^i is an affine-transformed texture triangle.
9. A non-inductive three-dimensional face acquisition and reconstruction system, comprising a cloud server and an acquisition device, wherein the cloud server is capable of executing the non-inductive three-dimensional face reconstruction method according to any one of claims 1 to 8;
the collecting device comprises a front collecting unit, a side collecting unit, a communication module, a supporting upright post and a base;
a controller and a heat dissipation module are arranged in the base, and the controller is electrically connected with the front acquisition unit and the side acquisition unit;
the communication module is in communication connection with the cloud server and is used for sending the acquired image data to the cloud server for processing;
the number of the side surface acquisition units is two, the two side surface acquisition units are symmetrically distributed on two sides of the front surface acquisition unit, and the optical axes of the front surface acquisition unit and the two side surface acquisition units are intersected in front of the front surface acquisition unit;
the number of the supporting upright posts is 3; the front collecting unit and the side collecting unit are respectively arranged on the base through the supporting upright post;
the front collecting unit is connected with the side collecting unit through a reinforcing beam connecting rod; in the installation process, the reinforcing beam connecting rod can rotate around the supporting upright post connected with the front acquisition unit.
10. The system according to claim 9, wherein the side surface collecting unit comprises an infrared structured light collecting camera and a texture camera.
11. The system of claim 10, wherein the ir structured light collection camera projects ir non-sensible high-speed structured light stripe encoded image.
CN202011267696.6A 2020-11-13 2020-11-13 Non-inductive three-dimensional face reconstruction method and acquisition reconstruction system Active CN112308963B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011267696.6A CN112308963B (en) 2020-11-13 2020-11-13 Non-inductive three-dimensional face reconstruction method and acquisition reconstruction system

Publications (2)

Publication Number Publication Date
CN112308963A CN112308963A (en) 2021-02-02
CN112308963B true CN112308963B (en) 2022-11-08



Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103247074A (en) * 2013-04-23 2013-08-14 苏州华漫信息服务有限公司 3D (three dimensional) photographing method combining depth information and human face analyzing technology
CN104794728A (en) * 2015-05-05 2015-07-22 成都元天益三维科技有限公司 Method for reconstructing real-time three-dimensional face data with multiple images
CN104809457A (en) * 2015-05-26 2015-07-29 牟永敏 Three-dimensional face identification method and system based on regionalization implicit function features
WO2019052709A1 (en) * 2017-09-13 2019-03-21 Siemens Healthcare Gmbh Improved 3-d vessel tree surface reconstruction
CN109919876A (en) * 2019-03-11 2019-06-21 四川川大智胜软件股份有限公司 A kind of true face model building of three-dimensional and three-dimensional true face photographic system
CN110363858A (en) * 2019-06-18 2019-10-22 新拓三维技术(深圳)有限公司 A kind of three-dimensional facial reconstruction method and system
CN111325828A (en) * 2020-01-21 2020-06-23 中国电子科技集团公司第五十二研究所 Three-dimensional face acquisition method and device based on three-eye camera

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108648270B (en) * 2018-05-12 2022-04-19 西北工业大学 Unmanned aerial vehicle real-time three-dimensional scene reconstruction method capable of realizing real-time synchronous positioning and map construction


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Front2Back: Single View 3D Shape Reconstruction via Front to Back Prediction;Yuan Yao等;《2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)》;20200805;528-537 *
基于单目视频序列的非刚性动态目标三维重建算法研究;刘洋;《中国博士学位论文全文数据库 信息科技辑》;20190115;I138-124 *
基于图像信息的点云优化研究;尹婕;《中国优秀博硕士学位论文全文数据库(硕士)信息科技辑》;20180215;I138-1394 *
基于模板的人脸点云补洞方法;孙晓斐等;《现代计算机(专业版)》;20180410;59-63 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant