CN108711185B - Three-dimensional reconstruction method and device combining rigid motion and non-rigid deformation - Google Patents

Three-dimensional reconstruction method and device combining rigid motion and non-rigid deformation

Info

Publication number
CN108711185B
Authority
CN
China
Prior art keywords
rigid
model
dimensional
motion
vertex
Legal status
Expired - Fee Related
Application number
CN201810460091.5A
Other languages
Chinese (zh)
Other versions
CN108711185A (en)
Inventor
刘烨斌 (Liu Yebin)
戴琼海 (Dai Qionghai)
徐枫 (Xu Feng)
方璐 (Fang Lu)
Current Assignee
Tsinghua University
Shenzhen Graduate School Tsinghua University
Original Assignee
Tsinghua University
Shenzhen Graduate School Tsinghua University
Priority date: 2018-05-15
Filing date: 2018-05-15
Publication date: 2021-05-28
Application filed by Tsinghua University, Shenzhen Graduate School Tsinghua University
Priority to CN201810460091.5A
Publication of CN108711185A
Priority to PCT/CN2019/086889 (WO2019219012A1)
Application granted
Publication of CN108711185B
Status: Expired - Fee Related

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a three-dimensional reconstruction method and device combining rigid motion and non-rigid deformation. The method comprises the following steps: shooting a target object with a depth camera to obtain a single depth image; extracting a three-dimensional skeleton from the depth point cloud with a three-dimensional skeleton extraction algorithm; acquiring matching point pairs between the three-dimensional point cloud and the vertices of the reconstructed model; establishing an energy function according to the matching point pairs and the three-dimensional skeleton information, solving the non-rigid motion position transformation parameters of each vertex on the reconstructed model, and optimizing the skeleton parameters of the object; performing GPU optimization solving of the energy function to obtain the non-rigid deformation of each surface vertex, and deforming the reconstructed three-dimensional model of the previous frame according to the solution so that the deformed model is aligned with the three-dimensional point cloud of the current frame; and obtaining an updated model of the current frame to enter the iteration of the next frame. The method effectively improves the real-time performance, robustness and accuracy of reconstruction, is highly extensible, and is simple to implement.

Description

Three-dimensional reconstruction method and device combining rigid motion and non-rigid deformation
Technical Field
The invention relates to the technical field of computer vision and computer graphics, and in particular to a three-dimensional reconstruction method and device combining rigid motion and non-rigid deformation.
Background
Three-dimensional reconstruction of dynamic objects is a key problem in the fields of computer graphics and computer vision. High-quality three-dimensional models of dynamic objects such as human bodies, animals, faces and hands have broad application prospects and important value in film and television entertainment, sports, gaming, virtual reality and other fields. However, high-quality three-dimensional models are usually acquired with expensive laser scanners or multi-camera array systems, and although the accuracy is high, there are obvious disadvantages. First, the object must remain absolutely still during scanning, and even slight movement causes obvious errors in the scanning result. Second, such systems are expensive to build, are difficult to popularize in the daily life of ordinary people, and are typically used only by large companies or national survey departments. Third, they are slow: reconstructing a single three-dimensional model often takes at least 10 minutes to several hours, and the cost of reconstructing a dynamic model sequence is even higher.
From a technical point of view, one class of existing reconstruction methods solves for the rigid motion information of the object in advance to obtain an approximation of the object, and then reconstructs the non-rigid surface motion information; such methods, however, require that a three-dimensional model of the object's key frame be obtained in advance. Another class fuses surface observations frame by frame and can achieve template-free dynamic three-dimensional reconstruction, but using only non-rigid surface deformation makes tracking and reconstruction less robust.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, one object of the invention is to propose a three-dimensional reconstruction method combining rigid motion and non-rigid deformation, which effectively improves the real-time performance, robustness and accuracy of reconstruction, is highly extensible, and is simple to implement.
Another object of the present invention is to propose a three-dimensional reconstruction device combining rigid motion and non-rigid deformation.
In order to achieve the above object, an embodiment of one aspect of the present invention provides a three-dimensional reconstruction method combining rigid motion and non-rigid deformation, including the following steps: shooting a target object with a depth camera to obtain a single depth image; extracting a three-dimensional skeleton from the depth point cloud with a three-dimensional skeleton extraction algorithm; converting the single depth image into a three-dimensional point cloud, and acquiring matching point pairs between the three-dimensional point cloud and the vertices of the reconstructed model; establishing an energy function according to the matching point pairs and the three-dimensional skeleton information, solving the non-rigid motion position transformation parameters of each vertex on the reconstructed model, and optimizing the object skeleton parameters; performing GPU (Graphics Processing Unit) optimization solving of the energy function to obtain the non-rigid deformation of each surface vertex, and deforming the reconstructed three-dimensional model of the previous frame according to the solution so that the deformed model is aligned with the current-frame three-dimensional point cloud; and fusing the current-frame three-dimensional point cloud with the deformed model to obtain an updated model of the current frame, which enters the iteration of the next frame.
According to the three-dimensional reconstruction method combining rigid motion and non-rigid deformation of the embodiment of the invention, the three-dimensional surface information of the dynamic object is fused frame by frame through a real-time non-rigid alignment method, achieving robust real-time dynamic three-dimensional reconstruction without a first-frame key-frame three-dimensional template. The method effectively improves the real-time performance, robustness and accuracy of reconstruction, is highly extensible, and is simple to implement.
In addition, the three-dimensional reconstruction method combining rigid motion and non-rigid deformation according to the above embodiment of the present invention may further have the following additional technical features:
further, in an embodiment of the present invention, the transforming the single depth image into a three-dimensional point cloud further includes: projecting the single depth image to a three-dimensional space through an internal reference matrix of a depth camera to generate the three-dimensional point cloud, wherein a depth map projection formula is as follows:
p(u, v) = d(u, v) · K⁻¹ · (u, v, 1)ᵀ

where u, v are pixel coordinates, d(u, v) is the depth value at the pixel (u, v) on the depth image, and K is the internal reference matrix of the depth camera.
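For illustration only, this back-projection can be sketched in Python/NumPy as follows; the function name and the Kinect-like intrinsic values fx, fy, cx, cy are assumptions made for the example, not values from the patent:

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Back-project a depth image d(u, v) into a 3D point cloud.

    Implements p(u, v) = d(u, v) * K^{-1} * (u, v, 1)^T for the standard
    pinhole intrinsic matrix K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]].
    depth: (H, W) array of depth values in meters (0 = no measurement).
    Returns an (N, 3) array with one 3D point per valid pixel.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx  # X = (u - cx) * d / fx
    y = (v - cy) * z / fy  # Y = (v - cy) * d / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # keep only pixels with a depth reading

# Hypothetical Kinect-like intrinsics, for the sake of the example.
cloud = backproject_depth(np.random.rand(424, 512).astype(np.float32),
                          fx=365.0, fy=365.0, cx=256.0, cy=212.0)
```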
Further, in one embodiment of the present invention, the energy function is:
E_t = λ_n·E_n + λ_s·E_s + λ_j·E_j + λ_g·E_g + λ_b·E_b

where E_t is the total energy term, E_n is the non-rigid surface deformation constraint term, E_s is the rigid skeleton motion constraint term, E_j is the rigid skeleton identification constraint term, E_g is the local rigid motion constraint term, E_b is the consistency constraint term between the rigid skeleton motion and the non-rigid deformation, and λ_n, λ_s, λ_j, λ_g and λ_b are the weight coefficients of the corresponding constraint terms.
Further, in one embodiment of the present invention,

E_n = Σ_{c_i ∈ C} | n̂_{v_i}ᵀ (v̂_i − u_i) |²

E_s = Σ_{c_i ∈ C} | ñ_{v_i}ᵀ (ṽ_i − u_i) |²

E_b = Σ_i || v̂_i − ṽ_i ||²

E_g = Σ_i Σ_{j ∈ N(i)} || T_{v_i} v_j − T_{v_j} v_j ||²

where u_i denotes the position coordinates of the three-dimensional point cloud in a matching point pair and c_i denotes the i-th element of the matching point pair set C. In the non-rigid surface deformation constraint term, v̂_i and n̂_{v_i} denote the vertex coordinates and normal of the model after being driven by the non-rigid deformation; in the rigid skeleton motion constraint term, ṽ_i and ñ_{v_i} denote the vertex coordinates and normal of the model after being driven by the skeleton motion of the object; in the consistency constraint term, v̂_i and ṽ_i respectively denote the model vertex coordinates driven by the rigid motion of the target and those driven by the motion estimated from the three-dimensional skeleton. In the local rigid motion constraint term, i denotes the i-th vertex on the model, N(i) denotes the set of vertices adjacent to the i-th vertex on the model, T_{v_i} and T_{v_j} denote the driving functions of the known non-rigid motion on the model surface vertices v_i and v_j, and T_{v_i} v_j and T_{v_j} v_j denote the position transformation effects on v_j of the non-rigid motions acting on v_i and v_j, respectively.
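To make the structure of these terms concrete, the following sketch evaluates the residual sums of E_n, E_s and E_b, assuming the matched arrays have already been assembled. It is a minimal illustration under those assumptions, not the patent's GPU implementation, and the function and weight names are invented for the example:

```python
import numpy as np

def energy_terms(u, v_hat, n_hat, v_til, n_til):
    """Data and consistency terms of the energy function.

    u     : (N, 3) matched point-cloud positions u_i
    v_hat : (N, 3) model vertices driven by the non-rigid deformation
    n_hat : (N, 3) normals of the non-rigidly deformed vertices
    v_til : (N, 3) model vertices driven by the skeleton motion
    n_til : (N, 3) normals of the skeleton-driven vertices
    """
    # E_n: point-to-plane distance of the non-rigidly deformed model to the cloud
    E_n = np.sum(np.einsum('ij,ij->i', n_hat, v_hat - u) ** 2)
    # E_s: point-to-plane distance of the skeleton-driven model to the cloud
    E_s = np.sum(np.einsum('ij,ij->i', n_til, v_til - u) ** 2)
    # E_b: consistency between the two deformations of the same vertex
    E_b = np.sum(np.sum((v_hat - v_til) ** 2, axis=1))
    return E_n, E_s, E_b

# Hypothetical weights; the patent does not specify their values.
lam_n, lam_s, lam_b = 1.0, 1.0, 1.0
```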
Further, in an embodiment of the present invention, the model vertices are driven by the surface non-rigid deformation and the rigid skeleton motion of the object, where the calculation formulas are:

v̂_i = T_{v_i} v_i,  n̂_{v_i} = rot(T_{v_i}) n_{v_i}

ṽ_i = ( Σ_{j ∈ B(v_i)} α_{i,j} T_{b_j} ) v_i,  ñ_{v_i} = ( Σ_{j ∈ B(v_i)} α_{i,j} rot(T_{b_j}) ) n_{v_i}

where T_{v_i} is the deformation matrix acting on vertex v_i, comprising a rotation part and a translation part; rot(T_{v_i}) is the rotation part of that deformation matrix; B(v_i) is the set of bones that have a driving action on vertex v_i; α_{i,j} is the weight of the driving action of the j-th bone on the i-th model vertex, representing the strength of the bone's driving action on the vertex; T_{b_j} is the motion deformation matrix of the j-th bone itself; and rot(T_{b_j}) is the rotation part of that deformation matrix.
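A minimal NumPy sketch of the two driving modes (per-vertex non-rigid deformation and linear-blend skeleton skinning) follows; all inputs are assumed to be given, and the helper names are illustrative only:

```python
import numpy as np

def drive_nonrigid(T_vi, v_i, n_i):
    """v̂_i = T_{v_i} v_i and n̂_{v_i} = rot(T_{v_i}) n_{v_i} for one vertex.

    T_vi is a 4x4 rigid transform; rot(T) is its upper-left 3x3 block.
    """
    v_hat = (T_vi @ np.append(v_i, 1.0))[:3]
    n_hat = T_vi[:3, :3] @ n_i
    return v_hat, n_hat

def drive_skeleton(T_bones, alpha_i, v_i, n_i):
    """ṽ_i = (Σ_j α_{i,j} T_{b_j}) v_i, blending the bones that drive v_i.

    T_bones: (J, 4, 4) per-bone motion matrices T_{b_j} for the bones in B(v_i).
    alpha_i: (J,) skinning weights α_{i,j} of those bones for vertex i.
    """
    T_blend = np.tensordot(alpha_i, T_bones, axes=1)  # Σ_j α_{i,j} T_{b_j}
    v_til = (T_blend @ np.append(v_i, 1.0))[:3]
    n_til = T_blend[:3, :3] @ n_i
    return v_til, n_til
```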
In order to achieve the above object, another embodiment of the present invention provides a three-dimensional reconstruction apparatus combining rigid motion and non-rigid deformation, including: a shooting module, configured to shoot a target object with a depth camera to obtain a single depth image; an extraction module, configured to extract a three-dimensional skeleton from the depth point cloud through a three-dimensional skeleton extraction algorithm; a matching module, configured to convert the single depth image into a three-dimensional point cloud and acquire matching point pairs between the three-dimensional point cloud and the vertices of the reconstructed model; a calculation module, configured to establish an energy function according to the matching point pairs and the three-dimensional skeleton information, solve the non-rigid motion position transformation parameters of each vertex on the reconstructed model, and optimize the skeleton parameters of the object; a solving module, configured to perform GPU optimization solving of the energy function to obtain the non-rigid deformation of each surface vertex, and to deform the reconstructed three-dimensional model of the previous frame according to the solution so that the deformed model is aligned with the three-dimensional point cloud of the current frame; and a model updating module, configured to fuse the current-frame three-dimensional point cloud with the deformed model to obtain an updated model of the current frame, which enters the iteration of the next frame.
According to the three-dimensional reconstruction apparatus combining rigid motion and non-rigid deformation of the embodiment of the invention, the three-dimensional surface information of the dynamic object is fused frame by frame through a real-time non-rigid alignment method, achieving robust real-time dynamic three-dimensional reconstruction without a first-frame key-frame three-dimensional template. The apparatus effectively improves the real-time performance, robustness and accuracy of reconstruction, is highly extensible, and is simple to implement.
In addition, the three-dimensional reconstruction apparatus combining rigid motion and non-rigid deformation according to the above embodiment of the present invention may further have the following additional technical features:
further, in an embodiment of the present invention, the matching module is further configured to project the single depth image into a three-dimensional space through an internal reference matrix of a depth camera to generate the three-dimensional point cloud, where the depth map projection formula is:
p(u, v) = d(u, v) · K⁻¹ · (u, v, 1)ᵀ

where u, v are pixel coordinates, d(u, v) is the depth value at the pixel (u, v) on the depth image, and K is the internal reference matrix of the depth camera.
Further, in one embodiment of the present invention, the energy function is:
E_t = λ_n·E_n + λ_s·E_s + λ_j·E_j + λ_g·E_g + λ_b·E_b

where E_t is the total energy term, E_n is the non-rigid surface deformation constraint term, E_s is the rigid skeleton motion constraint term, E_j is the rigid skeleton identification constraint term, E_g is the local rigid motion constraint term, E_b is the consistency constraint term between the rigid skeleton motion and the non-rigid deformation, and λ_n, λ_s, λ_j, λ_g and λ_b are the weight coefficients of the corresponding constraint terms.
Further, in one embodiment of the present invention,

E_n = Σ_{c_i ∈ C} | n̂_{v_i}ᵀ (v̂_i − u_i) |²

E_s = Σ_{c_i ∈ C} | ñ_{v_i}ᵀ (ṽ_i − u_i) |²

E_b = Σ_i || v̂_i − ṽ_i ||²

E_g = Σ_i Σ_{j ∈ N(i)} || T_{v_i} v_j − T_{v_j} v_j ||²

where u_i denotes the position coordinates of the three-dimensional point cloud in a matching point pair and c_i denotes the i-th element of the matching point pair set C. In the non-rigid surface deformation constraint term, v̂_i and n̂_{v_i} denote the vertex coordinates and normal of the model after being driven by the non-rigid deformation; in the rigid skeleton motion constraint term, ṽ_i and ñ_{v_i} denote the vertex coordinates and normal of the model after being driven by the skeleton motion of the object; in the consistency constraint term, v̂_i and ṽ_i respectively denote the model vertex coordinates driven by the rigid motion of the target and those driven by the motion estimated from the three-dimensional skeleton. In the local rigid motion constraint term, i denotes the i-th vertex on the model, N(i) denotes the set of vertices adjacent to the i-th vertex on the model, T_{v_i} and T_{v_j} denote the driving functions of the known non-rigid motion on the model surface vertices v_i and v_j, and T_{v_i} v_j and T_{v_j} v_j denote the position transformation effects on v_j of the non-rigid motions acting on v_i and v_j, respectively.
Further, in an embodiment of the present invention, the model vertices are driven by the surface non-rigid deformation and the rigid skeleton motion of the object, where the calculation formulas are:

v̂_i = T_{v_i} v_i,  n̂_{v_i} = rot(T_{v_i}) n_{v_i}

ṽ_i = ( Σ_{j ∈ B(v_i)} α_{i,j} T_{b_j} ) v_i,  ñ_{v_i} = ( Σ_{j ∈ B(v_i)} α_{i,j} rot(T_{b_j}) ) n_{v_i}

where T_{v_i} is the deformation matrix acting on vertex v_i, comprising a rotation part and a translation part; rot(T_{v_i}) is the rotation part of that deformation matrix; B(v_i) is the set of bones that have a driving action on vertex v_i; α_{i,j} is the weight of the driving action of the j-th bone on the i-th model vertex, representing the strength of the bone's driving action on the vertex; T_{b_j} is the motion deformation matrix of the j-th bone itself; and rot(T_{b_j}) is the rotation part of that deformation matrix.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow diagram of a three-dimensional reconstruction method that combines rigid motion and non-rigid deformation according to one embodiment of the present invention;
FIG. 2 is a flow chart of a three-dimensional reconstruction method that combines rigid motion and non-rigid deformation according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a three-dimensional reconstruction apparatus combining rigid motion and non-rigid deformation according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The three-dimensional reconstruction method and apparatus combining rigid motion and non-rigid deformation proposed according to the embodiments of the present invention will be described below with reference to the accompanying drawings; the method is described first.
FIG. 1 is a flow chart of a three-dimensional reconstruction method combining rigid motion and non-rigid deformation according to an embodiment of the present invention.
As shown in fig. 1, the three-dimensional reconstruction method combining rigid motion and non-rigid deformation includes the following steps:
in step S101, depth camera-based shooting is performed on a target object to obtain a single depth image.
It can be understood that, as shown in fig. 2, depth point clouds are acquired at real-time video frame rate: depth maps of the dynamic object are captured to obtain a depth point cloud frame by frame. Specifically, the dynamic object is photographed with a depth camera to obtain a continuous sequence of single depth images, and each single depth image is transformed into a set of three-dimensional points.
In step S102, three-dimensional skeleton extraction is performed on the depth point cloud by a three-dimensional skeleton extraction algorithm.
It can be understood that, as shown in fig. 2, 3D skeleton extraction is performed by a skeleton recognition algorithm: the three-dimensional rigid skeleton information of the current frame of the object is extracted by an existing skeleton recognition algorithm. For example, three-dimensional skeleton extraction of the object can be achieved through the Kinect SDK.
In step S103, the single depth image is transformed into a three-dimensional point cloud, and matching point pairs between the three-dimensional point cloud and the reconstructed model vertices are obtained.
It can be understood that, as shown in fig. 2, matching point pairs between the three-dimensional model and the point cloud are established, and the matching point pairs between the current-frame three-dimensional point cloud and the reconstructed model vertices are calculated.
Further, in one embodiment of the present invention, transforming a single depth image into a three-dimensional point cloud further comprises: projecting a single depth image to a three-dimensional space through an internal reference matrix of a depth camera to generate a three-dimensional point cloud, wherein a depth map projection formula is as follows:
p(u, v) = d(u, v) · K⁻¹ · (u, v, 1)ᵀ

where u, v are pixel coordinates, d(u, v) is the depth value at the pixel (u, v) on the depth image, and K is the internal reference matrix of the depth camera.
It can be understood that the object is photographed by the depth camera to obtain a depth image, and the depth map is transformed into a set of three-dimensional points: based on the internal reference matrix calibrated for the depth camera, the depth map is projected into three-dimensional space to generate the set of three-dimensional points. The depth map projection formula is:

p(u, v) = d(u, v) · K⁻¹ · (u, v, 1)ᵀ

where u, v are pixel coordinates, d(u, v) is the depth value at the pixel (u, v) on the depth image, and K is the internal reference matrix of the depth camera.
Specifically, the internal reference matrix of the depth camera is obtained, and according to it the depth map is projected into three-dimensional space and transformed into a set of three-dimensional points using the same transformation formula. When obtaining the matching point pairs, the vertices of the three-dimensional model are projected onto the depth image using the camera projection formula, which yields the matching point pairs.
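A sketch of this projective association step, using the same assumed intrinsics as the back-projection example above (the rejection threshold max_dist and all names are illustrative, not from the patent):

```python
import numpy as np

def projective_matches(vertices, depth, fx, fy, cx, cy, max_dist=0.05):
    """Match model vertices to depth points by projecting them onto the image."""
    matches = []
    h, w = depth.shape
    for i, (x, y, z) in enumerate(vertices):
        if z <= 0:
            continue
        u = int(round(fx * x / z + cx))  # camera projection: u = fx*X/Z + cx
        v = int(round(fy * y / z + cy))  # v = fy*Y/Z + cy
        if 0 <= u < w and 0 <= v < h and depth[v, u] > 0:
            d = depth[v, u]
            # back-project the hit pixel to get the matched cloud point u_i
            p = np.array([(u - cx) * d / fx, (v - cy) * d / fy, d])
            if np.linalg.norm(p - np.array([x, y, z])) < max_dist:
                matches.append((i, p))
    return matches
```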
Further, in an embodiment of the present invention, the model vertices are driven by the surface non-rigid deformation and the rigid skeleton motion of the object, where the calculation formulas are:

v̂_i = T_{v_i} v_i,  n̂_{v_i} = rot(T_{v_i}) n_{v_i}

ṽ_i = ( Σ_{j ∈ B(v_i)} α_{i,j} T_{b_j} ) v_i,  ñ_{v_i} = ( Σ_{j ∈ B(v_i)} α_{i,j} rot(T_{b_j}) ) n_{v_i}

where T_{v_i} is the deformation matrix acting on vertex v_i, comprising a rotation part and a translation part; rot(T_{v_i}) is the rotation part of that deformation matrix; B(v_i) is the set of bones that have a driving action on vertex v_i; α_{i,j} is the weight of the driving action of the j-th bone on the i-th model vertex, representing the strength of the bone's driving action on the vertex; T_{b_j} is the motion deformation matrix of the j-th bone itself; and rot(T_{b_j}) is the rotation part of that deformation matrix.
In step S104, an energy function is established according to the matching point pairs and the three-dimensional skeleton information, and the non-rigid motion position transformation parameters of each vertex on the reconstructed model are solved and the object skeleton parameters are optimized.
It can be understood that the energy function is established according to the current-frame matching point pair information and the extracted current-frame three-dimensional rigid skeleton information.
For example, a single depth camera, such as a Microsoft Kinect depth camera, an iPhone X depth camera or an Orbbec structured-light depth camera, photographs a dynamic scene and transmits real-time depth image data (at video frame rate, 20 frames per second or more) to a computer; the computer calculates the three-dimensional geometric information of the dynamic object in real time, reconstructs a three-dimensional model of the object at the same frame rate, and outputs the three-dimensional skeleton information of the object.
Further, in one embodiment of the present invention, the energy function is:
E_t = λ_n·E_n + λ_s·E_s + λ_j·E_j + λ_g·E_g + λ_b·E_b

where E_t is the total energy term, E_n is the non-rigid surface deformation constraint term, E_s is the rigid skeleton motion constraint term, E_j is the rigid skeleton identification constraint term, E_g is the local rigid motion constraint term, E_b is the consistency constraint term between the rigid skeleton motion and the non-rigid deformation, and λ_n, λ_s, λ_j, λ_g and λ_b are the weight coefficients of the corresponding constraint terms.
Further, in one embodiment of the present invention,

E_n = Σ_{c_i ∈ C} | n̂_{v_i}ᵀ (v̂_i − u_i) |²

E_s = Σ_{c_i ∈ C} | ñ_{v_i}ᵀ (ṽ_i − u_i) |²

E_b = Σ_i || v̂_i − ṽ_i ||²

E_g = Σ_i Σ_{j ∈ N(i)} || T_{v_i} v_j − T_{v_j} v_j ||²

where u_i denotes the position coordinates of the three-dimensional point cloud in a matching point pair and c_i denotes the i-th element of the matching point pair set C. In the non-rigid surface deformation constraint term, v̂_i and n̂_{v_i} denote the vertex coordinates and normal of the model after being driven by the non-rigid deformation; in the rigid skeleton motion constraint term, ṽ_i and ñ_{v_i} denote the vertex coordinates and normal of the model after being driven by the skeleton motion of the object; in the consistency constraint term, v̂_i and ṽ_i respectively denote the model vertex coordinates driven by the rigid motion of the target and those driven by the motion estimated from the three-dimensional skeleton. In the local rigid motion constraint term, i denotes the i-th vertex on the model, N(i) denotes the set of vertices adjacent to the i-th vertex on the model, T_{v_i} and T_{v_j} denote the driving functions of the known non-rigid motion on the model surface vertices v_i and v_j, and T_{v_i} v_j and T_{v_j} v_j denote the position transformation effects on v_j of the non-rigid motions acting on v_i and v_j, respectively.
Specifically, the rigid motion constraint term E_s and the non-rigid motion constraint term E_n are used simultaneously to optimize and solve for the object motion, while the object rigid skeleton constraint term E_j, obtained from the single depth image, constrains the solved rigid motion.
(1) The surface non-rigid constraint term E_n ensures that the non-rigidly deformed model is aligned as closely as possible with the three-dimensional point cloud obtained from the depth map; v̂_i and n̂_{v_i} denote the vertex coordinates and normal of the model after being driven by the non-rigid deformation, while ṽ_i and ñ_{v_i} denote the vertex coordinates and normal of the model after being driven by the skeleton motion of the object.
(2) The rigid skeleton motion constraint term E_s ensures that the model rigidly deformed under the skeleton motion is aligned as closely as possible with the three-dimensional point cloud obtained from the depth map.
(3) The constraint term E_b on the consistency of the rigid skeleton motion and the non-rigid deformation penalizes the difference between v̂_i and ṽ_i. Together with the single-frame skeleton identification, which ensures that the solved rigid skeleton stays as consistent as possible with the identified skeleton, it prevents errors from accumulating irrecoverably during dynamic tracking, so that the finally solved non-rigid motion both conforms to the articulated motion model of the object skeleton and is fully aligned with the three-dimensional point cloud obtained from the depth map.
(4) In the local rigid motion constraint term E_g, i denotes the i-th vertex on the model, N(i) denotes the set of vertices adjacent to the i-th vertex on the model, T_{v_i} and T_{v_j} denote the driving functions of the known non-rigid motion on the model surface vertices v_i and v_j, and T_{v_i} v_j and T_{v_j} v_j denote the position transformation effects on v_j of the non-rigid motions acting on v_i and v_j; the term ensures that the non-rigid driving effects of adjacent vertices on the model are as consistent as possible. Here ψ is a robust penalty function, and S(v_i) and S(v_j) denote the effect of the rigid skeleton motion on the model surface vertices v_i and v_j, respectively: when two adjacent vertices on the model surface differ strongly after being driven by the rigid skeleton motion, the robust penalty function takes a small value, and when they differ little under the skeleton motion, it takes a large value.
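The patent does not give a closed form for the robust penalty function; the Gaussian falloff below is one assumption consistent with the stated behavior (small weight when the rigid skeleton drives two neighboring vertices very differently, large weight when it drives them similarly):

```python
import numpy as np

def psi(s_vi, s_vj, sigma=0.05):
    """Robust weight on an edge (i, j): the Gaussian form and sigma are
    assumptions for this sketch, not taken from the patent."""
    return np.exp(-np.sum((s_vi - s_vj) ** 2) / (2.0 * sigma ** 2))

def local_rigid_term(vertices, neighbors, T, S):
    """E_g = sum_i sum_{j in N(i)} psi(S(v_i), S(v_j)) * ||T_i v_j - T_j v_j||^2

    vertices : (N, 3) model vertex positions
    neighbors: dict mapping i to the adjacent vertex indices N(i)
    T        : (N, 4, 4) per-vertex non-rigid transforms T_{v_i}
    S        : (N, 3) skeleton-driven positions of the vertices
    """
    total = 0.0
    for i, nbrs in neighbors.items():
        for j in nbrs:
            vj_h = np.append(vertices[j], 1.0)
            diff = (T[i] @ vj_h)[:3] - (T[j] @ vj_h)[:3]
            total += psi(S[i], S[j]) * (diff @ diff)
    return total
```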
In step S105, the GPU optimization solution is performed on the energy function to obtain the non-rigid deformation of each surface vertex, and the reconstructed three-dimensional model of the previous frame is deformed according to the solution result, so that the deformed model is aligned with the three-dimensional point cloud of the current frame.
It can be understood that, as shown in fig. 2, GPU optimization solution is performed on the energy function, the non-rigid motion position transformation parameter of each vertex on the reconstructed model is solved, the three-dimensional rigid motion information of the object is optimized, and the reconstructed model of the previous frame is deformed according to the solution result so as to align the reconstructed model with the three-dimensional point cloud of the current frame.
Specifically, the energy function is solved and the reconstructed model is aligned with the three-dimensional point cloud according to the result: the non-rigid motion position transformation parameters of each vertex on the reconstructed model and the skeleton motion parameters of the object are solved. The information finally solved for consists of a transformation matrix for each three-dimensional model vertex and the motion parameters of the object skeleton, i.e. an independent transformation matrix for each bone. To allow a fast linear solve, the method of the embodiment of the invention approximates the deformation equation using the exponential map:

T_{v_i} ≈ (I + ξ̂_i) T̂_{v_i}

where T̂_{v_i} is the accumulated transformation matrix of model vertex v_i up to the previous frame, which is a known quantity; T_{v_i} is the non-rigid deformation of each surface vertex; I is the 4×4 identity matrix; and ξ̂_i is assembled from the motion parameters as

ξ̂_i = [  0    −w_z   w_y   v_1 ]
       [  w_z   0    −w_x   v_2 ]
       [ −w_y   w_x   0     v_3 ]
       [  0     0     0     0   ]

Let v'_i = T̂_{v_i} v_i, i.e. the model vertex after the previous frame's transformation; it is then transformed by:

v̂_i = (I + ξ̂_i) v'_i

For each vertex, the unknown parameter to be solved is therefore the six-dimensional transformation parameter x = (v_1, v_2, v_3, w_x, w_y, w_z)ᵀ. The linearization of the skeleton motion has the same form as that of the non-rigid motion.
In step S106, the current frame three-dimensional point cloud and the deformation model are fused to obtain an updated model of the current frame, so as to enter the iteration of the next frame.
It can be understood that, as shown in fig. 2, Poisson fusion is performed on the aligned model and point cloud, obtaining a more complete three-dimensional model for the new frame.
Specifically, the point cloud and the three-dimensional model are fused to obtain the updated model of the current frame: the three-dimensional model aligned with the depth point cloud is updated and completed by fusing the newly obtained depth information into it, updating the positions of its surface vertices or adding new vertices, so that the model better matches what the current depth image expresses.
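As a rough sketch of the fusion idea only, the running-average update below blends matched depth points into the model and appends unmatched points as new vertices. The weighting scheme is an assumption, and a real system (for example, the volumetric or Poisson-style fusion shown in fig. 2) is considerably more involved:

```python
import numpy as np

def fuse_frame(model_vertices, model_weights, matches, new_points):
    """Update matched vertices by a weighted running average and add new ones.

    model_vertices: (N, 3) current model vertices
    model_weights : (N,) per-vertex accumulation weights
    matches       : list of (vertex index, matched 3D depth point)
    new_points    : (M, 3) depth points with no model correspondence
    """
    for i, p in matches:
        w = model_weights[i]
        model_vertices[i] = (w * model_vertices[i] + p) / (w + 1.0)
        model_weights[i] = min(w + 1.0, 64.0)  # cap so the model keeps adapting
    all_vertices = np.vstack([model_vertices, new_points])
    all_weights = np.concatenate([model_weights, np.ones(len(new_points))])
    return all_vertices, all_weights
```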
In summary, the core function of the embodiment of the invention is to receive the depth image stream in real time and compute each frame's three-dimensional model in real time, using both the large-scale rigid skeleton motion and the small-scale non-rigid surface deformation of the object to compute the time-varying three-dimensional model of the dynamic object. The solution of the method of the embodiment of the invention is accurate, and high-precision reconstruction of dynamic objects can be realized in real time. Because it is a real-time reconstruction method requiring only a single depth camera as input, it has the advantages of simple equipment, convenient deployment and extensibility: the required input information is very easy to acquire, and the dynamic three-dimensional model is obtained in real time. The solving is accurate and robust, the method is simple to implement and runs at real-time speed, it has broad application prospects, and it can be quickly realized on hardware systems such as personal computers (PCs) or workstations.
According to the three-dimensional reconstruction method combining rigid motion and non-rigid deformation provided by the embodiment of the invention, the three-dimensional surface information of the dynamic object is fused frame by frame through a real-time non-rigid alignment method, achieving robust real-time dynamic three-dimensional reconstruction without a first-frame key-frame three-dimensional template. The method effectively improves the real-time performance, robustness and accuracy of reconstruction, is highly extensible, and is simple to implement.
Next, a three-dimensional reconstruction apparatus combining rigid motion and non-rigid deformation proposed according to an embodiment of the present invention is described with reference to the accompanying drawings.
Fig. 3 is a schematic structural diagram of a three-dimensional reconstruction apparatus combining rigid motion and non-rigid deformation according to an embodiment of the present invention.
As shown in fig. 3, the three-dimensional reconstruction apparatus 10 combining rigid motion and non-rigid deformation includes: a shooting module 100, an extraction module 200, a matching module 300, a calculation module 400, a solving module 500 and a model updating module 600.
The shooting module 100 is configured to shoot a target object with a depth camera to obtain a single depth image. The extraction module 200 is configured to extract a three-dimensional skeleton from the depth point cloud through a three-dimensional skeleton extraction algorithm. The matching module 300 converts the single depth image into a three-dimensional point cloud and obtains matching point pairs between the three-dimensional point cloud and the vertices of the reconstructed model. The calculation module 400 is configured to establish an energy function according to the matching point pairs and the three-dimensional skeleton information, solve the non-rigid motion position transformation parameters of each vertex on the reconstructed model, and optimize the skeleton parameters of the object. The solving module 500 is configured to perform GPU optimization solving of the energy function to obtain the non-rigid deformation of each surface vertex, and to deform the reconstructed three-dimensional model of the previous frame according to the solution so that the deformed model is aligned with the three-dimensional point cloud of the current frame. The model updating module 600 is configured to fuse the current-frame three-dimensional point cloud with the deformed model to obtain an updated model of the current frame, which enters the iteration of the next frame. The apparatus 10 of the embodiment of the invention effectively improves the real-time performance, robustness and accuracy of reconstruction, is highly extensible, and is simple to implement.
Further, in an embodiment of the present invention, the matching module 300 is further configured to project a single depth image into a three-dimensional space through an internal reference matrix of the depth camera to generate a three-dimensional point cloud, wherein the depth map projection formula is:
p(u, v) = d(u, v) · K⁻¹ · (u, v, 1)ᵀ

where u, v are pixel coordinates, d(u, v) is the depth value at the pixel (u, v) on the depth image, and K is the internal reference matrix of the depth camera.
Further, in one embodiment of the present invention, the energy function is:
E_t = λ_n·E_n + λ_s·E_s + λ_j·E_j + λ_g·E_g + λ_b·E_b

where E_t is the total energy term, E_n is the non-rigid surface deformation constraint term, E_s is the rigid skeleton motion constraint term, E_j is the rigid skeleton identification constraint term, E_g is the local rigid motion constraint term, E_b is the consistency constraint term between the rigid skeleton motion and the non-rigid deformation, and λ_n, λ_s, λ_j, λ_g and λ_b are the weight coefficients of the corresponding constraint terms.
Further, in one embodiment of the present invention,

E_n = Σ_{c_i ∈ C} | n̂_{v_i}ᵀ (v̂_i − u_i) |²

E_s = Σ_{c_i ∈ C} | ñ_{v_i}ᵀ (ṽ_i − u_i) |²

E_b = Σ_i || v̂_i − ṽ_i ||²

E_g = Σ_i Σ_{j ∈ N(i)} || T_{v_i} v_j − T_{v_j} v_j ||²

where u_i denotes the position coordinates of the three-dimensional point cloud in a matching point pair and c_i denotes the i-th element of the matching point pair set C. In the non-rigid surface deformation constraint term, v̂_i and n̂_{v_i} denote the vertex coordinates and normal of the model after being driven by the non-rigid deformation; in the rigid skeleton motion constraint term, ṽ_i and ñ_{v_i} denote the vertex coordinates and normal of the model after being driven by the skeleton motion of the object; in the consistency constraint term, v̂_i and ṽ_i respectively denote the model vertex coordinates driven by the rigid motion of the target and those driven by the motion estimated from the three-dimensional skeleton. In the local rigid motion constraint term, i denotes the i-th vertex on the model, N(i) denotes the set of vertices adjacent to the i-th vertex on the model, T_{v_i} and T_{v_j} denote the driving functions of the known non-rigid motion on the model surface vertices v_i and v_j, and T_{v_i} v_j and T_{v_j} v_j denote the position transformation effects on v_j of the non-rigid motions acting on v_i and v_j, respectively.
Further, in an embodiment of the present invention, the model vertices are driven by the surface non-rigid deformation and the rigid skeleton motion of the object, where the calculation formulas are:

v̂_i = T_{v_i} v_i,  n̂_{v_i} = rot(T_{v_i}) n_{v_i}

ṽ_i = ( Σ_{j ∈ B(v_i)} α_{i,j} T_{b_j} ) v_i,  ñ_{v_i} = ( Σ_{j ∈ B(v_i)} α_{i,j} rot(T_{b_j}) ) n_{v_i}

where T_{v_i} is the deformation matrix acting on vertex v_i, comprising a rotation part and a translation part; rot(T_{v_i}) is the rotation part of that deformation matrix; B(v_i) is the set of bones that have a driving action on vertex v_i; α_{i,j} is the weight of the driving action of the j-th bone on the i-th model vertex, representing the strength of the bone's driving action on the vertex; T_{b_j} is the motion deformation matrix of the j-th bone itself; and rot(T_{b_j}) is the rotation part of that deformation matrix.
It should be noted that the foregoing explanation of the embodiment of the three-dimensional reconstruction method combining rigid motion and non-rigid deformation is also applicable to the three-dimensional reconstruction apparatus combining rigid motion and non-rigid deformation in this embodiment, and details are not repeated here.
According to the three-dimensional reconstruction apparatus combining rigid motion and non-rigid deformation provided by the embodiment of the invention, the three-dimensional surface information of the dynamic object is fused frame by frame through a real-time non-rigid alignment method, achieving robust real-time dynamic three-dimensional reconstruction without a first-frame key-frame three-dimensional template. The apparatus effectively improves the real-time performance, robustness and accuracy of reconstruction, is highly extensible, and is simple to implement.
In the description of the present invention, it is to be understood that the terms "central," "longitudinal," "lateral," "length," "width," "thickness," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," "clockwise," "counterclockwise," "axial," "radial," "circumferential," and the like are used in the orientations and positional relationships indicated in the drawings for convenience in describing the invention and to simplify the description, and are not intended to indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and are therefore not to be considered limiting of the invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In the present invention, unless otherwise expressly stated or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can, for example, be fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; they may be directly connected or indirectly connected through intervening media, or they may be connected internally or in any other suitable relationship, unless expressly stated otherwise. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
In the present invention, unless otherwise expressly stated or limited, a first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or that they are in indirect contact through an intermediate medium. Also, a first feature "on", "over" or "above" a second feature may be directly or obliquely above the second feature, or may simply indicate that the first feature is at a higher level than the second feature. A first feature "under", "below" or "beneath" a second feature may be directly or obliquely below the second feature, or may simply mean that the first feature is at a lower level than the second feature.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (4)

1. A three-dimensional reconstruction method combining rigid motion and non-rigid deformation is characterized by comprising the following steps:
shooting a target object with a depth camera to obtain a single depth image, acquiring depth point clouds at real-time video frame rate, and shooting depth maps of the dynamic object to obtain a depth point cloud frame by frame;
extracting a three-dimensional skeleton from the depth point cloud by a three-dimensional skeleton extraction algorithm;
transforming the single depth image into a three-dimensional point cloud, and acquiring matching point pairs between the three-dimensional point cloud and the reconstructed model vertices, specifically: establishing matching point pairs between the three-dimensional model and the point cloud, and calculating the matching point pairs between the current-frame three-dimensional point cloud and the reconstructed model vertices; the transforming the single depth image into a three-dimensional point cloud further comprises: projecting the single depth image into three-dimensional space through the internal reference matrix of the depth camera to generate the three-dimensional point cloud, where the depth map projection formula is:

p(u, v) = d(u, v) · K⁻¹ · (u, v, 1)ᵀ

where u, v are pixel coordinates, d(u, v) is the depth value at the pixel (u, v) on the depth image, and K is the internal reference matrix of the depth camera;
establishing an energy function according to the matching point pairs and the three-dimensional skeleton information, solving the non-rigid motion position transformation parameters of each vertex on the reconstructed model and optimizing the object skeleton parameters, specifically: using the rigid motion constraint term E_s and the non-rigid motion constraint term E_n simultaneously to optimize and solve the object motion, while using the object rigid skeleton constraint term E_j obtained from the single depth image to constrain the solved rigid motion, where the energy function is:

E_t = λ_n·E_n + λ_s·E_s + λ_j·E_j + λ_g·E_g + λ_b·E_b

where E_t is the total energy term, E_n is the non-rigid surface deformation constraint term, E_s is the rigid skeleton motion constraint term, E_j is the rigid skeleton identification constraint term, E_g is the local rigid motion constraint term, E_b is the non-rigid deformation consistency constraint term, and λ_n, λ_s, λ_j, λ_g and λ_b are the weight coefficients of the corresponding constraint terms, and where

E_n = Σ_{c_i ∈ C} | n̂_{v_i}ᵀ (v̂_i − u_i) |²

E_s = Σ_{c_i ∈ C} | ñ_{v_i}ᵀ (ṽ_i − u_i) |²

E_b = Σ_i || v̂_i − ṽ_i ||²

E_g = Σ_i Σ_{j ∈ N(i)} || T_{v_i} v_j − T_{v_j} v_j ||²

where u_i denotes the position coordinates of the three-dimensional point cloud in a matching point pair, and c_i denotes the i-th element of the matching point pair set C; in the non-rigid surface deformation constraint term, v̂_i and n̂_{v_i} denote the vertex coordinates and normal of the model after being driven by the non-rigid deformation; in the rigid skeleton motion constraint term, ṽ_i and ñ_{v_i} denote the vertex coordinates and normal of the model after being driven by the skeleton motion of the object; v̂_i and ṽ_i respectively denote the model vertex coordinates driven by the rigid motion of the target and those driven by the motion estimated from the three-dimensional skeleton; in the local rigid motion constraint term, i denotes the i-th vertex on the model, N(i) denotes the set of vertices adjacent to the i-th vertex on the model, T_{v_i} and T_{v_j} denote the driving functions of the known non-rigid motion on the model surface vertices v_i and v_j, and T_{v_i} v_j and T_{v_j} v_j denote the position transformation effects on v_j of the non-rigid motions acting on v_i and v_j, respectively;
performing GPU optimization solution on the energy function to obtain non-rigid deformation of each surface vertex, and deforming the reconstructed three-dimensional model of the previous frame according to a solution result so as to align the deformation model with the three-dimensional point cloud of the current frame; and
and fusing the three-dimensional point cloud of the current frame with the deformation model to obtain an updated model of the current frame so as to enter the iteration of the next frame.
2. The method of claim 1, wherein the model vertices are driven by the surface non-rigid deformation and the rigid skeleton motion of the object, where the calculation formulas are:

v̂_i = T_{v_i} v_i,  n̂_{v_i} = rot(T_{v_i}) n_{v_i}

ṽ_i = ( Σ_{j ∈ B(v_i)} α_{i,j} T_{b_j} ) v_i,  ñ_{v_i} = ( Σ_{j ∈ B(v_i)} α_{i,j} rot(T_{b_j}) ) n_{v_i}

where T_{v_i} is the deformation matrix acting on vertex v_i, comprising a rotation part and a translation part; rot(T_{v_i}) is the rotation part of that deformation matrix; B(v_i) is the set of bones that have a driving action on vertex v_i; α_{i,j} is the weight of the driving action of the j-th bone on the i-th model vertex, representing the strength of the bone's driving action on the vertex; T_{b_j} is the motion deformation matrix of the j-th bone itself; and rot(T_{b_j}) is the rotation part of that deformation matrix.
3. A three-dimensional reconstruction apparatus that combines rigid motion and non-rigid deformation, comprising:
the shooting module is used for shooting a target object with a depth camera to obtain a single depth image, acquiring depth point clouds at real-time video frame rate and shooting depth maps of the dynamic object to obtain a depth point cloud frame by frame;
the extraction module is used for extracting a three-dimensional skeleton from the depth point cloud through a three-dimensional skeleton extraction algorithm;
the matching module is used for converting the single depth image into a three-dimensional point cloud and acquiring matching point pairs between the three-dimensional point cloud and the reconstructed model vertices, specifically: establishing matching point pairs between the three-dimensional model and the point cloud, and calculating the matching point pairs between the current-frame three-dimensional point cloud and the reconstructed model vertices; the matching module is further configured to project the single depth image into three-dimensional space through the internal reference matrix of the depth camera to generate the three-dimensional point cloud, where the depth map projection formula is:

p(u, v) = d(u, v) · K⁻¹ · (u, v, 1)ᵀ

where u, v are pixel coordinates, d(u, v) is the depth value at the pixel (u, v) on the depth image, and K is the internal reference matrix of the depth camera;
the calculation module is used for establishing an energy function according to the matching point pairs and the three-dimensional skeleton information, solving the non-rigid motion position transformation parameters of each vertex on the reconstructed model and optimizing the object skeleton parameters, specifically: using the rigid motion constraint term E_s and the non-rigid motion constraint term E_n simultaneously to optimize and solve the object motion, while using the object rigid skeleton constraint term E_j obtained from the single depth image to constrain the solved rigid motion, where the energy function is:

E_t = λ_n·E_n + λ_s·E_s + λ_j·E_j + λ_g·E_g + λ_b·E_b

where E_t is the total energy term, E_n is the non-rigid surface deformation constraint term, E_s is the rigid skeleton motion constraint term, E_j is the rigid skeleton identification constraint term, E_g is the local rigid motion constraint term, E_b is the non-rigid deformation consistency constraint term, and λ_n, λ_s, λ_j, λ_g and λ_b are the weight coefficients of the corresponding constraint terms, and where

E_n = Σ_{c_i ∈ C} | n̂_{v_i}ᵀ (v̂_i − u_i) |²

E_s = Σ_{c_i ∈ C} | ñ_{v_i}ᵀ (ṽ_i − u_i) |²

E_b = Σ_i || v̂_i − ṽ_i ||²

E_g = Σ_i Σ_{j ∈ N(i)} || T_{v_i} v_j − T_{v_j} v_j ||²

where u_i denotes the position coordinates of the three-dimensional point cloud in a matching point pair, and c_i denotes the i-th element of the matching point pair set C; in the non-rigid surface deformation constraint term, v̂_i and n̂_{v_i} denote the vertex coordinates and normal of the model after being driven by the non-rigid deformation; in the rigid skeleton motion constraint term, ṽ_i and ñ_{v_i} denote the vertex coordinates and normal of the model after being driven by the skeleton motion of the object; v̂_i and ṽ_i respectively denote the model vertex coordinates driven by the rigid motion of the target and those driven by the motion estimated from the three-dimensional skeleton; in the local rigid motion constraint term, i denotes the i-th vertex on the model, N(i) denotes the set of vertices adjacent to the i-th vertex on the model, T_{v_i} and T_{v_j} denote the driving functions of the known non-rigid motion on the model surface vertices v_i and v_j, and T_{v_i} v_j and T_{v_j} v_j denote the position transformation effects on v_j of the non-rigid motions acting on v_i and v_j, respectively;
a solving module, configured to perform a GPU-optimized solve of the energy function to obtain the non-rigid deformation of each surface vertex, and to deform the reconstructed three-dimensional model of the previous frame according to the solved result, so that the deformed model is aligned with the three-dimensional point cloud of the current frame; and

a model updating module, configured to fuse the current-frame three-dimensional point cloud with the deformed model to obtain the updated model of the current frame for entering the iteration of the next frame.
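For illustration only, and not part of the claims: the depth map projection formula used by the matching module above can be sketched in a few lines of NumPy. The function name and the intrinsic values below are assumptions of this sketch, not taken from the patent:

```python
import numpy as np

def depth_to_point_cloud(depth, K):
    """Back-project a depth image to a 3-D point cloud via
    p(u, v) = d(u, v) * K^{-1} * (u, v, 1)^T (illustrative sketch)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))         # pixel grid
    pix = np.stack([u, v, np.ones_like(u)], -1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T                        # K^{-1} (u, v, 1)^T per pixel
    pts = rays * depth.reshape(-1, 1)                      # scale each ray by d(u, v)
    return pts[depth.reshape(-1) > 0]                      # keep valid depth samples

K = np.array([[570., 0., 320.],                            # illustrative intrinsics
              [0., 570., 240.],
              [0., 0., 1.]])
cloud = depth_to_point_cloud(np.random.rand(480, 640), K)
```

Similarly, point-to-plane energies of the form of $E_n$ and $E_s$ are commonly linearized and minimized by Gauss-Newton iterations. The toy below solves only the rigid sub-problem on the CPU for brevity, whereas the claimed solving module optimizes the full weighted energy $E_t$ on the GPU; all names here are assumptions of the sketch:

```python
def gauss_newton_rigid(src, dst, n, iters=10):
    """Toy Gauss-Newton for min_{R,t} sum_i |n_i^T (R src_i + t - dst_i)|^2,
    using the small-angle linearization R <- (I + [w]_x) R (sketch)."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        p = src @ R.T + t
        r = np.einsum('ij,ij->i', n, p - dst)              # point-to-plane residuals
        J = np.hstack([np.cross(p, n), n])                 # d r / d (w, t): shape (N, 6)
        dx = np.linalg.lstsq(J, -r, rcond=None)[0]         # least-squares step
        w_, dt = dx[:3], dx[3:]
        W = np.array([[0., -w_[2], w_[1]],
                      [w_[2], 0., -w_[0]],
                      [-w_[1], w_[0], 0.]])
        R, t = (np.eye(3) + W) @ R, t + dt                 # approximate (linearized) update
    return R, t
```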
4. The apparatus according to claim 3, wherein the model vertices are driven jointly by the surface non-rigid deformation and the rigid skeleton motion of the object, the calculation formulas being:

$$\tilde{v}_i = \Big( \sum_{j \in B(i)} \alpha_{i,j}\, T_{b_j} \Big)\, T_{v_i}\, v_i, \qquad \tilde{n}_{v_i} = \Big( \sum_{j \in B(i)} \alpha_{i,j}\, \mathrm{rot}(T_{b_j}) \Big)\, \mathrm{rot}(T_{v_i})\, n_{v_i}$$

wherein $T_{v_i}$ is the deformation matrix acting on vertex $v_i$, comprising a rotation part and a translation part; $\mathrm{rot}(T_{v_i})$ is the rotation part of the deformation matrix; $B(i)$ is the set of bones having a driving effect on vertex $v_i$; $\alpha_{i,j}$ is the weight of the driving effect of the $j$-th bone on the $i$-th model vertex, representing the strength of the bone's driving effect on that vertex; $T_{b_j}$ is the motion deformation matrix of the $j$-th bone itself; and $\mathrm{rot}(T_{b_j})$ is the rotation part of that matrix.
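For illustration only: a minimal sketch of the claim-4 vertex driving, combining the per-vertex non-rigid deformation with weight-blended bone motions. The reading of the formulas, the function name, and the use of 4x4 homogeneous matrices are assumptions of this sketch:

```python
import numpy as np

def drive_vertex(v, n, T_v, bone_mats, alphas):
    """Drive vertex v (3,) and its normal n (3,) by the non-rigid
    deformation T_v (4x4) and the alpha-weighted bone motions T_bj
    (list of 4x4), per the reconstructed claim-4 formulas (sketch)."""
    blend = sum(a * T for a, T in zip(alphas, bone_mats))  # sum_j alpha_ij * T_bj
    M = blend @ T_v                                        # compose with non-rigid part
    v_out = (M @ np.append(v, 1.0))[:3]                    # driven vertex position
    rot = lambda T: T[:3, :3]                              # rot(.): rotation block
    n_out = rot(blend) @ rot(T_v) @ n                      # driven normal direction
    return v_out, n_out / np.linalg.norm(n_out)
```

In practice the weights $\alpha_{i,j}$ would be normalized to sum to one over $B(i)$, so the blended matrix stays close to a rigid transform for vertices dominated by a single bone.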
CN201810460091.5A 2018-05-15 2018-05-15 Three-dimensional reconstruction method and device combining rigid motion and non-rigid deformation Expired - Fee Related CN108711185B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810460091.5A CN108711185B (en) 2018-05-15 2018-05-15 Three-dimensional reconstruction method and device combining rigid motion and non-rigid deformation
PCT/CN2019/086889 WO2019219012A1 (en) 2018-05-15 2019-05-14 Three-dimensional reconstruction method and device uniting rigid motion and non-rigid deformation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810460091.5A CN108711185B (en) 2018-05-15 2018-05-15 Three-dimensional reconstruction method and device combining rigid motion and non-rigid deformation

Publications (2)

Publication Number Publication Date
CN108711185A CN108711185A (en) 2018-10-26
CN108711185B true CN108711185B (en) 2021-05-28

Family

ID=63869046

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810460091.5A Expired - Fee Related CN108711185B (en) 2018-05-15 2018-05-15 Three-dimensional reconstruction method and device combining rigid motion and non-rigid deformation

Country Status (2)

Country Link
CN (1) CN108711185B (en)
WO (1) WO2019219012A1 (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108711185B (en) * 2018-05-15 2021-05-28 清华大学 Three-dimensional reconstruction method and device combining rigid motion and non-rigid deformation
CN109785440A (en) * 2018-12-18 2019-05-21 合肥阿巴赛信息科技有限公司 A kind of curved surface distorted pattern
CN109829972B (en) * 2019-01-19 2023-06-02 北京工业大学 Three-dimensional human standard skeleton extraction method for continuous frame point cloud
CN109840940B (en) * 2019-02-11 2023-06-27 清华-伯克利深圳学院筹备办公室 Dynamic three-dimensional reconstruction method, device, equipment, medium and system
CN111768504B (en) * 2019-03-30 2023-07-14 华为技术有限公司 Model processing method, deformation control method and related equipment
CN110070595B (en) * 2019-04-04 2020-11-24 东南大学深圳研究院 Single image 3D object reconstruction method based on deep learning
CN110006408B (en) * 2019-04-17 2020-04-24 武汉大学 LiDAR data cloud control aerial image photogrammetry method
CN111862139B (en) * 2019-08-16 2023-08-18 中山大学 Dynamic object parametric modeling method based on color-depth camera
CN110689625B (en) * 2019-09-06 2021-07-16 清华大学 Automatic generation method and device for customized face mixed expression model
CN111968169B (en) * 2020-08-19 2024-01-19 北京拙河科技有限公司 Dynamic human body three-dimensional reconstruction method, device, equipment and medium
CN113096249B (en) * 2021-03-30 2023-02-17 Oppo广东移动通信有限公司 Method for training vertex reconstruction model, image reconstruction method and electronic equipment
CN112991524B (en) * 2021-04-20 2022-03-25 北京的卢深视科技有限公司 Three-dimensional reconstruction method, electronic device and storage medium
CN114373018A (en) * 2021-12-06 2022-04-19 聚好看科技股份有限公司 Real-time driving method, device and equipment
CN114648613B (en) * 2022-05-18 2022-08-23 杭州像衍科技有限公司 Three-dimensional head model reconstruction method and device based on deformable nerve radiation field
CN115082512A (en) * 2022-07-08 2022-09-20 北京大学深圳研究生院 Point cloud motion estimation method and device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100235118A1 (en) * 2009-03-16 2010-09-16 Bradford Allen Moore Event Recognition
CN102800103A (en) * 2012-06-18 2012-11-28 清华大学 Unmarked motion capturing method and device based on multi-visual angle depth camera
CN103198523A (en) * 2013-04-26 2013-07-10 清华大学 Three-dimensional non-rigid body reconstruction method and system based on multiple depth maps

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101801749B1 (en) * 2016-08-24 2017-11-28 한국과학기술연구원 Method of deblurring multi-view stereo for 3d shape reconstruction, recording medium and device for performing the method
CN106469465A (en) * 2016-08-31 2017-03-01 深圳市唯特视科技有限公司 A kind of three-dimensional facial reconstruction method based on gray scale and depth information
CN107358645B (en) * 2017-06-08 2020-08-11 上海交通大学 Product three-dimensional model reconstruction method and system
CN108711185B (en) * 2018-05-15 2021-05-28 清华大学 Three-dimensional reconstruction method and device combining rigid motion and non-rigid deformation

Also Published As

Publication number Publication date
WO2019219012A1 (en) 2019-11-21
CN108711185A (en) 2018-10-26

Similar Documents

Publication Publication Date Title
CN108711185B (en) Three-dimensional reconstruction method and device combining rigid motion and non-rigid deformation
CN108665537B (en) Three-dimensional reconstruction method and system for jointly optimizing human body posture and appearance model
US9942535B2 (en) Method for 3D scene structure modeling and camera registration from single image
EP2383699B1 (en) Method for estimating a pose of an articulated object model
US20170330375A1 (en) Data Processing Method and Apparatus
CN108629831B (en) Three-dimensional human body reconstruction method and system based on parameterized human body template and inertial measurement
Newcombe et al. Live dense reconstruction with a single moving camera
Hoppe et al. Online Feedback for Structure-from-Motion Image Acquisition.
US8848035B2 (en) Device for generating three dimensional surface models of moving objects
US20030012410A1 (en) Tracking and pose estimation for augmented reality using real features
WO2019219014A1 (en) Three-dimensional geometry and eigencomponent reconstruction method and device based on light and shadow optimization
CN113077519B (en) Multi-phase external parameter automatic calibration method based on human skeleton extraction
CN113012293A (en) Stone carving model construction method, device, equipment and storage medium
CN109448105B (en) Three-dimensional human body skeleton generation method and system based on multi-depth image sensor
Inamoto et al. Intermediate view generation of soccer scene from multiple videos
JPH10304244A (en) Image processing unit and its method
JP2007025863A (en) Photographing system, photographing method, and image processing program
Zhao et al. Alignment of continuous video onto 3D point clouds
KR102577135B1 (en) A skeleton-based dynamic point cloud estimation system for sequence compression
US20120162215A1 (en) Apparatus and method for generating texture of three-dimensional reconstructed object depending on resolution level of two-dimensional image
CN114119891A (en) Three-dimensional reconstruction method and reconstruction system for robot monocular semi-dense map
Malerczyk et al. 3D reconstruction of sports events for digital TV
JP2002094849A (en) Wide view image pickup device
Remondino et al. Markerless motion capture from single or multi-camera video sequence
CN111986133B (en) Virtual advertisement implantation method applied to bullet time

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20210528