CN101833786B - Method and system for capturing and rebuilding three-dimensional model - Google Patents

Method and system for capturing and rebuilding three-dimensional model

Info

Publication number
CN101833786B
CN101833786B (application number CN2010101411826A)
Authority
CN
China
Prior art keywords
model
static
constraints
point cloud
viewing angle
Prior art date
Legal status
Expired - Fee Related
Application number
CN2010101411826A
Other languages
Chinese (zh)
Other versions
CN101833786A (en)
Inventor
戴琼海
李坤
徐文立
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN2010101411826A priority Critical patent/CN101833786B/en
Publication of CN101833786A publication Critical patent/CN101833786A/en
Application granted granted Critical
Publication of CN101833786B publication Critical patent/CN101833786B/en

Landscapes

  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The invention proposes a method for capturing and reconstructing a static three-dimensional model, comprising the following steps: capturing images of a moving object in a ring-shaped field; obtaining a visual hull model; obtaining a depth point cloud for each viewing angle from the image of each viewing angle, the visual hull model and preset constraints; and fusing the obtained depth point clouds of all viewing angles to obtain a static three-dimensional model. The invention ensures the accuracy and completeness of the shape of the reconstructed static three-dimensional model. In addition, the invention proposes a method for capturing and reconstructing a dynamic three-dimensional model.

Figure 201010141182

Description

Method and system for capturing and reconstructing three-dimensional models

Technical Field

The present invention relates to the technical field of computer video processing, and in particular to a method and system for capturing and reconstructing three-dimensional models.

Background Art

For the three-dimensional reconstruction of dynamic scenes, much prior work treats the problem as a simple accumulation of static-scene reconstructions along the time dimension: temporal information is not used to assist scene reconstruction, and each frame is modeled statically on its own. This approach, however, has high complexity and large storage requirements, cannot guarantee topological consistency of the model between frames, and is prone to jitter. Moreover, such frame-by-frame modeling can neither effectively analyze the motion of non-rigid models nor produce a model at an arbitrary time instant by interpolation in the time domain. Studying this class of problems, the prior art has proposed reconstruction methods that jointly solve for the 3D scene flow and the geometric model, and has further proposed using variational methods to unify the reconstruction of dynamic scene geometry and motion. However, because geometric reconstruction and motion reconstruction are performed iteratively (the geometric reconstruction at one instant serves as the initial value of motion reconstruction to derive the model at the next instant), the efficiency of such joint spatio-temporal reconstruction remains low, and the practical results are unsatisfactory.

Therefore, to avoid the difficulty and mediocre quality of joint spatio-temporal reconstruction, another class of video-based dynamic 3D reconstruction methods takes the static 3D reconstruction of the initial frame as the scene representation, applies a 3D motion tracking algorithm to solve for the motion of the 3D object, and then drives the static model with a suitable deformation algorithm to obtain the dynamic 3D reconstruction. Video-based 3D motion tracking currently falls into two categories: marker-based and markerless. Marker-based 3D motion tracking is accurate, but requires the captured performer to wear a tight suit with markers, which limits the capture of shape and texture. Markerless 3D motion tracking overcomes this drawback. One markerless method captures the motion of a human body wearing more general clothing by combining a kinematic model with a clothing model, but it cannot capture the precise geometry of the moving object. Another markerless method captures the motion of the object's skeleton and shape simultaneously; however, because some local surfaces do not change over time as they should, it still cannot track 3D motion effectively, and since it relies solely on silhouette information it is very sensitive to silhouette errors. Although markerless methods improve flexibility, it is difficult for them to reach the accuracy of marker-based methods. In addition, most 3D motion tracking methods extract a kinematic skeleton to help capture motion; since a kinematic skeleton can only track rigid-body motion, such methods often need other scanning techniques to assist in capturing time-varying shape. Finally, none of the above methods can track the motion of a person wearing arbitrary clothing.

In recent years, new methods for animation capture and design, animation editing, and deformation transfer have kept emerging in computer graphics. These methods no longer rely on kinematic skeletons and motion parameters but are based on surface models and general shape deformation, so they can capture both rigid and non-rigid deformations. However, in all such motion capture and recovery methods based on multi-view video, the static 3D reconstruction of the initial frame must be performed with a laser scanner. Although a laser scanner yields high-precision 3D reconstructions, it is expensive, time-consuming and laborious, and the person must remain completely still during scanning. Moreover, for the convenience of the subsequent work, the person usually stands with both fists clenched, and the multi-view video is likewise shot with the fists clenched. In addition, when the laser-scanned reconstruction serves as the initial scene representation, surface features present on the model at scanning time, such as the folds of clothing, persist throughout the entire recovered dynamic 3D sequence.

Summary of the Invention

The object of the present invention is to solve at least the above technical defects. The present invention proposes methods and systems for capturing and reconstructing static and dynamic three-dimensional models.

To achieve the above object, one aspect of the present invention proposes a method for capturing and reconstructing a static three-dimensional model, comprising the following steps: capturing images of a moving object in a ring-shaped field; obtaining a visual hull model; obtaining a depth point cloud for each viewing angle from the image of each viewing angle, the visual hull model and preset constraints; and fusing the obtained depth point clouds of all viewing angles to obtain a static three-dimensional model.

Another aspect of the present invention proposes a system for capturing and reconstructing a static three-dimensional model, comprising: a plurality of cameras surrounding a ring-shaped field, for capturing images of a moving object in the field; and a static three-dimensional model reconstruction device, for obtaining a visual hull model, obtaining a depth point cloud for each viewing angle from the image of each viewing angle, the visual hull model and preset constraints, and fusing the obtained depth point clouds of all viewing angles to obtain a static three-dimensional model.

A further aspect of the present invention proposes a method for capturing and reconstructing a dynamic three-dimensional model, comprising the following steps: obtaining a static three-dimensional model; converting the surface model of the static three-dimensional model into a volume model and using it as the default scene representation for motion tracking; obtaining the initial three-dimensional motion of the model vertices at the next time instant; selecting accurate vertices from the obtained vertices according to predetermined space-time constraints as position constraints for volume deformation; and driving a Laplacian volume deformation framework with the position constraints to update the dynamic three-dimensional model.

A further aspect of the present invention proposes a system for capturing and reconstructing a dynamic three-dimensional model, comprising: a plurality of cameras surrounding a ring-shaped field, for capturing images of a moving object in the field; a static three-dimensional model acquisition device, for obtaining a static three-dimensional model; and a dynamic three-dimensional model reconstruction device, for converting the surface model of the static three-dimensional model into a volume model and using it as the default scene representation for motion tracking, obtaining the initial three-dimensional motion of the model vertices at the next time instant, selecting accurate vertices from the obtained vertices according to predetermined space-time constraints as position constraints for volume deformation, and driving a Laplacian volume deformation framework with the position constraints to update the dynamic three-dimensional model.

The present invention ensures the accuracy and completeness of the shape of the reconstructed static three-dimensional model. In addition, the invention designs a new three-dimensional motion estimation method based on sparse representation theory and a deformation optimization framework based on the volume model, and can therefore produce high-quality dynamic reconstruction results. Furthermore, the invention does not depend on three-dimensional scanners or optical markers, so its cost is low, and it can track the motion of a person wearing arbitrary clothing.

Additional aspects and advantages of the invention will be set forth in part in the description that follows, and in part will become obvious from the description or may be learned by practice of the invention.

Brief Description of the Drawings

The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:

Fig. 1 is a flowchart of a method for capturing and reconstructing a static three-dimensional model according to an embodiment of the present invention;

Fig. 2 shows 20 cameras distributed in a ring around the scene to be captured according to an embodiment of the present invention;

Fig. 3 is a flowchart of a method for capturing and reconstructing a dynamic three-dimensional model according to an embodiment of the present invention;

Fig. 4 is a schematic block diagram of the entire dynamic three-dimensional reconstruction method according to an embodiment of the present invention; and

Fig. 5 shows the dynamic three-dimensional model results obtained by applying the method of the present invention to two long sequences.

Detailed Description of the Embodiments

Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, intended only to explain the present invention, and are not to be construed as limiting it.

Embodiments of the present invention propose methods for capturing and reconstructing static and dynamic three-dimensional models respectively. It should be noted that the capture and reconstruction of a dynamic three-dimensional model may be based on a static three-dimensional model obtained by the present invention, or on a static three-dimensional model obtained by other means, for example an existing three-dimensional scanner; all of these fall within the protection scope of the present invention.

As shown in Fig. 1, the method for capturing and reconstructing a static three-dimensional model according to an embodiment of the present invention comprises the following steps:

Step S101: capture images of the moving object in the ring-shaped field. For example, 20 cameras are arranged in the ring-shaped field, each with a frame rate of 30 frames per second, and the groups of cameras are controlled to capture the moving object in the field. Those skilled in the art may of course choose more cameras to obtain images from more viewing angles, or may reduce the number of cameras; all of these fall within the protection scope of the present invention. In one example of the invention, shown in Fig. 2, 20 cameras are distributed in a ring around the scene to be captured, where Ci denotes the i-th camera. The cameras capture images at a resolution of 1024×768, and the captured person stands at the center of the ring.
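For illustration only (not part of the patent), here is a minimal sketch of how the layout of such a capture ring might be computed, assuming the 20 cameras sit evenly spaced on a circle and all aim at the subject in the center; the radius and mounting height are hypothetical values not given in the text.

```python
import numpy as np

def ring_camera_centers(n_cameras=20, radius=3.0, height=1.5):
    """Place n_cameras evenly on a horizontal circle around the subject.

    radius and height are illustrative values; the patent does not give
    the physical dimensions of the ring. Returns an (n_cameras, 3) array
    of camera centers C0..C19."""
    angles = 2.0 * np.pi * np.arange(n_cameras) / n_cameras
    return np.stack([radius * np.cos(angles),
                     radius * np.sin(angles),
                     np.full(n_cameras, height)], axis=1)

centers = ring_camera_centers()
look_at = np.array([0.0, 0.0, 1.0])   # the captured person stands at the ring center
view_dirs = look_at - centers         # each camera points at the subject
view_dirs /= np.linalg.norm(view_dirs, axis=1, keepdims=True)
```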

Step S102: obtain the visual hull model at the initial time instant.

Step S103: obtain the depth point cloud of each viewing angle from the image of each viewing angle, the visual hull model and the preset constraints. Specifically, this may comprise:

Step S201: intersect the image of each viewing angle with the obtained visual hull model to obtain the visible point cloud of that viewing angle.

Step S202: project the visible point cloud of each viewing angle onto the image of that viewing angle to obtain the initial depth point cloud estimate, i.e. d = (a, b, 1), the offset along the epipolar line.

Step S203: obtain the accurate depth point cloud from the initial depth point cloud estimate and the preset constraints, where the preset constraints include one or more of an epipolar geometry constraint, a brightness constraint, a gradient constraint and a smoothness constraint. In a preferred embodiment of the present invention, all four constraints may be included simultaneously, and the accurate depth point cloud is obtained by the following formula:

E(a, b) = ∫_Ω β(x) Ψ(|I(x_b + d) − I(x)|² + γ|∇I(x_b + d) − ∇I(x)|²) dx + α ∫_Ω Ψ(|∇a|² + |∇b|²) dx,

where x := (x, y, c) defines a pixel position (x, y) in the image of the reference view c, whose brightness is I(x); x_b := (x_b, y_b, c) is the epipole in view c+1, and w is the offset of the point in view c+1 corresponding to x; ∇ is the spatial gradient operator; and β(x) is the occlusion map, equal to 1 for pixels in non-occluded regions and 0 otherwise. To account for outliers in the model assumptions, the robust penalty function Ψ(s²) = √(s² + ε²) is adopted to produce a total variation regularization, where ε is a very small value (set to 0.001 in the experiments). The formula comprises four constraints: the epipolar geometry constraint (x_b + d = x + w), the brightness constraint (I(x_b + d) = I(x)), the gradient constraint (∇I(x_b + d) = ∇I(x)) and the smoothness constraint (penalizing |∇a|² + |∇b|²).
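To make the energy concrete, here is a discrete sketch (an illustration, not the patent's implementation) of evaluating E(a, b) on a pixel grid, assuming the robust penalty Ψ(s²) = √(s² + ε²) given above, and assuming the view-(c+1) image and its gradient have already been resampled at x_b + d; the weights γ and α are hypothetical, since the patent does not state their values.

```python
import numpy as np

def psi(s2, eps=0.001):
    """Robust penalty Psi(s^2) = sqrt(s^2 + eps^2); eps = 0.001 as in the text."""
    return np.sqrt(s2 + eps**2)

def energy(I_ref, I_warp, grad_ref, grad_warp, a, b, beta, gamma=5.0, alpha=10.0):
    """Discrete E(a, b): brightness/gradient constancy masked by the
    occlusion map beta, plus the smoothness term on the offsets (a, b).

    I_warp and grad_warp are I and its gradient from view c+1, already
    resampled at x_b + d; gamma and alpha are illustrative weights."""
    data = (I_warp - I_ref) ** 2 + gamma * np.sum((grad_warp - grad_ref) ** 2, axis=-1)
    ga_y, ga_x = np.gradient(a)          # spatial gradients of the offset field
    gb_y, gb_x = np.gradient(b)
    smooth = ga_x**2 + ga_y**2 + gb_x**2 + gb_y**2
    return float(np.sum(beta * psi(data)) + alpha * np.sum(psi(smooth)))
```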

Step S104: fuse the obtained depth point clouds of all viewing angles to obtain the static three-dimensional model. Specifically, this may comprise the following steps:

Step S301: fuse the depth point clouds of all viewing angles and remove outliers by a silhouette constraint.

Step S302: reconstruct the complete surface model by the marching cubes method to obtain the static three-dimensional model.
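A minimal sketch of steps S301 and S302 follows, assuming the fused points already carry a boolean mask from the silhouette test; the density-splatting volume is a stand-in for whichever implicit-surface fusion is actually used, and only the marching-cubes extraction (here via scikit-image) is named by the patent.

```python
import numpy as np
from skimage import measure  # provides a marching cubes implementation

def fuse_and_mesh(points, silhouette_ok, grid_res=128):
    """Fuse per-view depth point clouds into a volume and extract a mesh.

    points: (N, 3) stacked depth points from all views;
    silhouette_ok: (N,) bool, False for outliers rejected by the
    silhouette constraint; grid_res is an illustrative resolution."""
    pts = points[silhouette_ok]                      # step S301: drop outliers
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    idx = ((pts - lo) / (hi - lo + 1e-9) * (grid_res - 1)).astype(int)
    vol = np.zeros((grid_res,) * 3)
    np.add.at(vol, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)  # splat point density
    verts, faces, _, _ = measure.marching_cubes(vol, level=0.5)  # step S302
    return verts, faces
```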

The present invention thus ensures the accuracy and completeness of the reconstructed shape of the static three-dimensional model; this accuracy and completeness is the basis of dynamic three-dimensional model reconstruction.

As shown in Fig. 3, the method for capturing and reconstructing a dynamic three-dimensional model according to an embodiment of the present invention comprises the following steps:

Step S401: convert the surface model of the static three-dimensional model into a volume model and use it as the default scene representation for motion tracking.
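One way to realize this conversion, sketched under assumptions: the surface mesh is watertight, the point-in-mesh test comes from the trimesh library, and the grid pitch is hypothetical; the patent does not specify how the volume model is actually built.

```python
import numpy as np
import trimesh  # used here only for the inside/outside test; an assumption

def surface_to_volume(mesh, pitch=0.02):
    """Sample a regular grid over the bounding box of the watertight
    surface model; grid points inside the mesh become the vertices of the
    volume model used as the motion-tracking scene representation."""
    lo, hi = mesh.bounds
    axes = [np.arange(a, b, pitch) for a, b in zip(lo, hi)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    inside = mesh.contains(grid)   # ray-casting point-in-mesh test
    return grid[inside]
```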

Step S402: obtain the initial three-dimensional motion of the model vertices at the next time instant. Specifically, this may comprise the following steps (a solver sketch follows step S505):

Step S501: compute the optical flow of each viewing-angle image at the next time instant.

Step S502: compute the scene flow of the visible points from the optical flow of each viewing angle and that of the adjacent viewing angle; the scene flow of invisible points is assigned a relatively large value, for example 10000.

Step S503: taking the computed scene flows of all viewing angles as columns, construct the matrix M ∈ ℝ^{m×n}, where m is the number of surface vertices.

Step S504: based on sparse representation theory, obtain the new matrix X by solving the following low-rank matrix recovery problem:

minimize ‖X‖_*
subject to P_Ω(X) = P_Ω(M),

where X is the unknown variable; Ω is a subset of the complete element set [m] × [n] ([n] is defined as the sequence {1, ..., n}); and P_Ω is the sampling operator, defined as

[P_Ω(X)]_ij = X_ij, if (i, j) ∈ Ω; 0, otherwise.

Step S505: take the mean of each row of the matrix X as the motion f(v_i) of the vertex corresponding to that row, thereby obtaining the vertex position at the next time instant, v′_i = v_i + f(v_i).
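The patent states the nuclear-norm problem but not the solver. Below is a minimal sketch using singular value thresholding (SVT), one standard solver for this recovery problem; the threshold tau, step size and iteration count are illustrative assumptions, and `observed` marks the entries of Ω (the reliable, visible-point scene flows).

```python
import numpy as np

def recover_scene_flow(M, observed, tau=None, step=1.2, iters=200):
    """Approximately solve: minimize ||X||_* s.t. P_Omega(X) = P_Omega(M)
    by singular value thresholding. observed is a boolean mask for Omega;
    tau, step and iters are illustrative solver parameters."""
    m, n = M.shape
    tau = 5.0 * np.sqrt(m * n) if tau is None else tau
    Y = np.zeros_like(M, dtype=float)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values
        Y += step * observed * (M - X)            # enforce agreement on Omega
    return X

# Step S505: each row's mean then gives that vertex's motion f(v_i),
# e.g. f = recover_scene_flow(M, observed).mean(axis=1)
```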

Step S403: select accurate vertices from the obtained vertices according to the predetermined space-time constraints as position constraints for the volume deformation. In an embodiment of the present invention, the predetermined space-time constraints comprise:

C_sp = (1/N) Σ_{n=0}^{N−1} (1 − P_sil^n(v′_i)),

C_tmp = (1/N_v) Σ_{n∈v(i)} (1 − P_z^n(p(v_i), p(v′_i))),

C_smth = ‖f(v_i) − (1/N_s) Σ_{j∈N(i)} f(v_j)‖,

where P_sil^n(v′_i) is the silhouette error of the estimate: its value is 1 if the pixel obtained by projecting v′_i onto the image of camera n at the next time instant lies inside the silhouette, and 0 otherwise; v(i) is the set of cameras in which v_i is visible; N_v is the number of visible cameras; P_z^n(p(v_i), p(v′_i)) computes the ZNCC correlation between the projected positions of v_i and v′_i on the image of camera n; and N_s is the number of direct neighbors of vertex v_i.
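The ZNCC term P_z^n can be computed between small image patches around the two projections; a minimal sketch follows, with the patch extraction and size left as assumptions.

```python
import numpy as np

def zncc(patch_a, patch_b, eps=1e-8):
    """Zero-mean normalized cross-correlation between two equal-size
    patches around p(v_i) and p(v'_i) in the image of camera n; returns a
    value in [-1, 1], where 1 means a perfect photometric match."""
    a = patch_a.astype(float) - patch_a.mean()
    b = patch_b.astype(float) - patch_b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
```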

Step S404: drive the Laplacian volume deformation framework with the position constraints to update the dynamic three-dimensional model. Specifically, this comprises the following steps (a sketch of the rotation fit follows step S603):

Step S601: set up the following Laplacian volume deformation linear system; for each v′_i,

Σ_{j∈N(i)} ω_ij (v′_i − v′_j) = Σ_{j∈N(i)} (ω_ij/2)(R_i + R_j)(v_i − v_j),

where R_i and R_j are rotation matrices, initialized to the identity matrix.

Step S602: define the covariance matrix

C_i = Σ_{j∈N(i)} ω_ij (v_i − v_j)(v′_i − v′_j)^T,

and compute its singular value decomposition C_i = U_i D_i V_i^T, from which R_i = V_i U_i^T. If det(R_i) ≤ 0, change the sign of the column of U_i corresponding to the smallest singular value.

Step S603: if the silhouette error is smaller than the given threshold, update the model; otherwise return to step S601.
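A sketch of the per-vertex rotation fit inside this loop (steps S601 and S602), written as the standard as-rigid-as-possible local step; the neighbor weights ω_ij and the surrounding global solve are assumed, not specified here.

```python
import numpy as np

def fit_rotation(i, V, Vp, nbrs, w):
    """Step S602 for one vertex: fit the rotation R_i that best maps the
    rest edges (v_i - v_j) onto the deformed edges (v'_i - v'_j), with the
    sign correction applied when det(R_i) <= 0.

    V, Vp: (num_vertices, 3) rest and current deformed positions;
    nbrs: indices of the neighbors N(i); w: weights omega_ij (assumed)."""
    E = V[i] - V[nbrs]                      # rest edges, shape (k, 3)
    Ep = Vp[i] - Vp[nbrs]                   # deformed edges, shape (k, 3)
    C = (w[:, None] * E).T @ Ep             # covariance C_i, (3, 3)
    U, s, Vt = np.linalg.svd(C)
    R = Vt.T @ U.T                          # R_i = V_i U_i^T
    if np.linalg.det(R) <= 0:               # flip the column of U_i matching
        U[:, -1] *= -1.0                    # the smallest singular value
        R = Vt.T @ U.T
    return R
```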

As a preferred embodiment, the above methods for capturing and reconstructing static and dynamic three-dimensional models may be used together; Fig. 4 is a schematic block diagram of the entire dynamic three-dimensional reconstruction method according to an embodiment of the present invention.

As shown in Fig. 5, dynamic three-dimensional model results are obtained by applying the proposed method to two long sequences. For each sequence, the first picture is an overview placing the models of all time instants together, and the subsequent pictures are the modeling results at the individual time instants.

An embodiment of the present invention further proposes a system for capturing and reconstructing a static three-dimensional model, comprising a plurality of cameras surrounding a ring-shaped field and a static three-dimensional model reconstruction device. The cameras capture images of the moving object in the field; the reconstruction device obtains the visual hull model, obtains the depth point cloud of each viewing angle from the image of each viewing angle, the visual hull model and the preset constraints, and fuses the obtained depth point clouds of all viewing angles to obtain the static three-dimensional model. For the specific operation of the static three-dimensional model reconstruction device, reference may be made to the above embodiments of the method for capturing and reconstructing a static three-dimensional model, which will not be repeated here.

In addition, an embodiment of the present invention proposes a system for capturing and reconstructing a dynamic three-dimensional model, comprising: a plurality of cameras surrounding a ring-shaped field, a static three-dimensional model acquisition device and a dynamic three-dimensional model reconstruction device. The cameras capture images of the moving object in the field; the static three-dimensional model acquisition device obtains the static three-dimensional model; and the dynamic three-dimensional model reconstruction device converts the surface model of the static three-dimensional model into a volume model and uses it as the default scene representation for motion tracking, obtains the initial three-dimensional motion of the model vertices at the next time instant, selects accurate vertices from the obtained vertices according to the predetermined space-time constraints as position constraints for the volume deformation, and drives the Laplacian volume deformation framework with the position constraints to update the dynamic three-dimensional model. For the specific operation of the static and dynamic three-dimensional model reconstruction devices, reference may be made to the above embodiments of the methods for capturing and reconstructing static and dynamic three-dimensional models, which will not be repeated here.

The present invention ensures the accuracy and completeness of the reconstructed shape of the static three-dimensional model. In addition, the invention designs a new three-dimensional motion estimation method based on sparse representation theory and a deformation optimization framework based on the volume model, and can therefore produce high-quality dynamic reconstruction results. Furthermore, the invention does not depend on three-dimensional scanners or optical markers, so its cost is low, and it can track the motion of a person wearing arbitrary clothing.

Although embodiments of the present invention have been shown and described, those of ordinary skill in the art will understand that various changes, modifications, substitutions and variations can be made to these embodiments without departing from the principle and spirit of the present invention; the scope of the invention is defined by the appended claims and their equivalents.

Claims (6)

1. A method for capturing and reconstructing a static three-dimensional model, characterized by comprising the following steps:
capturing images of a moving object in a ring-shaped field;
obtaining a visual hull model;
obtaining a depth point cloud for each viewing angle from the image of each viewing angle, the visual hull model and preset constraints; and
fusing the obtained depth point clouds of all viewing angles to obtain a static three-dimensional model,
wherein obtaining the depth point cloud of each viewing angle from the image of each viewing angle, the visual hull model and the preset constraints comprises:
intersecting the image of each viewing angle with the obtained visual hull model to obtain the visible point cloud of that viewing angle;
projecting the visible point cloud of each viewing angle onto the image of that viewing angle to obtain an initial depth point cloud estimate; and
obtaining an accurate depth point cloud from the initial depth point cloud estimate and the preset constraints;
the preset constraints comprise one or more of an epipolar geometry constraint, a brightness constraint, a gradient constraint and a smoothness constraint;
the accurate depth point cloud is obtained by the following formula:

E(a, b) = ∫_Ω β(x) Ψ(|I(x_b + d) − I(x)|² + γ|∇I(x_b + d) − ∇I(x)|²) dx + α ∫_Ω Ψ(|∇a|² + |∇b|²) dx,

where d = (a, b, 1) is the initial depth point cloud estimate; x := (x, y, c) is a pixel position (x, y) in the image of the reference view c, and I(x) is the brightness at that pixel position; x_b := (x_b, y_b, c) is the epipole in view c+1; ∇ is the spatial gradient operator; β(x) is the occlusion map; and Ψ is a robust penalty function.

2. The method for capturing and reconstructing a static three-dimensional model according to claim 1, characterized in that fusing the obtained depth point clouds of all viewing angles to obtain the static three-dimensional model comprises:
fusing the depth point clouds of all viewing angles and removing outliers by a silhouette constraint; and
reconstructing the complete surface model by the marching cubes method to obtain the static three-dimensional model.

3. The method for capturing and reconstructing a static three-dimensional model according to claim 1 or 2, characterized by further comprising:
constructing a dynamic three-dimensional model from the static three-dimensional model.

4. The method for capturing and reconstructing a static three-dimensional model according to claim 3, characterized in that constructing the dynamic three-dimensional model from the static three-dimensional model comprises:
converting the surface model of the static three-dimensional model into a volume model and using it as the default scene representation for motion tracking;
obtaining the initial three-dimensional motion of the model vertices at the next time instant;
selecting accurate vertices from the obtained vertices according to predetermined space-time constraints as position constraints for volume deformation; and
driving a Laplacian volume deformation framework with the position constraints to update the dynamic three-dimensional model,
wherein obtaining the initial three-dimensional motion of the model vertices at the next time instant comprises:
computing the optical flow of each viewing-angle image at the next time instant;
computing the scene flow of the visible points from the optical flow of each viewing angle and that of the adjacent viewing angle, the scene flow of invisible points being assigned a relatively large value;
constructing, with the computed scene flows of all viewing angles as columns, the matrix M ∈ ℝ^{m×n}, where m is the number of surface vertices;
obtaining a matrix X based on sparse representation theory; and
taking the mean of each row of the matrix X as the motion f(v_i) of the vertex corresponding to that row, thereby obtaining the vertex position v′_i at the next time instant,
wherein obtaining the matrix X based on sparse representation theory comprises solving the following low-rank matrix recovery problem to obtain the new matrix X:

minimize ‖X‖_*
subject to P_Ω(X) = P_Ω(M),

where X is the unknown variable; Ω is a subset of the complete element set [m] × [n], with [n] defined as the sequence {1, ..., n}; and P_Ω is the sampling operator, defined as [P_Ω(X)]_ij = X_ij if (i, j) ∈ Ω, and 0 otherwise;
the predetermined space-time constraints comprise:

C_sp = (1/N) Σ_{n=0}^{N−1} (1 − P_sil^n(v′_i)),

C_tmp = (1/N_v) Σ_{n∈v(i)} (1 − P_z^n(p(v_i), p(v′_i))),

C_smth = ‖f(v_i) − (1/N_s) Σ_{j∈N(i)} f(v_j)‖,

where P_sil^n(v′_i) is the silhouette error of the estimate, its value being 1 if the pixel obtained by projecting v′_i onto the image of camera n at the next time instant lies inside the silhouette and 0 otherwise; v(i) is the set of cameras in which v_i is visible; N_v is the number of visible cameras; P_z^n(p(v_i), p(v′_i)) computes the ZNCC correlation between the projected positions of v_i and v′_i on the image of camera n; and N_s is the number of direct neighbors of vertex v_i.

5. The method for capturing and reconstructing a static three-dimensional model according to claim 4, characterized in that driving the Laplacian volume deformation framework with the position constraints to update the dynamic three-dimensional model comprises:
initializing the rotation matrices to the identity matrix, R_i = R_j = I;
optimizing by Laplacian volume deformation;
obtaining new rotation matrices R_i and R_j; and
determining whether the silhouette error is smaller than a predetermined value; if it is smaller than the predetermined value, updating the dynamic three-dimensional model, and if it is not smaller than the predetermined value, continuing the optimization by Laplacian volume deformation.

6. A system for capturing and reconstructing a static three-dimensional model, characterized by comprising:
a plurality of cameras surrounding a ring-shaped field, for capturing images of a moving object in the ring-shaped field; and
a static three-dimensional model reconstruction device, for obtaining a visual hull model, obtaining a depth point cloud for each viewing angle from the image of each viewing angle, the visual hull model and preset constraints, and fusing the obtained depth point clouds of all viewing angles to obtain a static three-dimensional model,
wherein the static three-dimensional model reconstruction device intersects the image of each viewing angle with the obtained visual hull model to obtain the visible point cloud of that viewing angle, projects the visible point cloud of each viewing angle onto the image of that viewing angle to obtain an initial depth point cloud estimate, and obtains an accurate depth point cloud from the initial depth point cloud estimate and the preset constraints;
the preset constraints comprise one or more of an epipolar geometry constraint, a brightness constraint, a gradient constraint and a smoothness constraint;
the static three-dimensional model reconstruction device obtains the accurate depth point cloud by the following formula:

E(a, b) = ∫_Ω β(x) Ψ(|I(x_b + d) − I(x)|² + γ|∇I(x_b + d) − ∇I(x)|²) dx + α ∫_Ω Ψ(|∇a|² + |∇b|²) dx,

where d = (a, b, 1) is the initial depth point cloud estimate; x := (x, y, c) is a pixel position (x, y) in the image of the reference view c, and I(x) is the brightness at that pixel position; x_b := (x_b, y_b, c) is the epipole in view c+1; ∇ is the spatial gradient operator; β(x) is the occlusion map; and Ψ is a robust penalty function.
CN2010101411826A 2010-04-06 2010-04-06 Method and system for capturing and rebuilding three-dimensional model Expired - Fee Related CN101833786B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010101411826A CN101833786B (en) 2010-04-06 2010-04-06 Method and system for capturing and rebuilding three-dimensional model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010101411826A CN101833786B (en) 2010-04-06 2010-04-06 Method and system for capturing and rebuilding three-dimensional model

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN 201110167593 Division CN102222361A (en) 2010-04-06 2010-04-06 Method and system for capturing and reconstructing 3D model

Publications (2)

Publication Number Publication Date
CN101833786A CN101833786A (en) 2010-09-15
CN101833786B true CN101833786B (en) 2011-12-28

Family

ID=42717847

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010101411826A Expired - Fee Related CN101833786B (en) 2010-04-06 2010-04-06 Method and system for capturing and rebuilding three-dimensional model

Country Status (1)

Country Link
CN (1) CN101833786B (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102306390B (en) * 2011-05-18 2013-11-06 清华大学 Method and device for capturing movement based on framework and partial interpolation
AU2011203028B1 (en) * 2011-06-22 2012-03-08 Microsoft Technology Licensing, Llc Fully automatic dynamic articulated model calibration
CN102446366B (en) * 2011-09-14 2013-06-19 天津大学 Time-space jointed multi-view video interpolation and three-dimensional modeling method
CN102722908B (en) * 2012-05-25 2016-06-08 任伟峰 Method for position and device are put in a kind of object space in three-dimension virtual reality scene
CN102800127B (en) * 2012-07-18 2014-11-26 清华大学 Light stream optimization based three-dimensional reconstruction method and device
CN103903300A (en) * 2012-12-31 2014-07-02 博世汽车部件(苏州)有限公司 Object surface height reconstructing method, object surface height reconstructing system, optical character extracting method and optical character extracting system
CN103927787A (en) * 2014-04-30 2014-07-16 南京大学 Method and device for improving three-dimensional reconstruction precision based on matrix recovery
CN104599314A (en) * 2014-06-12 2015-05-06 深圳奥比中光科技有限公司 Three-dimensional model reconstruction method and system
CN105488823B (en) * 2014-09-16 2019-10-18 株式会社日立制作所 CT image reconstruction method, CT image reconstruction device and CT system
US20160140733A1 (en) * 2014-11-13 2016-05-19 Futurewei Technologies, Inc. Method and systems for multi-view high-speed motion capture
US10127709B2 (en) * 2014-11-28 2018-11-13 Panasonic Intellectual Property Management Co., Ltd. Modeling device, three-dimensional model generating device, modeling method, and program
CN107170037A (en) * 2016-03-07 2017-09-15 深圳市鹰眼在线电子科技有限公司 A kind of real-time three-dimensional point cloud method for reconstructing and system based on multiple-camera
WO2018045532A1 (en) * 2016-09-08 2018-03-15 深圳市大富网络技术有限公司 Method for generating square animation and related device
US10572720B2 (en) * 2017-03-01 2020-02-25 Sony Corporation Virtual reality-based apparatus and method to generate a three dimensional (3D) human face model using image and depth data
CN107358645B (en) * 2017-06-08 2020-08-11 上海交通大学 Product 3D model reconstruction method and system
TWI657407B (en) * 2017-12-07 2019-04-21 財團法人資訊工業策進會 Three-dimensional point cloud tracking apparatus and method by recurrent neural network
CN108769361B (en) * 2018-04-03 2020-10-27 华为技术有限公司 Control method of terminal wallpaper, terminal and computer-readable storage medium
CN109271893B (en) * 2018-08-30 2021-01-01 百度在线网络技术(北京)有限公司 Method, device, equipment and storage medium for generating simulation point cloud data
WO2021051220A1 (en) * 2019-09-16 2021-03-25 深圳市大疆创新科技有限公司 Point cloud fusion method, device, and system, and storage medium
CN112001958B (en) * 2020-10-28 2021-02-02 浙江浙能技术研究院有限公司 Virtual point cloud three-dimensional target detection method based on supervised monocular depth estimation
WO2022087932A1 (en) * 2020-10-29 2022-05-05 Huawei Technologies Co., Ltd. Non-rigid 3d object modeling using scene flow estimation

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6791542B2 (en) * 2002-06-17 2004-09-14 Mitsubishi Electric Research Laboratories, Inc. Modeling 3D objects with opacity hulls
CN100557640C (en) * 2008-04-28 2009-11-04 清华大学 An Interactive Multi-viewpoint 3D Model Reconstruction Method
CN101650834A (en) * 2009-07-16 2010-02-17 上海交通大学 Three dimensional reconstruction method of human body surface under complex scene

Also Published As

Publication number Publication date
CN101833786A (en) 2010-09-15

Similar Documents

Publication Publication Date Title
CN101833786B (en) Method and system for capturing and rebuilding three-dimensional model
CN102222361A (en) Method and system for capturing and reconstructing 3D model
CN108711185B (en) 3D reconstruction method and device combining rigid motion and non-rigid deformation
CN108038905B (en) A kind of Object reconstruction method based on super-pixel
CN111882668B (en) Multi-view three-dimensional object reconstruction method and system
Sturm et al. CopyMe3D: Scanning and printing persons in 3D
CN104376552B (en) A kind of virtual combat method of 3D models and two dimensional image
Ahmed et al. Dense correspondence finding for parametrization-free animation reconstruction from video
CN108053476B (en) A system and method for measuring human parameters based on segmented three-dimensional reconstruction
CN104915978B (en) Realistic animation generation method based on body-sensing camera Kinect
CN101658347B (en) Method for obtaining dynamic shape of foot model
CN103649998A (en) Method for determining a parameter set designed for determining the pose of a camera and/or for determining a three-dimensional structure of the at least one real object
Li et al. 3d human avatar digitization from a single image
Sizintsev et al. Spatiotemporal stereo and scene flow via stequel matching
Wang et al. TerrainFusion: Real-time digital surface model reconstruction based on monocular SLAM
Alsadik et al. Efficient use of video for 3D modelling of cultural heritage objects
Chen et al. Research on 3D reconstruction based on multiple views
Guan et al. EVI‐SAM: Robust, Real‐Time, Tightly‐Coupled Event–Visual–Inertial State Estimation and 3D Dense Mapping
Ke et al. Towards real-time 3D visualization with multiview RGB camera array
CN112132971B (en) Three-dimensional human modeling method, three-dimensional human modeling device, electronic equipment and storage medium
Remondino 3D reconstruction of static human body with a digital camera
Mahmoud et al. Fast 3d structure from motion with missing points from registration of partial reconstructions
Suttasupa et al. Plane detection for Kinect image sequences
JP2009048305A (en) Shape analysis program and shape analysis apparatus
CN105184860A (en) Method for reconstructing dense three-dimensional structure and motion field of dynamic face simultaneously

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20111228