CN101833786B - Method and system for capturing and rebuilding three-dimensional model - Google Patents
Method and system for capturing and rebuilding three-dimensional model
- Publication number
- CN101833786B (application numbers CN2010101411826A, CN201010141182A)
- Authority
- CN
- China
- Prior art keywords
- model
- static
- constraints
- point cloud
- viewing angle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Processing Or Creating Images (AREA)
- Image Processing (AREA)
Abstract
The present invention provides a method for capturing and reconstructing a static three-dimensional model, comprising the following steps: capturing images of a moving object in a circular capture field; obtaining a visual hull model; computing a depth point cloud for each viewing angle from the images of the respective viewing angles, the visual hull model, and preset constraints; and fusing the obtained depth point clouds of all viewing angles to obtain the static three-dimensional model. The invention ensures the accuracy and completeness of the reconstructed shape of the static three-dimensional model. In addition, the invention provides a method for capturing and reconstructing a dynamic three-dimensional model.
Description
Technical Field
The present invention relates to the field of computer video processing, and in particular to a method and system for capturing and reconstructing a three-dimensional model.
Background
For three-dimensional reconstruction of dynamic scenes, much prior work treats the problem as a simple accumulation of static scene reconstructions along the time dimension: temporal information is not used to assist reconstruction, and each frame is modeled statically on its own. This approach has high complexity and large storage requirements, cannot guarantee topological consistency of the model between frames, and is prone to jitter. Moreover, it cannot effectively analyze the motion of non-rigid models, nor can it produce a model at an arbitrary time by temporal interpolation. To address these problems, the prior art proposed reconstruction methods that jointly solve for the 3D scene flow and the geometric model, and further proposed variational methods that unify the reconstruction of dynamic scene geometry and motion. However, because geometric reconstruction and motion reconstruction are carried out iteratively, with the geometry at one time instant serving as the initialization of the motion reconstruction that yields the model at the next instant, the efficiency of such joint spatio-temporal reconstruction remains low, and the practical results are unsatisfactory.
Therefore, to avoid the difficulty and mediocre quality of joint spatio-temporal reconstruction, another class of video-based dynamic 3D reconstruction methods uses the static 3D reconstruction of the initial frame as the scene representation, applies a 3D motion tracking algorithm to solve for the motion of the 3D object, and drives the static model with a suitable deformation algorithm to obtain the dynamic reconstruction. Video-based 3D motion tracking currently falls into two categories: marker-based and markerless. Marker-based tracking is accurate, but it requires the captured performer to wear a tight suit covered with markers, which limits the capture of shape and texture. Markerless tracking overcomes this drawback. One markerless method combines a kinematic model with a clothing model to capture the motion of a body wearing more general clothing, but it cannot recover the precise geometry of the moving object. Another markerless method captures the motion of both the skeleton and the shape of the object; however, because some local surface regions do not change over time as they should, it still cannot track 3D motion effectively. Furthermore, since this method relies solely on silhouette information, it is very sensitive to silhouette errors. Although markerless methods are more flexible, it is difficult for them to match the accuracy of marker-based methods. In addition, most 3D motion tracking methods rely on an extracted kinematic skeleton to help capture motion; a kinematic skeleton can only track rigid motion, so such methods often require additional scanning techniques to capture time-varying shape. Finally, none of the above methods can track the motion of a person wearing arbitrary clothing.
In recent years, new methods for animation capture and design, animation editing, and deformation transfer have emerged in computer graphics. These methods no longer depend on kinematic skeletons and motion parameters; instead, they build on surface models and general shape deformation, and can therefore capture both rigid and non-rigid deformation. However, in all such multi-view-video-based motion capture and recovery methods, the static 3D reconstruction of the initial frame must be produced with a laser scanner. Although a laser scanner yields high-precision reconstructions, it is expensive and laborious, and the subject must remain completely still during scanning. Moreover, for the convenience of subsequent processing the subject usually stands with both fists clenched, and the multi-view video is likewise captured with the fists clenched. Finally, when the laser-scanned reconstruction serves as the initial scene representation, surface features present at scan time, such as folds in the clothing, persist throughout the entire recovered dynamic 3D sequence.
Summary of the Invention
The object of the present invention is to solve at least the above technical drawbacks. The present invention provides methods and systems for capturing and reconstructing static and dynamic three-dimensional models.
To this end, one aspect of the present invention provides a method for capturing and reconstructing a static three-dimensional model, comprising the following steps: capturing images of a moving object in a circular field; obtaining a visual hull model; computing a depth point cloud for each viewing angle from the images of the respective viewing angles, the visual hull model, and preset constraints; and fusing the obtained depth point clouds of all viewing angles to obtain the static three-dimensional model.
Another aspect of the present invention provides a system for capturing and reconstructing a static three-dimensional model, comprising: a plurality of cameras arranged around a circular field for capturing images of a moving object within the field; and a static 3D model reconstruction device for obtaining a visual hull model, computing a depth point cloud for each viewing angle from the images of the respective viewing angles, the visual hull model, and preset constraints, and fusing the obtained depth point clouds of all viewing angles into a static three-dimensional model.
A further aspect of the present invention provides a method for capturing and reconstructing a dynamic three-dimensional model, comprising the following steps: obtaining a static three-dimensional model; converting the surface model of the static model into a volumetric model and using it as the default scene representation for motion tracking; obtaining the initial three-dimensional motion of the model vertices at the next time instant; selecting accurate vertices from the obtained vertices according to predetermined spatio-temporal constraints to serve as position constraints for volumetric deformation; and driving a Laplacian volumetric deformation framework with these position constraints to update the dynamic three-dimensional model.
A further aspect of the present invention provides a system for capturing and reconstructing a dynamic three-dimensional model, comprising: a plurality of cameras arranged around a circular field for capturing images of a moving object within the field; a static 3D model acquisition device for obtaining a static three-dimensional model; and a dynamic 3D model reconstruction device for converting the surface model of the static model into a volumetric model and using it as the default scene representation for motion tracking, obtaining the initial three-dimensional motion of the model vertices at the next time instant, selecting accurate vertices from the obtained vertices according to predetermined spatio-temporal constraints as position constraints for volumetric deformation, and driving the Laplacian volumetric deformation framework with these position constraints to update the dynamic three-dimensional model.
The present invention ensures the accuracy and completeness of the reconstructed static 3D shape. It also designs a new 3D motion estimation method based on sparse representation theory, together with a deformation optimization framework based on the volumetric model, and can therefore produce high-quality dynamic reconstruction results. Moreover, the invention need not rely on 3D scanners or optical markers, so its cost is low, and it can track the motion of a person wearing arbitrary clothing.
Additional aspects and advantages of the invention will be set forth in part in the description that follows; in part they will become apparent from the description, or may be learned through practice of the invention.
Brief Description of the Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart of a method for capturing and reconstructing a static three-dimensional model according to an embodiment of the present invention;
Fig. 2 shows 20 cameras distributed in a ring around the scene to be captured according to an embodiment of the present invention;
Fig. 3 is a flowchart of a method for capturing and reconstructing a dynamic three-dimensional model according to an embodiment of the present invention;
Fig. 4 is a schematic block diagram of the complete dynamic 3D reconstruction method according to an embodiment of the present invention; and
Fig. 5 shows the dynamic 3D model results obtained by applying the method of the present invention to two long sequences.
Detailed Description of the Embodiments
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numerals denote identical or similar elements, or elements with identical or similar functions, throughout. The embodiments described below with reference to the drawings are exemplary; they serve only to explain the present invention and are not to be construed as limiting it.
The embodiments of the present invention propose capture and reconstruction methods for both static and dynamic 3D models. It should be noted that the capture and reconstruction of a dynamic 3D model may build on a static 3D model obtained by the present invention, or on one obtained by other means, for example an existing 3D scanner; all such variants fall within the scope of protection of the present invention.
As shown in Fig. 1, the method for capturing and reconstructing a static three-dimensional model according to an embodiment of the present invention comprises the following steps:
Step S101: capture images of the moving object in the circular field. For example, 20 cameras, each running at 30 frames per second, are placed around the field and controlled to record the moving object. Those skilled in the art may of course use more cameras to obtain more viewing angles, or fewer cameras; all such variants fall within the scope of protection of the present invention. In one example of the present invention, shown in Fig. 2, 20 cameras are distributed in a ring around the scene to be captured, where Ci denotes the i-th camera. The cameras capture images at a resolution of 1024×768, and the captured subject stands at the center of the ring.
Step S102: obtain the visual hull model at the initial time instant.
Step S103: compute the depth point cloud of each viewing angle from the images of the respective viewing angles, the visual hull model, and the preset constraints. Specifically, this may include:
Step S201: intersect each viewing angle's image with the obtained visual hull model to obtain the visible point cloud of that viewing angle.
Step S202: project the visible point cloud of each viewing angle into the image of that viewing angle to obtain the initial depth point cloud estimate, where d = (a, b, 1) is the offset along the epipolar line.
Step S203: obtain an accurate depth point cloud from the initial depth point cloud estimate and the preset constraints, where the preset constraints include one or more of an epipolar geometry constraint, a brightness constraint, a gradient constraint, and a smoothness constraint. In a preferred embodiment of the present invention, all four constraints are applied together, and the accurate depth point cloud is obtained by minimizing an energy built from the following terms:
Here x := (x, y, c) denotes a pixel position (x, y) in the image of the reference view c, whose brightness is I(x); x_b is the corresponding epipolar point in view c+1, and w is the offset of the corresponding point of x in view c+1; ∇ is the spatial gradient operator; β(x) is the occlusion map, equal to 1 for pixels in non-occluded regions and 0 otherwise. To limit the influence of outliers under the model assumptions, a robust penalty function Ψ(s²) = √(s² + ε²) is applied to produce a total variation regularization, where ε is a small value (set to 0.001 in the experiments). The formula comprises four constraints: the epipolar geometry constraint (x_b + d = x + w), the brightness constraint (I(x_b + d) = I(x)), the gradient constraint (∇I(x_b + d) = ∇I(x)), and the smoothness constraint on ∇w.
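The structure of these data terms can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the patent's exact energy: the brightness-constancy and gradient-constancy residuals are each wrapped in the robust penalty Ψ(s²) = √(s² + ε²) with ε = 0.001 as described above, and the helper names are hypothetical.

```python
import numpy as np

EPS = 0.001  # the small constant epsilon (set to 0.001 in the experiments)

def robust_penalty(s2):
    """Psi(s^2) = sqrt(s^2 + eps^2): a differentiable approximation of |s|
    that yields a total-variation-like regularization."""
    return np.sqrt(s2 + EPS ** 2)

def data_cost(i_ref, i_adj, grad_ref, grad_adj):
    """Per-pixel data cost for one candidate correspondence:
    brightness constancy I(x_b + d) = I(x) plus gradient constancy,
    each residual passed through the robust penalty.
    i_ref, i_adj: intensities sampled at x and at x_b + d;
    grad_ref, grad_adj: their spatial gradients (2-vectors)."""
    brightness = robust_penalty((i_adj - i_ref) ** 2)
    gradient = robust_penalty(
        np.sum((np.asarray(grad_adj, dtype=float) - np.asarray(grad_ref, dtype=float)) ** 2)
    )
    return brightness + gradient
```

A perfect correspondence does not give a cost of exactly zero; the ε offset leaves a small floor, which is what makes the penalty differentiable at zero residual.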
Step S104: fuse the obtained depth point clouds of all viewing angles to obtain the static three-dimensional model. Specifically, this may include the following steps:
Step S301: fuse the depth point clouds of all viewing angles and remove outliers by means of the silhouette constraint.
Step S302: reconstruct the complete surface model with the marching cubes algorithm to obtain the static three-dimensional model.
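The silhouette constraint of step S301 can be sketched as follows. This is a minimal illustration under assumed inputs: the projection callables and silhouette masks are placeholders, not the patent's implementation.

```python
import numpy as np

def remove_outliers(points, projections, silhouettes):
    """Discard fused 3D points that project outside the foreground silhouette
    of any camera: a point on the true surface must fall inside every silhouette.
    points:      (N, 3) array, the fused depth point cloud
    projections: one callable per camera mapping (N, 3) points to (N, 2)
                 integer pixel coordinates (u, v)
    silhouettes: one boolean (H, W) mask per camera, True on the foreground"""
    keep = np.ones(len(points), dtype=bool)
    for project, sil in zip(projections, silhouettes):
        uv = project(points)
        u, v = uv[:, 0], uv[:, 1]
        h, w = sil.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        ok = np.zeros(len(points), dtype=bool)
        ok[inside] = sil[v[inside], u[inside]]  # rows indexed by v, columns by u
        keep &= ok
    return points[keep]
```

For instance, with a single camera that simply drops the z coordinate, any point projecting outside the image bounds or onto the background mask is treated as an outlier and removed.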
The present invention thus ensures the accuracy and completeness of the reconstructed static 3D shape; this accuracy and completeness is the foundation of dynamic 3D model reconstruction.
As shown in Fig. 3, the method for capturing and reconstructing a dynamic three-dimensional model according to an embodiment of the present invention comprises the following steps:
Step S401: convert the surface model of the static three-dimensional model into a volumetric model and use it as the default scene representation for motion tracking.
Step S402: obtain the initial three-dimensional motion of the model vertices at the next time instant. Specifically, this may include the following steps:
Step S501: compute the optical flow of each viewing angle's image at the next time instant.
Step S502: compute the scene flow of the visible points from the optical flow of each viewing angle and that of the adjacent viewing angles; assign the scene flow of invisible points a relatively large value, for example 10000.
Step S503: taking the computed scene flow of each viewing angle as a column, construct the matrix M ∈ ℝ^(m×n), where m is the number of surface vertices and each of the n columns corresponds to one viewing angle.
Step S504: based on sparse representation theory, obtain a new matrix X by solving the following low-rank matrix recovery problem:
minimize ‖X‖_*  subject to  P_Ω(X) = P_Ω(M)
where X is the unknown variable, Ω is a subset of the complete index set [m] × [n] ([n] denotes the sequence {1, …, n}), and P_Ω is the sampling operator, defined by (P_Ω(X))_ij = X_ij if (i, j) ∈ Ω and 0 otherwise.
Step S505: take the mean of each row of the matrix X as the motion of the vertex corresponding to that row, thereby obtaining the vertex position at the next time instant.
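Steps S503-S505 can be illustrated with the sketch below, which fills missing (invisible-view) entries of the scene-flow matrix with a toy singular-value-thresholding loop and then averages each row. The solver choice and all parameter values here are assumptions: the patent states only the nuclear-norm problem, not how to solve it.

```python
import numpy as np

def complete_low_rank(M, observed, tau=0.2, iters=500):
    """Toy solver for: minimize ||X||_*  s.t.  P_Omega(X) = P_Omega(M).
    Alternates singular-value shrinkage with re-imposing the observed entries.
    observed: boolean mask, True where the scene flow was measured (Omega)."""
    X = np.where(observed, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt  # shrink singular values
        X[observed] = M[observed]                # keep P_Omega(X) = P_Omega(M)
    return X

def vertex_motion(X):
    """Step S505: the motion of each vertex is the mean of its row in X."""
    return X.mean(axis=1)
```

On a rank-one matrix with a single unobserved entry, the loop drives the missing value toward the value implied by the low-rank structure, up to a small shrinkage bias controlled by tau.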
Step S403: select accurate vertices from the obtained vertices according to predetermined spatio-temporal constraints to serve as position constraints for the volumetric deformation. In an embodiment of the present invention, the predetermined spatio-temporal constraints are built from the following quantities:
Here P_sil^n(v′_i) is the silhouette error of the estimate: its value is 1 if the pixel obtained by projecting v′_i into the image of camera n at the next time instant lies inside the silhouette, and 0 otherwise; v(i) is the set of cameras in which v_i is visible; N_v is the number of visible cameras; P_z^n(p(v_i), p(v′_i)) computes the ZNCC correlation between the projected positions of v_i and v′_i in the image of camera n; and N_s is the number of direct neighbors of vertex v_i.
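The ZNCC term used in this selection criterion can be written out as follows; the patch extraction around each projected position is assumed, and only the correlation itself is shown.

```python
import numpy as np

def zncc(patch_a, patch_b, eps=1e-9):
    """Zero-mean normalized cross-correlation of two equal-size image patches,
    as used to compare the appearance of v_i and v'_i projected into camera n.
    Returns a score in [-1, 1]; values near 1 indicate matching appearance,
    invariant to affine brightness changes between the patches."""
    a = np.asarray(patch_a, dtype=float)
    b = np.asarray(patch_b, dtype=float)
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / (denom + eps))
```

The affine-brightness invariance is the reason ZNCC is preferred here over a plain sum of squared differences: a vertex that merely moves into a differently lit region still scores high if its local texture matches.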
Step S404: drive the Laplacian volumetric deformation framework with the position constraints to update the dynamic three-dimensional model. Specifically, this includes:
Step S601: set up the following Laplacian volumetric deformation linear system: for each v′_i,
∑_{j ∈ N(i)} w_ij (v′_i − v′_j) = ∑_{j ∈ N(i)} (w_ij / 2) (R_i + R_j) (v_i − v_j)
where N(i) are the neighbors of vertex i, w_ij are the edge weights, and R_i and R_j are rotation matrices, initialized to the identity matrix.
Step S602: define the covariance matrix
C_i = ∑_{j ∈ N(i)} w_ij (v_i − v_j)(v′_i − v′_j)^T.
Performing a singular value decomposition C_i = U_i Σ_i V_i^T gives R_i = V_i U_i^T. If det(R_i) ≤ 0, change the sign of the column of U_i corresponding to the smallest singular value and recompute R_i.
Step S603: if the silhouette error is smaller than a given threshold, update the model; otherwise return to step S601.
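The rotation update of step S602 amounts to fitting a rotation to a covariance matrix via SVD. The sketch below is a minimal illustration; the sign convention R = V Uᵀ is an assumption chosen to be consistent with the determinant correction described in the step.

```python
import numpy as np

def fit_rotation(C):
    """Given a 3x3 covariance matrix C_i, compute its SVD C = U S V^T and
    take R = V U^T. If det(R) <= 0 (a reflection rather than a rotation),
    flip the sign of the column of U belonging to the smallest singular
    value and recompute, which restores det(R) = +1."""
    U, s, Vt = np.linalg.svd(C)
    R = Vt.T @ U.T
    if np.linalg.det(R) <= 0:
        U[:, np.argmin(s)] *= -1.0  # column of the smallest singular value
        R = Vt.T @ U.T
    return R
```

Flipping the column tied to the smallest singular value changes det(R) at the smallest possible cost in the fitting error, which is why that particular column is chosen.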
As a preferred embodiment of the present invention, the static and dynamic capture and reconstruction methods described above may be used together; Fig. 4 is a schematic block diagram of the complete dynamic 3D reconstruction method according to an embodiment of the present invention.
Fig. 5 shows the dynamic 3D model results obtained by applying the proposed method to two long sequences. For each sequence, the first image is an overview placing the models of all time instants together, and the following images show the modeling results at the individual time instants.
An embodiment of the present invention further provides a system for capturing and reconstructing a static three-dimensional model, comprising a plurality of cameras arranged around a circular field and a static 3D model reconstruction device. The cameras capture images of a moving object in the field; the reconstruction device obtains a visual hull model, computes a depth point cloud for each viewing angle from the images of the respective viewing angles, the visual hull model, and the preset constraints, and fuses the obtained depth point clouds of all viewing angles into a static three-dimensional model. For the specific operation of the static 3D model reconstruction device, reference may be made to the embodiment of the static capture and reconstruction method above, which is not repeated here.
An embodiment of the present invention further provides a system for capturing and reconstructing a dynamic three-dimensional model, comprising: a plurality of cameras arranged around a circular field for capturing images of a moving object within the field; a static 3D model acquisition device for obtaining a static three-dimensional model; and a dynamic 3D model reconstruction device for converting the surface model of the static model into a volumetric model and using it as the default scene representation for motion tracking, obtaining the initial three-dimensional motion of the model vertices at the next time instant, selecting accurate vertices from the obtained vertices according to the predetermined spatio-temporal constraints as position constraints for volumetric deformation, and driving the Laplacian volumetric deformation framework with these position constraints to update the dynamic three-dimensional model. For the specific operation of the static and dynamic 3D model reconstruction devices, reference may be made to the embodiments of the capture and reconstruction methods above, which are not repeated here.
Through the present invention, the accuracy and completeness of the reconstructed static 3D shape is ensured. In addition, the invention designs a new 3D motion estimation method based on sparse representation theory and a volumetric-model-based deformation optimization framework, and can therefore obtain high-quality dynamic reconstruction results. Moreover, the invention need not rely on 3D scanners or optical markers, so its cost is low, and it can track the motion of a person wearing arbitrary clothing.
Although embodiments of the present invention have been shown and described, those of ordinary skill in the art will understand that various changes, modifications, substitutions, and variations can be made to these embodiments without departing from the principle and spirit of the present invention; the scope of the invention is defined by the appended claims and their equivalents.
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2010101411826A CN101833786B (en) | 2010-04-06 | 2010-04-06 | Method and system for capturing and rebuilding three-dimensional model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2010101411826A CN101833786B (en) | 2010-04-06 | 2010-04-06 | Method and system for capturing and rebuilding three-dimensional model |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 201110167593 Division CN102222361A (en) | 2010-04-06 | 2010-04-06 | Method and system for capturing and reconstructing 3D model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101833786A CN101833786A (en) | 2010-09-15 |
CN101833786B true CN101833786B (en) | 2011-12-28 |
Family
ID=42717847
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2010101411826A Expired - Fee Related CN101833786B (en) | 2010-04-06 | 2010-04-06 | Method and system for capturing and rebuilding three-dimensional model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101833786B (en) |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102306390B (en) * | 2011-05-18 | 2013-11-06 | 清华大学 | Method and device for capturing movement based on framework and partial interpolation |
AU2011203028B1 (en) * | 2011-06-22 | 2012-03-08 | Microsoft Technology Licensing, Llc | Fully automatic dynamic articulated model calibration |
CN102446366B (en) * | 2011-09-14 | 2013-06-19 | 天津大学 | Time-space jointed multi-view video interpolation and three-dimensional modeling method |
CN102722908B (en) * | 2012-05-25 | 2016-06-08 | 任伟峰 | Method for position and device are put in a kind of object space in three-dimension virtual reality scene |
CN102800127B (en) * | 2012-07-18 | 2014-11-26 | 清华大学 | Light stream optimization based three-dimensional reconstruction method and device |
CN103903300A (en) * | 2012-12-31 | 2014-07-02 | 博世汽车部件(苏州)有限公司 | Object surface height reconstructing method, object surface height reconstructing system, optical character extracting method and optical character extracting system |
CN103927787A (en) * | 2014-04-30 | 2014-07-16 | 南京大学 | Method and device for improving three-dimensional reconstruction precision based on matrix recovery |
CN104599314A (en) * | 2014-06-12 | 2015-05-06 | 深圳奥比中光科技有限公司 | Three-dimensional model reconstruction method and system |
CN105488823B (en) * | 2014-09-16 | 2019-10-18 | 株式会社日立制作所 | CT image reconstruction method, CT image reconstruction device and CT system |
US20160140733A1 (en) * | 2014-11-13 | 2016-05-19 | Futurewei Technologies, Inc. | Method and systems for multi-view high-speed motion capture |
US10127709B2 (en) * | 2014-11-28 | 2018-11-13 | Panasonic Intellectual Property Management Co., Ltd. | Modeling device, three-dimensional model generating device, modeling method, and program |
CN107170037A (en) * | 2016-03-07 | 2017-09-15 | 深圳市鹰眼在线电子科技有限公司 | Real-time three-dimensional point cloud reconstruction method and system based on multiple cameras |
WO2018045532A1 (en) * | 2016-09-08 | 2018-03-15 | 深圳市大富网络技术有限公司 | Method for generating square animation and related device |
US10572720B2 (en) * | 2017-03-01 | 2020-02-25 | Sony Corporation | Virtual reality-based apparatus and method to generate a three dimensional (3D) human face model using image and depth data |
CN107358645B (en) * | 2017-06-08 | 2020-08-11 | 上海交通大学 | Product 3D model reconstruction method and system |
TWI657407B (en) * | 2017-12-07 | 2019-04-21 | 財團法人資訊工業策進會 | Three-dimensional point cloud tracking apparatus and method by recurrent neural network |
CN108769361B (en) * | 2018-04-03 | 2020-10-27 | 华为技术有限公司 | Control method of terminal wallpaper, terminal and computer-readable storage medium |
CN109271893B (en) * | 2018-08-30 | 2021-01-01 | 百度在线网络技术(北京)有限公司 | Method, device, equipment and storage medium for generating simulation point cloud data |
WO2021051220A1 (en) * | 2019-09-16 | 2021-03-25 | 深圳市大疆创新科技有限公司 | Point cloud fusion method, device, and system, and storage medium |
CN112001958B (en) * | 2020-10-28 | 2021-02-02 | 浙江浙能技术研究院有限公司 | Virtual point cloud three-dimensional target detection method based on supervised monocular depth estimation |
WO2022087932A1 (en) * | 2020-10-29 | 2022-05-05 | Huawei Technologies Co., Ltd. | Non-rigid 3d object modeling using scene flow estimation |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6791542B2 (en) * | 2002-06-17 | 2004-09-14 | Mitsubishi Electric Research Laboratories, Inc. | Modeling 3D objects with opacity hulls |
CN100557640C (en) * | 2008-04-28 | 2009-11-04 | 清华大学 | An Interactive Multi-viewpoint 3D Model Reconstruction Method |
CN101650834A (en) * | 2009-07-16 | 2010-02-17 | 上海交通大学 | Three dimensional reconstruction method of human body surface under complex scene |
- 2010-04-06 CN CN2010101411826A patent/CN101833786B/en not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
CN101833786A (en) | 2010-09-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101833786B (en) | Method and system for capturing and rebuilding three-dimensional model | |
CN102222361A (en) | Method and system for capturing and reconstructing 3D model | |
CN108711185B (en) | 3D reconstruction method and device combining rigid motion and non-rigid deformation | |
CN108038905B (en) | Object reconstruction method based on super-pixels | |
CN111882668B (en) | Multi-view three-dimensional object reconstruction method and system | |
Sturm et al. | CopyMe3D: Scanning and printing persons in 3D | |
CN104376552B (en) | Virtual combination method for 3D models and two-dimensional images | |
Ahmed et al. | Dense correspondence finding for parametrization-free animation reconstruction from video | |
CN108053476B (en) | A system and method for measuring human parameters based on segmented three-dimensional reconstruction | |
CN104915978B (en) | Realistic animation generation method based on body-sensing camera Kinect | |
CN101658347B (en) | Method for obtaining dynamic shape of foot model | |
CN103649998A (en) | Method for determining a parameter set designed for determining the pose of a camera and/or for determining a three-dimensional structure of the at least one real object | |
Li et al. | 3d human avatar digitization from a single image | |
Sizintsev et al. | Spatiotemporal stereo and scene flow via stequel matching | |
Wang et al. | TerrainFusion: Real-time digital surface model reconstruction based on monocular SLAM | |
Alsadik et al. | Efficient use of video for 3D modelling of cultural heritage objects | |
Chen et al. | Research on 3D reconstruction based on multiple views | |
Guan et al. | EVI-SAM: Robust, Real-Time, Tightly-Coupled Event-Visual-Inertial State Estimation and 3D Dense Mapping | |
Ke et al. | Towards real-time 3D visualization with multiview RGB camera array | |
CN112132971B (en) | Three-dimensional human modeling method, three-dimensional human modeling device, electronic equipment and storage medium | |
Remondino | 3D reconstruction of static human body with a digital camera | |
Mahmoud et al. | Fast 3d structure from motion with missing points from registration of partial reconstructions | |
Suttasupa et al. | Plane detection for Kinect image sequences | |
JP2009048305A (en) | Shape analysis program and shape analysis apparatus | |
CN105184860A (en) | Method for reconstructing dense three-dimensional structure and motion field of dynamic face simultaneously |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20111228 |