WO2019219014A1 - 基于光影优化的三维几何与本征成份重建方法及装置 (Three-dimensional geometry and intrinsic component reconstruction method and apparatus based on light and shadow optimization) - Google Patents

Three-dimensional geometry and intrinsic component reconstruction method and apparatus based on light and shadow optimization

Info

Publication number
WO2019219014A1
WO2019219014A1 (PCT/CN2019/086892)
Authority
WO
WIPO (PCT)
Prior art keywords
model
dimensional
eigen
vertex
point cloud
Prior art date
Application number
PCT/CN2019/086892
Other languages
English (en)
French (fr)
Inventor
刘烨斌
戴琼海
徐枫
Original Assignee
清华大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 清华大学 (Tsinghua University)
Publication of WO2019219014A1 publication Critical patent/WO2019219014A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/44Morphing

Definitions

  • The present application belongs to the field of computer vision, and particularly relates to a method and apparatus for reconstructing three-dimensional geometry and intrinsic components based on light and shadow optimization.
  • Dynamic object 3D reconstruction is a key issue in the field of computer graphics and computer vision.
  • High-quality dynamic 3D models of objects such as human bodies, animals, faces and hands have broad application prospects and important value in film and entertainment, sports and games, virtual reality and other fields.
  • However, the acquisition of high-quality 3D models usually relies on expensive laser scanners or multi-camera array systems.
  • Although the accuracy is high, there are notable shortcomings. First, the object is required to remain absolutely still during scanning; even slight movement leads to obvious errors in the result. Second, the equipment is costly and hard to bring into ordinary people's daily lives, being typically used only by large companies or state departments. Third, the process is slow: reconstructing a single 3D model often takes at least 10 minutes to several hours, and the cost of reconstructing a dynamic model sequence is even greater.
  • From a technical point of view, some existing reconstruction methods concentrate on first solving the rigid motion of the object to obtain an approximation of it, and then reconstructing the non-rigid surface motion information.
  • Such reconstruction methods, however, require a key-frame three-dimensional model of the object to be obtained in advance.
  • On the other hand, although existing frame-by-frame dynamic-fusion surface reconstruction methods can achieve template-free dynamic 3D reconstruction, they rely only on non-rigid surface deformation, so the robustness of tracking and reconstruction is low.
  • The present method uses real-time non-rigid alignment to fuse, frame by frame, the three-dimensional geometric information of the dynamic object's surface with its surface intrinsic-component information, achieving non-rigid high-precision tracking and fusion of the dynamic object surface.
  • It realizes real-time dynamic 3D geometry and surface intrinsic-component reconstruction from a single depth camera without a first-frame 3D template. Based on the obtained geometry and intrinsic components, free-viewpoint video generation and relighting of the dynamic object can be realized.
  • The present application aims to solve at least one of the technical problems in the related art to some extent.
  • One objective of the present application is to propose a three-dimensional geometry and intrinsic component reconstruction method based on light and shadow optimization, which has the advantages of high robustness, accurate solution, low equipment requirements and broad application prospects.
  • Another objective of the present application is to propose a three-dimensional geometry and intrinsic component reconstruction apparatus based on light and shadow optimization.
  • To this end, an embodiment of the present application provides a three-dimensional geometry and intrinsic component reconstruction method based on light and shadow optimization, including the following steps: capturing a dynamic scene with an RGBD camera to obtain a time series of three-dimensional color point clouds; acquiring matching point pairs between the three-dimensional depth point cloud and the reconstructed-model vertices to obtain a point-pair set; establishing a joint energy function based on intrinsic decomposition from the matching point pairs and the current-view color image, and solving the non-rigid motion position transformation parameters of each vertex on the reconstructed model; solving the energy function to obtain the deformation transformation matrices of the surface model vertices and the intrinsic components on the image; and deforming the geometry of the previous frame's three-dimensional model according to the solution result, completing and updating the geometry and intrinsic components of the current frame model.
  • The three-dimensional geometry and intrinsic component reconstruction method of this embodiment obtains the intrinsic components via the matching point pairs between reconstructed-model vertices, reflecting the real material properties of the object surface so that the influence of external illumination can be removed; this effectively improves the robustness of tracking the deformation of dynamic objects under sparse, single-viewpoint conditions, and, by solving the energy function together with the scene illumination and the reconstructed intrinsic components, achieves accurate, high-precision reconstruction of real-time dynamic 3D scene models, with low equipment requirements and broad application prospects.
  • In addition, the method for reconstructing three-dimensional geometry and intrinsic components based on light and shadow optimization according to the above embodiment may further have the following additional technical features:
  • Further, the energy function is E = λ_d E_d + λ_s E_s + λ_reg E_reg, where E is the total energy term for the motion solution, the unknowns being the non-rigid motion parameters at the current moment and the intrinsic components of the current-frame point cloud; E_d is the depth data term; E_s is the light and shadow optimization term; E_reg is the local rigid motion constraint term; and λ_d, λ_s and λ_reg are the weight coefficients of the corresponding energy terms. The model vertices are driven by the non-rigid motion as v'_i = T_i v_i, where T_i is the deformation matrix acting on vertex v_i, comprising a rotation part R_i and a translation part.
  • Further, the depth data term is expressed as E_d = Σ_{(u_t,v)∈P} |n_{u_t}^T (v' − u_t)|², where v is a vertex on the model surface; v' is the model vertex after the non-rigid deformation, computed from the transformation matrices T_j^t of the j-th deformation node of the t-th frame; u_t is the three-dimensional coordinate point on the depth point cloud corresponding to model vertex v; n_{u_t} is the normal of that point; and P is the set of all corresponding point pairs.
  • Further, the illumination energy term based on intrinsic component decomposition is E_s = Σ_{v'∈V} ||C_t(M_c(v')) − B_t(v')||², where V is the set of all model vertices visible from the current color camera view; M_c is the color camera projection matrix; C_t(M_c(v')) is the color obtained by projecting v' onto the color image under the color camera view at the current time t; and B_t(v') is the rendered color combining the model geometry information and the image intrinsic component properties.
  • Further, the local rigidity constraint term is E_reg = Σ_{j∈N} Σ_{i∈N(j)} ||T_j^t x_i − T_i^t x_i||², where N(j) is the set of deformation nodes adjacent to the j-th deformation node, x_i is the position of node i, and N is the set of all non-rigid deformation nodes.
  • In another aspect, an embodiment of the present application proposes a three-dimensional geometry and intrinsic component reconstruction apparatus based on light and shadow optimization, comprising: a capture module for capturing a dynamic scene with an RGBD camera to obtain a time series of three-dimensional color point clouds; an acquisition module for acquiring matching point pairs between the three-dimensional depth point cloud and the reconstructed-model vertices to obtain a point-pair set, the set containing the three-dimensional coordinate points on the depth point cloud corresponding to the model vertices; a decomposition module for establishing a joint energy function based on intrinsic decomposition from the matching point pairs and the current-view color image, and solving the non-rigid motion position transformation parameters of each vertex on the reconstructed model; a solution module for solving the energy function to obtain the deformation transformation matrices of the surface model vertices and the intrinsic components on the image; and a reconstruction module for deforming the geometry of the previous frame's three-dimensional model according to the solution result, aligning the deformed model with the point cloud captured in the current frame, and fusing the geometry and intrinsic-component information so as to complete and update the geometry and intrinsic components of the current frame model.
  • The three-dimensional geometry and intrinsic component reconstruction apparatus of this embodiment obtains the intrinsic components via the matching point pairs between reconstructed-model vertices, reflecting the real material properties of the object surface and removing the influence of external illumination, which effectively improves the robustness of tracking the deformation of dynamic objects under sparse, single-viewpoint conditions.
  • In addition, the three-dimensional geometry and intrinsic component reconstruction apparatus based on light and shadow optimization according to the above embodiment may further have the following additional technical features:
  • Further, the energy function is E = λ_d E_d + λ_s E_s + λ_reg E_reg, where E is the total energy term for the motion solution, the unknowns being the non-rigid motion parameters at the current moment and the intrinsic components of the current-frame point cloud; E_d is the depth data term; E_s is the light and shadow optimization term; E_reg is the local rigid motion constraint term; and λ_d, λ_s and λ_reg are the weight coefficients of the corresponding energy terms. The model vertices are driven by the non-rigid motion as v'_i = T_i v_i, where T_i is the deformation matrix acting on vertex v_i, comprising a rotation part R_i and a translation part.
  • Further, the depth data term is expressed as E_d = Σ_{(u_t,v)∈P} |n_{u_t}^T (v' − u_t)|², where v is a vertex on the model surface; v' is the model vertex after the non-rigid deformation, computed from the transformation matrices T_j^t of the j-th deformation node of the t-th frame; u_t is the three-dimensional coordinate point on the depth point cloud corresponding to model vertex v; n_{u_t} is the normal of that point; and P is the set of all corresponding point pairs.
  • Further, the illumination energy term based on intrinsic component decomposition is E_s = Σ_{v'∈V} ||C_t(M_c(v')) − B_t(v')||², where V is the set of all model vertices visible from the current color camera view; M_c is the color camera projection matrix; C_t(M_c(v')) is the color obtained by projecting v' onto the color image under the color camera view at the current time t; and B_t(v') is the rendered color combining the model geometry information and the image intrinsic component properties.
  • Further, the local rigidity constraint term is E_reg = Σ_{j∈N} Σ_{i∈N(j)} ||T_j^t x_i − T_i^t x_i||², where N(j) is the set of deformation nodes adjacent to the j-th deformation node, x_i is the position of node i, and N is the set of all non-rigid deformation nodes.
  • FIG. 1 is a flow chart of a method for reconstructing three-dimensional geometry and intrinsic components based on light and shadow optimization according to an embodiment of the present application;
  • FIG. 2 is a schematic structural diagram of a three-dimensional geometry and intrinsic component reconstruction apparatus based on light and shadow optimization according to an embodiment of the present application.
  • FIG. 1 is a flow chart of the method for reconstructing three-dimensional geometry and intrinsic components based on light and shadow optimization according to an embodiment of the present application.
  • As shown in FIG. 1, the method for reconstructing three-dimensional geometry and intrinsic components based on light and shadow optimization includes the following steps:
  • In step S101, the dynamic scene is captured by the RGBD camera to obtain a time series of three-dimensional color point clouds.
  • In an embodiment of the present application, projecting the depth image into three-dimensional space to form a set of three-dimensional point clouds includes: obtaining the intrinsic matrix of the depth camera, and projecting the depth map into three-dimensional space according to the intrinsic matrix.
  • The transformation formula is p(u, v) = d(u, v) K⁻¹ (u, v, 1)^T, where (u, v) are pixel coordinates, d(u, v) is the depth value at pixel (u, v) of the depth image, and K is the depth camera intrinsic matrix.
  • the vertices of the three-dimensional model are projected onto the depth image using a camera projection formula to obtain matching point pairs.
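The back-projection formula above can be sketched as follows (a minimal illustration; the array layout and function name are assumptions, not from the patent):

```python
import numpy as np

def backproject_depth(depth, K):
    """Back-project a depth image into a 3-D point cloud using the depth
    camera intrinsic matrix K, i.e. p(u, v) = d(u, v) * K^-1 * [u, v, 1]^T."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    pix = np.stack([us, vs, np.ones_like(us)], axis=-1).reshape(-1, 3).T  # 3 x N
    rays = np.linalg.inv(K) @ pix            # unit-depth viewing rays
    points = rays * depth.reshape(1, -1)     # scale each ray by its depth value
    return points.T.reshape(h, w, 3)         # H x W x 3 point cloud
```

Each pixel's ray is scaled by its measured depth, so invalid (zero) depth pixels simply map to the origin and can be masked out afterwards.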
  • In step S102, matching point pairs between the three-dimensional depth point cloud and the reconstructed-model vertices are acquired, and a point-pair set is obtained, the set containing the three-dimensional coordinate points on the depth point cloud corresponding to the vertices of the reconstructed model.
  • In an embodiment of the present application, the matching point pairs between the three-dimensional depth point cloud and the reconstructed-model vertices are computed to obtain the point-pair set P, which contains point pairs (u_t, v), where u_t is the three-dimensional coordinate point on the depth point cloud corresponding to model vertex v.
  • the core is to project the vertices of the 3D model onto the depth image using the camera projection formula to obtain matching point pairs.
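This projective data association can be sketched as follows (a hedged illustration; the outlier gate `max_dist` and the rounding convention are assumptions, not from the patent):

```python
import numpy as np

def projective_association(vertices, depth_points, K, depth_shape, max_dist=0.05):
    """Form matching point pairs (u_t, v): project each model vertex into the
    depth image with the camera projection formula and pair it with the
    back-projected depth point at that pixel.  `depth_points` is an H x W x 3
    cloud; `max_dist` (metres) rejects implausible correspondences."""
    h, w = depth_shape
    pairs = []
    for idx, v in enumerate(vertices):
        if v[2] <= 0:                        # vertex behind the camera
            continue
        p = K @ v
        u, r = int(round(p[0] / p[2])), int(round(p[1] / p[2]))
        if 0 <= u < w and 0 <= r < h:
            u_t = depth_points[r, u]         # candidate corresponding 3-D point
            if np.linalg.norm(u_t - v) < max_dist:
                pairs.append((idx, u_t))
    return pairs
```

Rebuilding the pairs every iteration, as in projective ICP, lets the correspondences improve as the deformed model moves closer to the depth observation.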
  • step S103 a joint energy function based on the eigen-decomposition is established according to the matching point pair and the current view color image, and the non-rigid motion position transformation parameters of each vertex on the reconstructed model are solved.
  • In an embodiment of the present application, the joint energy function based on intrinsic decomposition is E = λ_d E_d + λ_s E_s + λ_reg E_reg.
  • E is the total energy term for the motion solution; the unknowns are the non-rigid motion parameters at the current moment and the intrinsic components of the current-frame point cloud.
  • E_d is the depth data term, used to solve the non-rigid surface motion. It ensures that the deformed scene model matches the current depth point-cloud observation as closely as possible; when the deformed model is far from the depth observation, this energy is large.
  • E_s is the light and shadow optimization term, used to solve the current intrinsic components.
  • It requires that the color image rendered from the scene illumination, the model geometry and the model intrinsic components stay consistent with the actually captured color image; when the rendered image differs greatly from the captured image, this energy term is large.
  • E_reg is a local rigid motion constraint term acting on the non-rigid motion.
  • It keeps the non-rigid driving effects of adjacent vertices on the model as consistent as possible, constraining the non-rigid motion of the model surface to be locally rigid and suppressing large deformations of local regions.
  • λ_d, λ_s and λ_reg are the weight coefficients of the corresponding energy terms.
  • The depth data term is expressed as E_d = Σ_{(u_t,v)∈P} |n_{u_t}^T (v' − u_t)|².
  • v is a vertex on the model surface, and v' is the model vertex after the non-rigid deformation, computed from the transformation matrices T_j^t of the j-th deformation node of the t-th frame, which are the unknowns the optimization solves for.
  • u_t is the three-dimensional coordinate point on the depth point cloud corresponding to model vertex v, and n_{u_t} is its normal; P is the set of all corresponding point pairs.
  • This energy term ensures that the deformed scene model matches the current depth point-cloud observation as closely as possible. When the deformed model is far from the depth observation, the energy is large.
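The point-to-plane residual above can be sketched as follows (a minimal illustration of the term's structure; the data layout is an assumption):

```python
import numpy as np

def depth_data_energy(pairs, normals):
    """E_d = sum over point pairs of |n_{u_t}^T (v' - u_t)|^2, where v' is the
    deformed model vertex, u_t the corresponding depth point and n_{u_t} its
    normal.  `pairs` is a list of (v_prime, u_t) arrays."""
    e = 0.0
    for (v_prime, u_t), n in zip(pairs, normals):
        r = float(n @ (v_prime - u_t))   # signed point-to-plane distance
        e += r * r
    return e
```

Using the point-to-plane distance rather than the point-to-point distance lets vertices slide along the observed surface, which is why this form is standard for depth alignment.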
  • The illumination energy term based on intrinsic component decomposition is E_s = Σ_{v'∈V} ||C_t(M_c(v')) − B_t(v')||².
  • V is the set of all model vertices visible from the current color camera view; M_c is the color camera projection matrix; C_t(M_c(v')) is the color obtained by projecting v' onto the color image under the color camera view at the current time t.
  • B_t(v') is the color corresponding to v' in the image rendered with a traditional rendering pipeline, combining the model geometry information and the image intrinsic component properties.
  • This energy term assumes uniform scene illumination, so that the color image rendered from the model geometry and the model intrinsic components stays consistent with the actually captured color image. When the rendered image differs greatly from the captured image, this energy term is large.
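The structure of this term can be sketched as follows (a hedged illustration; `project`, `C_t` and `B_t` are assumed callables standing in for the projection M_c, the captured image lookup and the rendered color, which the patent does not specify as an API):

```python
import numpy as np

def shading_energy(visible_vertices, project, C_t, B_t):
    """E_s = sum over visible vertices of ||C_t(M_c(v')) - B_t(v')||^2: the
    observed color at the vertex's projection versus the color rendered from
    geometry, lighting and intrinsic components."""
    e = 0.0
    for v in visible_vertices:
        u, w = project(v)                # pixel coordinates of M_c(v')
        diff = C_t(u, w) - B_t(v)        # observed minus rendered color
        e += float(diff @ diff)
    return e
```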
  • The local rigidity constraint term is E_reg = Σ_{j∈N} Σ_{i∈N(j)} ||T_j^t x_i − T_i^t x_i||².
  • N(j) is the set of deformation nodes adjacent to the j-th deformation node, x_i is the position of node i, and N is the set of all non-rigid deformation nodes.
  • This constraint keeps the non-rigid driving effects of adjacent vertices on the model as consistent as possible.
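The regularizer can be sketched as follows (a hedged as-rigid-as-possible form assumed from the description of adjacent deformation nodes; dictionary-based node storage is an illustration choice):

```python
import numpy as np

def rigidity_energy(node_transforms, node_positions, neighbors):
    """Local rigidity term E_reg = sum_j sum_{i in N(j)} ||T_j x_i - T_i x_i||^2,
    penalising neighbouring deformation nodes that drive the surface
    inconsistently.  Transforms are 4x4 homogeneous matrices, x_i are the
    3-D node positions, `neighbors` maps node id j to the set N(j)."""
    def apply(T, x):
        return T[:3, :3] @ x + T[:3, 3]   # rotate then translate
    e = 0.0
    for j, nbrs in neighbors.items():
        for i in nbrs:
            d = apply(node_transforms[j], node_positions[i]) - \
                apply(node_transforms[i], node_positions[i])
            e += float(d @ d)
    return e
```

When all node transforms agree the term vanishes, so it only penalises local disagreement, not global motion.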
  • In step S104, the energy function is solved to obtain the deformation transformation matrix of each surface model vertex and the intrinsic components on the image.
  • The non-rigid motion position transformation parameters of all vertices on the reconstructed model are solved jointly; the information finally obtained is the transformation matrix of every 3D model vertex.
  • To meet the requirement of a fast linear solve, the method of this embodiment approximates the deformation equation with the exponential mapping T_i^t = exp(ξ_i) T_i^{t-1} ≈ (I + ξ_i) T_i^{t-1}.
  • T_i^{t-1} is the accumulated transformation matrix of model vertex v_i up to the previous frame, a known quantity; I is the 4×4 identity matrix; ξ_i is the 4×4 twist matrix built from the six motion parameters x = (v1, v2, v3, w_x, w_y, w_z)^T, which are the unknowns solved for each vertex.
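The exponential-map linearization above can be sketched as follows (the twist-matrix layout for x = (v1, v2, v3, w_x, w_y, w_z)^T is the standard se(3) convention, assumed here rather than quoted from the patent):

```python
import numpy as np

def twist_hat(x):
    """Build the 4x4 twist matrix from the six unknown motion parameters
    x = (v1, v2, v3, w_x, w_y, w_z)^T of one vertex/node."""
    v1, v2, v3, wx, wy, wz = x
    return np.array([[0.0, -wz,  wy, v1],
                     [ wz, 0.0, -wx, v2],
                     [-wy,  wx, 0.0, v3],
                     [0.0, 0.0, 0.0, 0.0]])

def linearized_transform(x, T_prev):
    """First-order approximation T = exp(xi) T_prev ~= (I + xi) T_prev, with
    T_prev the known accumulated transform up to the previous frame and I the
    4x4 identity.  The residuals become linear in x, enabling a fast solve."""
    return (np.eye(4) + twist_hat(x)) @ T_prev
```

Because frame-to-frame motion is small, the first-order approximation is accurate enough, and each Gauss-Newton iteration reduces to a sparse linear system in the stacked x vectors.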
  • In step S105, the geometry of the previous frame's three-dimensional model is deformed according to the solution result, the deformed model is aligned with the point cloud captured in the current frame, and the geometry and intrinsic components of the current frame model are completed and updated, realizing the reconstruction of three-dimensional geometry and intrinsic components.
  • In an embodiment of the present application, the depth image is used to update and complete the aligned three-dimensional model, and the color image is used to update and complete its intrinsic components.
  • On one hand, the newly obtained depth information is fused into the 3D model, updating the positions of surface vertices or adding new vertices so that the model better matches the current depth image; on the other hand, the solved scene illumination is used to decompose the color image, yielding the intrinsic component information of the model under the current view, which is finally fused into the model's intrinsic components.
  • Both update processes are adaptive: once the model has fused enough valid depth and intrinsic-component information, the updates of the scene model and intrinsic components stop, and only the dynamic scene illumination and the model's non-rigid motion continue to be solved, which further improves the robustness of the real-time dynamic reconstruction system.
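The adaptive update can be sketched per vertex as a saturating running average (a hedged illustration; the saturation threshold `max_weight` and the per-observation weight of 1 are assumptions, not values from the patent):

```python
import numpy as np

def fuse_vertex(vertex, albedo, weight, obs_vertex, obs_albedo, max_weight=64.0):
    """Adaptive fusion of one model vertex: blend new depth/intrinsic
    observations into the model until the accumulated weight saturates,
    after which the model is frozen and only lighting and non-rigid motion
    keep being solved."""
    if weight >= max_weight:
        return vertex, albedo, weight            # enough evidence: stop updating
    w_new = weight + 1.0
    vertex = (vertex * weight + obs_vertex) / w_new   # running average of geometry
    albedo = (albedo * weight + obs_albedo) / w_new   # running average of intrinsics
    return vertex, albedo, w_new
```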
  • The three-dimensional geometry and intrinsic component reconstruction method of this embodiment obtains the intrinsic components via the matching point pairs between reconstructed-model vertices, reflecting the real material properties of the object surface and removing the influence of external illumination, which effectively improves the robustness of tracking the deformation of dynamic objects under sparse, single-viewpoint conditions, solves accurately, has low equipment requirements and has broad application prospects.
  • FIG. 2 is a schematic structural diagram of a three-dimensional geometric and intrinsic component reconstruction apparatus based on light and shadow optimization according to an embodiment of the present application.
  • As shown in FIG. 2, the three-dimensional geometry and intrinsic component reconstruction apparatus 10 based on light and shadow optimization includes a capture module 100, an acquisition module 200, a decomposition module 300, a solution module 400, and a reconstruction module 500.
  • The capture module 100 is configured to capture a dynamic scene with an RGBD camera to obtain a time series of three-dimensional color point clouds.
  • the obtaining module 200 is configured to acquire a matching point pair between the three-dimensional depth point cloud and the reconstructed model vertex, and obtain a point pair set, wherein the point pair set includes a three-dimensional coordinate point on the three-dimensional depth point cloud corresponding to the vertex of the reconstructed model.
  • The decomposition module 300 is configured to establish a joint energy function based on intrinsic decomposition from the matching point pairs and the current-view color image, and to solve the non-rigid motion position transformation parameters of each vertex on the reconstructed model.
  • The solution module 400 is configured to solve the energy function to obtain the deformation transformation matrices of the surface model vertices and the intrinsic components on the image.
  • The reconstruction module 500 is configured to deform the geometry of the previous frame's three-dimensional model according to the solution result, align the deformed model with the point cloud captured in the current frame, and fuse the geometry and intrinsic-component information so as to complete and update the geometry and intrinsic components of the current frame model, realizing the reconstruction of 3D geometry and intrinsic components.
  • Further, the energy function is E = λ_d E_d + λ_s E_s + λ_reg E_reg, where E is the total energy term for the motion solution; E_d is the depth data term; E_s is the light and shadow optimization term; E_reg is the local rigid motion constraint term; and λ_d, λ_s and λ_reg are the weight coefficients of the corresponding energy terms. The model vertices are driven as v'_i = T_i v_i, where T_i is the deformation matrix acting on vertex v_i, comprising a rotation part and a translation part.
  • Further, the depth data term is expressed as E_d = Σ_{(u_t,v)∈P} |n_{u_t}^T (v' − u_t)|², where v is a vertex on the model surface; v' is the model vertex after the non-rigid deformation, computed from the transformation matrices T_j^t of the j-th deformation node of the t-th frame; u_t is the three-dimensional coordinate point on the depth point cloud corresponding to model vertex v; n_{u_t} is the normal of that point; and P is the set of all corresponding point pairs.
  • Further, the illumination energy term based on intrinsic component decomposition is E_s = Σ_{v'∈V} ||C_t(M_c(v')) − B_t(v')||², where V is the set of all model vertices visible from the current color camera view; M_c is the color camera projection matrix; C_t(M_c(v')) is the color obtained by projecting v' onto the color image under the color camera view at the current time t; and B_t(v') is the rendered color combining the model geometry information and the image intrinsic component properties.
  • Further, the local rigidity constraint term is E_reg = Σ_{j∈N} Σ_{i∈N(j)} ||T_j^t x_i − T_i^t x_i||², where N(j) is the set of deformation nodes adjacent to the j-th deformation node, x_i is the position of node i, and N is the set of all non-rigid deformation nodes.
  • The three-dimensional geometry and intrinsic component reconstruction apparatus of this embodiment obtains the intrinsic components via the matching point pairs between reconstructed-model vertices, reflecting the real material properties of the object surface and removing the influence of external illumination, which effectively improves the robustness of tracking the deformation of dynamic objects under sparse, single-viewpoint conditions, solves accurately, has low equipment requirements and has broad application prospects.
  • The terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
  • features defining “first” or “second” may include at least one of the features, either explicitly or implicitly.
  • the meaning of "plurality” is at least two, such as two, three, etc., unless specifically defined otherwise.
  • Unless otherwise explicitly specified and defined, the terms "mounted", "connected", "coupled", "fixed" and the like shall be understood broadly: a connection may be fixed, detachable or integral; mechanical or electrical; direct, or indirect through an intermediate medium; and may be an internal communication between two elements or an interaction between two elements. For those skilled in the art, the specific meanings of the above terms in the present application can be understood on a case-by-case basis.
  • Unless otherwise explicitly specified and defined, a first feature being "on" or "below" a second feature may mean that the first and second features are in direct contact, or that they are in indirect contact through an intermediate medium.
  • A first feature being "on", "above" or "over" a second feature may mean that the first feature is directly or obliquely above the second feature, or merely that the first feature is at a higher level than the second feature.
  • A first feature being "below", "under" or "beneath" a second feature may mean that the first feature is directly or obliquely below the second feature, or merely that the first feature is at a lower level than the second feature.
  • Reference to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present application.
  • the schematic representation of the above terms is not necessarily directed to the same embodiment or example.
  • the particular features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
  • The various embodiments or examples described in the specification, as well as the features of the various embodiments or examples, may be combined.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

A method and apparatus for three-dimensional geometry and intrinsic component reconstruction based on light and shadow optimization, the method comprising: capturing a dynamic scene with an RGBD camera to obtain a time series of three-dimensional color point clouds (S101); acquiring matching point pairs between the three-dimensional depth point cloud and the reconstructed-model vertices to obtain a point-pair set (S102); establishing a joint energy function based on intrinsic decomposition from the matching point pairs and the current-view color image, and solving the non-rigid motion position transformation parameters of each vertex on the reconstructed model (S103); solving the energy function to obtain the deformation transformation matrices of the surface model vertices and the intrinsic components on the image (S104); and deforming the geometry of the previous frame's three-dimensional model according to the solution result, completing and updating the geometry and intrinsic components of the current frame model, and realizing three-dimensional geometry and intrinsic component reconstruction (S105). The method improves the robustness of tracking the deformation of dynamic objects under sparse, single-viewpoint conditions, solves accurately, has low equipment requirements, and has broad prospects.

Description

Three-dimensional geometry and intrinsic component reconstruction method and apparatus based on light and shadow optimization

Cross-reference to related applications

This application claims priority to Chinese Patent Application No. 201810460082.6, filed by 清华大学 (Tsinghua University) on May 15, 2018 and entitled "Three-dimensional geometry and intrinsic component reconstruction method and apparatus based on light and shadow optimization".

Technical field

The present application belongs to the field of computer vision, and particularly relates to a method and apparatus for reconstructing three-dimensional geometry and intrinsic components based on light and shadow optimization.

Background

Dynamic object 3D reconstruction is a key problem in computer graphics and computer vision. High-quality dynamic 3D models of objects such as human bodies, animals, faces and hands have broad application prospects and important value in film and entertainment, sports and games, virtual reality and other fields. However, acquiring high-quality 3D models usually relies on expensive laser scanners or multi-camera array systems. Although the accuracy is high, there are notable shortcomings: first, the object must remain absolutely still during scanning, and even slight movement causes obvious errors in the result; second, the equipment is costly and hard to bring into ordinary people's daily lives, being typically used only by large companies or state departments; third, the process is slow, often taking at least 10 minutes to several hours to reconstruct one 3D model, and the cost of reconstructing a dynamic model sequence is even greater.

From a technical point of view, some existing reconstruction methods concentrate on first solving the rigid motion of the object to obtain an approximation of it and then reconstructing the non-rigid surface motion, but such methods require a key-frame 3D model of the object in advance. On the other hand, although existing frame-by-frame dynamic-fusion surface reconstruction methods can achieve template-free dynamic 3D reconstruction, they use only non-rigid surface deformation, so the robustness of tracking and reconstruction is low. The present method uses real-time non-rigid alignment to fuse, frame by frame, the surface 3D geometry of the dynamic object with its surface intrinsic components, achieving non-rigid high-precision tracking and fusion of the dynamic object surface and realizing real-time dynamic 3D geometry and surface intrinsic-component reconstruction from a single depth camera without a first-frame 3D template. Based on the obtained geometry and intrinsic components, free-viewpoint video generation and relighting of the dynamic object can be realized.
Summary

The present application aims to solve at least one of the technical problems in the related art to some extent.

To this end, one objective of the present application is to propose a three-dimensional geometry and intrinsic component reconstruction method based on light and shadow optimization, which has the advantages of high robustness, accurate solution, low equipment requirements and broad application prospects.

Another objective of the present application is to propose a three-dimensional geometry and intrinsic component reconstruction apparatus based on light and shadow optimization.

To achieve the above objectives, an embodiment of one aspect of the present application proposes a three-dimensional geometry and intrinsic component reconstruction method based on light and shadow optimization, comprising the following steps: capturing a dynamic scene with an RGBD camera to obtain a time series of three-dimensional color point clouds; acquiring matching point pairs between the three-dimensional depth point cloud and the reconstructed-model vertices to obtain a point-pair set, the set containing the three-dimensional coordinate points on the depth point cloud corresponding to the vertices of the reconstructed model; establishing a joint energy function based on intrinsic decomposition from the matching point pairs and the current-view color image, and solving the non-rigid motion position transformation parameters of each vertex on the reconstructed model; solving the energy function to obtain the deformation transformation matrices of the surface model vertices and the intrinsic components on the image; and deforming the geometry of the previous frame's three-dimensional model according to the solution result, aligning the deformed model with the point cloud captured in the current frame, and fusing the geometry and intrinsic-component information so as to complete and update the geometry and intrinsic components of the current frame model, realizing three-dimensional geometry and intrinsic component reconstruction.

The three-dimensional geometry and intrinsic component reconstruction method of this embodiment obtains the intrinsic components via the matching point pairs between reconstructed-model vertices, reflecting the real material properties of the object surface and removing the influence of external illumination, which effectively improves the robustness of tracking the deformation of dynamic objects under sparse, single-viewpoint conditions; by solving the energy function and combining the scene illumination with the reconstructed scene intrinsic components, it achieves high-precision reconstruction of real-time dynamic 3D scene models, with the advantages of accurate solution, low equipment requirements and broad application prospects.

In addition, the three-dimensional geometry and intrinsic component reconstruction method based on light and shadow optimization according to the above embodiment may further have the following additional technical features:
Further, in an embodiment of the present application, the energy function is

E = λ_d E_d + λ_s E_s + λ_reg E_reg

where E is the total energy term for the motion solution; the unknowns are the non-rigid motion parameters at the current moment and the intrinsic components of the current-frame point cloud; E_d is the depth data term; E_s is the light and shadow optimization term; E_reg is the local rigid motion constraint term; and λ_d, λ_s and λ_reg are the weight coefficients of the corresponding energy terms.

The model vertices are driven by the non-rigid motion as

v'_i = T_i v_i

where T_i is the deformation matrix acting on vertex v_i, comprising a rotation part R_i and a translation part.
Further, in an embodiment of the present application, the depth data term is expressed as

E_d = Σ_{(u_t,v)∈P} |n_{u_t}^T (v' − u_t)|²

where v is a vertex on the model surface and v' is the model vertex after the non-rigid deformation, computed from the transformation matrices T_j^t of the j-th deformation node of the t-th frame; u_t is the three-dimensional coordinate point on the depth point cloud corresponding to model vertex v; n_{u_t} is the normal of that point; and P is the set of all corresponding point pairs.
Further, in an embodiment of the present application, the illumination energy term based on intrinsic component decomposition is

E_s = Σ_{v'∈V} ||C_t(M_c(v')) − B_t(v')||²

where V is the set of all model vertices visible from the current color camera view, M_c is the color camera projection matrix, C_t(M_c(v')) is the color obtained by projecting v' onto the color image under the color camera view at the current time t, and B_t(v') is the rendered color combining the model geometry information and the image intrinsic component properties.
Further, in an embodiment of the present application, the local rigidity constraint term is

E_reg = Σ_{j∈N} Σ_{i∈N(j)} ||T_j^t x_i − T_i^t x_i||²

where N(j) is the set of deformation nodes adjacent to the j-th deformation node, x_i is the position of node i, and N is the set of all non-rigid deformation nodes.
To achieve the above objectives, an embodiment of another aspect of the present application proposes a three-dimensional geometry and intrinsic component reconstruction apparatus based on light and shadow optimization, comprising: a capture module for capturing a dynamic scene with an RGBD camera to obtain a time series of three-dimensional color point clouds; an acquisition module for acquiring matching point pairs between the three-dimensional depth point cloud and the reconstructed-model vertices to obtain a point-pair set, the set containing the three-dimensional coordinate points on the depth point cloud corresponding to the vertices of the reconstructed model; a decomposition module for establishing a joint energy function based on intrinsic decomposition from the matching point pairs and the current-view color image, and solving the non-rigid motion position transformation parameters of each vertex on the reconstructed model; a solution module for solving the energy function to obtain the deformation transformation matrices of the surface model vertices and the intrinsic components on the image; and a reconstruction module for deforming the geometry of the previous frame's three-dimensional model according to the solution result, aligning the deformed model with the point cloud captured in the current frame, and fusing the geometry and intrinsic-component information so as to complete and update the geometry and intrinsic components of the current frame model, realizing three-dimensional geometry and intrinsic component reconstruction.

The three-dimensional geometry and intrinsic component reconstruction apparatus of this embodiment obtains the intrinsic components via the matching point pairs between reconstructed-model vertices, reflecting the real material properties of the object surface and removing the influence of external illumination, which effectively improves the robustness of tracking the deformation of dynamic objects under sparse, single-viewpoint conditions; by solving the energy function and combining the scene illumination with the reconstructed scene intrinsic components, it achieves high-precision reconstruction of real-time dynamic 3D scene models, with the advantages of accurate solution, low equipment requirements and broad application prospects.

In addition, the three-dimensional geometry and intrinsic component reconstruction apparatus based on light and shadow optimization according to the above embodiment may further have the following additional technical features:
Further, in an embodiment of the present application, the energy function is

E = λ_d E_d + λ_s E_s + λ_reg E_reg

where E is the total energy term for the motion solution; the unknowns are the non-rigid motion parameters at the current moment and the intrinsic components of the current-frame point cloud; E_d is the depth data term; E_s is the light and shadow optimization term; E_reg is the local rigid motion constraint term; and λ_d, λ_s and λ_reg are the weight coefficients of the corresponding energy terms. The model vertices are driven by the non-rigid motion as v'_i = T_i v_i, where T_i is the deformation matrix acting on vertex v_i, comprising a rotation part R_i and a translation part.

Further, in an embodiment of the present application, the depth data term is expressed as

E_d = Σ_{(u_t,v)∈P} |n_{u_t}^T (v' − u_t)|²

where v is a vertex on the model surface and v' is the model vertex after the non-rigid deformation, computed from the transformation matrices T_j^t of the j-th deformation node of the t-th frame; u_t is the three-dimensional coordinate point on the depth point cloud corresponding to model vertex v; n_{u_t} is the normal of that point; and P is the set of all corresponding point pairs.

Further, in an embodiment of the present application, the illumination energy term based on intrinsic component decomposition is

E_s = Σ_{v'∈V} ||C_t(M_c(v')) − B_t(v')||²

where V is the set of all model vertices visible from the current color camera view, M_c is the color camera projection matrix, C_t(M_c(v')) is the color obtained by projecting v' onto the color image under the color camera view at the current time t, and B_t(v') is the rendered color combining the model geometry information and the image intrinsic component properties.

Further, in an embodiment of the present application, the local rigidity constraint term is

E_reg = Σ_{j∈N} Σ_{i∈N(j)} ||T_j^t x_i − T_i^t x_i||²

where N(j) is the set of deformation nodes adjacent to the j-th deformation node, x_i is the position of node i, and N is the set of all non-rigid deformation nodes.
Additional aspects and advantages of the present application will be given in part in the following description, will become apparent in part from the following description, or will be learned through practice of the present application.

Brief description of the drawings

The above and/or additional aspects and advantages of the present application will become apparent and easy to understand from the following description of embodiments taken in conjunction with the accompanying drawings, in which:

FIG. 1 is a flow chart of a three-dimensional geometry and intrinsic component reconstruction method based on light and shadow optimization according to an embodiment of the present application;

FIG. 2 is a schematic structural diagram of a three-dimensional geometry and intrinsic component reconstruction apparatus based on light and shadow optimization according to an embodiment of the present application.

Detailed description

Embodiments of the present application are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary, intended to explain the present application, and should not be construed as limiting it.

The three-dimensional geometry and intrinsic component reconstruction method and apparatus based on light and shadow optimization proposed according to embodiments of the present application are described below with reference to the drawings; the method is described first.
FIG. 1 is a flow chart of the three-dimensional geometry and intrinsic component reconstruction method based on light and shadow optimization according to an embodiment of the present application.

As shown in FIG. 1, the method includes the following steps:

In step S101, the dynamic scene is captured by the RGBD camera to obtain a time series of three-dimensional color point clouds.

In an embodiment of the present application, projecting the depth image into three-dimensional space to form a set of three-dimensional point clouds includes: obtaining the intrinsic matrix of the depth camera, and projecting the depth map into three-dimensional space according to the intrinsic matrix. The transformation formula is

p(u, v) = d(u, v) K⁻¹ (u, v, 1)^T

where (u, v) are pixel coordinates, d(u, v) is the depth value at pixel (u, v) of the depth image, and K is the depth camera intrinsic matrix.

To obtain matching point pairs, the vertices of the three-dimensional model are projected onto the depth image using the camera projection formula.

In step S102, matching point pairs between the three-dimensional depth point cloud and the reconstructed-model vertices are acquired, and a point-pair set is obtained, the set containing the three-dimensional coordinate points on the depth point cloud corresponding to the vertices of the reconstructed model.

In an embodiment of the present application, the matching point pairs between the three-dimensional depth point cloud and the reconstructed-model vertices are computed to obtain the point-pair set P, which contains point pairs (u_t, v), where u_t is the three-dimensional coordinate point on the depth point cloud corresponding to model vertex v. The core is to project the vertices of the 3D model onto the depth image using the camera projection formula to obtain the matching point pairs.
In step S103, a joint energy function based on intrinsic decomposition is established from the matching point pairs and the current-view color image, and the non-rigid motion position transformation parameters of each vertex on the reconstructed model are solved.

In an embodiment of the present application, the joint energy function based on intrinsic decomposition is

E = λ_d E_d + λ_s E_s + λ_reg E_reg

where E is the total energy term for the motion solution, and the unknowns are the non-rigid motion parameters at the current moment and the intrinsic components of the current-frame point cloud. E_d is the depth data term, used to solve the non-rigid surface motion; it ensures that the deformed scene model matches the current depth point-cloud observation as closely as possible, and when the deformed model is far from the depth observation, this energy is large. E_s is the light and shadow optimization term, used to solve the current intrinsic components; it requires that the color image rendered from the scene illumination, the model geometry and the model intrinsic components stay consistent with the actually captured color image, and when the rendered image differs greatly from the captured image, this term is large. E_reg is a local rigid motion constraint term acting on the non-rigid motion; it keeps the non-rigid driving effects of adjacent vertices on the model as consistent as possible, constraining the non-rigid motion of the model surface to be locally rigid and suppressing large deformations of local regions. λ_d, λ_s and λ_reg are the weight coefficients of the corresponding energy terms.
The depth data term is expressed as:
E_d = Σ_{(u_t, v)∈P} | n_{u_t}^T (v′ − u_t) |²
where v is a vertex on the model surface, and v′ is the model vertex after non-rigid deformation, computed as:
v′ = Σ_{j∈N(v)} w_j T_j^t v
where T_j^t is the transformation matrix of the j-th deformation node at frame t (w_j being the blending weight of node j at vertex v), and is an unknown the optimization must solve for; u_t is the three-dimensional point on the depth point cloud corresponding to model vertex v, n_{u_t} is its normal, and P is the set of all corresponding point pairs. This energy term ensures that the deformed scene model matches the current depth point-cloud observation as closely as possible; when the deformed model is far from the depth observation, this term becomes large.
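A minimal sketch of the point-to-plane depth data term E_d = Σ |n^T (v′ − u_t)|², assuming each pair carries the deformed vertex, the matched depth point and its normal (the data layout and names are ours, not the application's):

```python
import numpy as np

def depth_energy(pairs):
    """Point-to-plane residual summed over matched pairs:
    each pair is (v_deformed, u_t, n_t)."""
    return sum(float(np.dot(n, v - u) ** 2) for v, u, n in pairs)

# One pair: deformed vertex 0.1 m in front of its matched depth point,
# whose normal points along +z, so the residual is (0.1)^2 = 0.01.
pairs = [(np.array([0.0, 0.0, 2.1]),
          np.array([0.0, 0.0, 2.0]),
          np.array([0.0, 0.0, 1.0]))]
E = depth_energy(pairs)
```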
The lighting energy term based on intrinsic-component decomposition is:
E_s = Σ_{v′∈V} || C_t(M_c(v′)) − B_t(v′) ||²
where V is the set of all model vertices visible from the current color-camera viewpoint, M_c is the projection matrix of the color camera, and C_t(M_c(v′)) is the color obtained by projecting v′ onto the color image under the color-camera viewpoint at the current time t. B_t(v′) is the rendered color corresponding to v′ in the color image rendered with a conventional rendering pipeline by combining the model geometry and the image intrinsic components. This energy term assumes uniform scene lighting and makes the color image rendered from the model geometry and the model intrinsic components consistent with the actually captured color image; when the rendered image differs greatly from the actually captured color image, this term becomes large.
The local rigidity constraint term is:
E_reg = Σ_{j∈N} Σ_{k∈N(j)} || T_j^t p_k − T_k^t p_k ||²
where N(j) denotes the set of deformation nodes adjacent to the j-th deformation node, p_k is the position of node k, and N denotes the set of all non-rigid deformation nodes. This constraint term ensures that the non-rigid driving effects of neighboring vertices on the model are as consistent as possible.
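The local rigidity constraint can be illustrated as below; the dictionary-based node graph and 4×4 homogeneous transforms are our own assumptions for this sketch:

```python
import numpy as np

def rigidity_energy(node_transforms, node_positions, neighbors):
    """E_reg = sum_j sum_{k in N(j)} || T_j p_k - T_k p_k ||^2:
    neighbouring deformation nodes should transform each other's
    positions consistently (locally rigid motion)."""
    e = 0.0
    for j, nbrs in neighbors.items():
        for k in nbrs:
            p = np.append(node_positions[k], 1.0)   # homogeneous node position
            diff = (node_transforms[j] @ p - node_transforms[k] @ p)[:3]
            e += float(diff @ diff)
    return e

# Two nodes one metre apart; node 1's transform slides it 1 m in x
# while node 0 stays put, so both directed pairs contribute 1.0.
node_positions = {0: np.array([0.0, 0.0, 0.0]), 1: np.array([1.0, 0.0, 0.0])}
neighbors = {0: [1], 1: [0]}
T_shift = np.eye(4)
T_shift[0, 3] = 1.0
e = rigidity_energy({0: np.eye(4), 1: T_shift}, node_positions, neighbors)
```

When all node transforms agree, the energy is exactly zero, which is the locally rigid case the term rewards.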
In step S104, the energy function is solved to obtain the deformation transformation matrices of the surface model vertices and the intrinsic components of each item in the image.
In an embodiment of the present application, the non-rigid motion position transformation parameters of every vertex on the reconstructed model are solved jointly; the information finally obtained is the transformation matrix of every three-dimensional model vertex. To meet the requirement of fast linear solving, the method of this embodiment approximates the deformation equation using the exponential-map method as follows:
T_{v_i}^t ≈ (I + ξ̂) T̃_{v_i}
where T̃_{v_i} is the accumulated transformation matrix of model vertex v_i up to the previous frame and is a known quantity, and I is the 4×4 identity matrix; the twist matrix is
ξ̂ = [ 0, −w_z, w_y, v_1 ; w_z, 0, −w_x, v_2 ; −w_y, w_x, 0, v_3 ; 0, 0, 0, 0 ]
Letting ṽ_i = T̃_{v_i} v_i, i.e. the model vertex after the previous frame's transformation, the transformation gives:
v_i′ = (I + ξ̂) ṽ_i
For each vertex, the unknown parameters to be solved are therefore the six-dimensional transformation parameters x = (v_1, v_2, v_3, w_x, w_y, w_z)^T.
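The exponential-map linearization above can be sketched as follows; the function names are our own, and the small-angle update mirrors the six parameters x = (v1, v2, v3, wx, wy, wz):

```python
import numpy as np

def twist_matrix(x):
    """4x4 twist matrix xi_hat built from x = (v1, v2, v3, wx, wy, wz)."""
    v1, v2, v3, wx, wy, wz = x
    return np.array([[0.0, -wz,  wy, v1],
                     [ wz, 0.0, -wx, v2],
                     [-wy,  wx, 0.0, v3],
                     [0.0, 0.0, 0.0, 0.0]])

def apply_update(x, v_prev):
    """Apply the linearized increment T ≈ (I + xi_hat) to a homogeneous
    vertex already carrying the accumulated transform of the last frame."""
    return (np.eye(4) + twist_matrix(x)) @ v_prev

# Pure translation by 0.1 along x, then a small rotation about z.
moved = apply_update((0.1, 0.0, 0.0, 0.0, 0.0, 0.0),
                     np.array([1.0, 2.0, 3.0, 1.0]))
rotated = apply_update((0.0, 0.0, 0.0, 0.0, 0.0, 0.01),
                       np.array([1.0, 0.0, 0.0, 1.0]))
```

Because the residual is linear in x, stacking one such 6-vector per deformation node yields the fast linear solve the text refers to.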
In step S105, the geometry of the previous frame's three-dimensional model is deformed according to the solution result so that the deformed model is aligned with the point cloud captured at the current frame, and geometry and intrinsic-component information are fused according to the intrinsic components, so as to complete and update the geometry and intrinsic components of the current-frame model, achieving three-dimensional geometry and intrinsic-component reconstruction.
In an embodiment of the present application, the depth image is used to update and complete the aligned three-dimensional model, and the color image is used to update and complete its intrinsic components. On the one hand, the newly obtained depth information is fused into the three-dimensional model, updating the positions of its surface vertices or adding new vertices so that the model better conforms to the current depth image; on the other hand, the solved scene lighting is used to decompose the color image, obtaining the intrinsic-component information of the model under the current viewpoint, which is finally fused into the model's intrinsic components. Both update processes are adaptive: once the model has fused enough valid depth and intrinsic-component information, the updating of the scene model and intrinsic components stops, and only the dynamic scene lighting and the non-rigid model motion continue to be solved, which further improves the robustness of this real-time dynamic reconstruction system.
According to the shading-optimization-based three-dimensional geometry and intrinsic-component reconstruction method of the embodiment of the present application, intrinsic components are obtained through the matching point pairs of the reconstructed model vertices, reflecting the true material properties of the object surface; the influence of external lighting can be removed, effectively improving the robustness of tracking the deformation of dynamic objects under sparse and single-viewpoint conditions; and by solving the energy function and combining the scene lighting with the reconstructed scene intrinsic components, high-precision reconstruction of the three-dimensional model of a real-time dynamic scene is achieved. The method has the advantage of accurate solving, imposes low requirements on equipment, and has broad application prospects.
Next, the apparatus for reconstructing three-dimensional geometry and intrinsic components based on shading optimization according to an embodiment of the present application is described with reference to the accompanying drawings.
FIG. 2 is a schematic structural diagram of the apparatus for reconstructing three-dimensional geometry and intrinsic components based on shading optimization according to an embodiment of the present application.
As shown in FIG. 2, the apparatus 10 for reconstructing three-dimensional geometry and intrinsic components based on shading optimization includes: a capture module 100, an acquisition module 200, a decomposition module 300, a solving module 400 and a reconstruction module 500.
The capture module 100 is configured to capture a dynamic scene with an RGBD camera to obtain a time sequence of three-dimensional color point clouds. The acquisition module 200 is configured to obtain matching point pairs between the three-dimensional depth point cloud and the reconstructed model vertices and form a point-pair set, the set containing the three-dimensional points on the depth point cloud corresponding to the vertices of the reconstructed model. The decomposition module 300 is configured to establish a joint energy function based on intrinsic decomposition from the matching point pairs and the color image of the current viewpoint, and to solve the non-rigid motion position transformation parameters of every vertex on the reconstructed model. The solving module 400 is configured to solve the energy function to obtain the deformation transformation matrices of the surface model vertices and the intrinsic components of each item in the image. The reconstruction module 500 is configured to deform the geometry of the previous frame's three-dimensional model according to the solution result so that the deformed model is aligned with the point cloud captured at the current frame, and to fuse geometry and intrinsic-component information according to the intrinsic components so as to complete and update the geometry and intrinsic components of the current-frame model, achieving three-dimensional geometry and intrinsic-component reconstruction.
Further, in an embodiment of the present application, the energy function is:
E = λ_d E_d + λ_s E_s + λ_reg E_reg
where E is the total energy of the motion solve, the unknowns being the non-rigid motion parameters to be solved at the current time and the intrinsic component of the current-frame point cloud; E_d is the depth data term, E_s is the shading optimization term, E_reg is the local rigidity constraint term, and λ_d, λ_s and λ_reg are the weight coefficients of the corresponding energy terms;
and the model vertices are driven according to the non-rigid motion, the computation formula being:
v_i′ = T_{v_i} v_i,  n_i′ = R_{v_i} n_i
where T_{v_i} is the deformation matrix acting on vertex v_i, the deformation matrix comprising a rotation part and a translation part, and R_{v_i} is the rotation part.
Further, in an embodiment of the present application, the depth data term is expressed as:
E_d = Σ_{(u_t, v)∈P} | n_{u_t}^T (v′ − u_t) |²
where v is a vertex on the model surface, and v′ is the model vertex after non-rigid deformation, computed as:
v′ = Σ_{j∈N(v)} w_j T_j^t v
where T_j^t is the transformation matrix of the j-th deformation node at frame t (w_j being the blending weight of node j at vertex v), u_t is the three-dimensional point on the depth point cloud corresponding to model vertex v, n_{u_t} is the normal of that three-dimensional point, and P is the set of all corresponding point pairs.
Further, in an embodiment of the present application, the lighting energy term based on intrinsic-component decomposition is:
E_s = Σ_{v′∈V} || C_t(M_c(v′)) − B_t(v′) ||²
where V is the set of all model vertices visible from the current color-camera viewpoint, M_c is the projection matrix of the color camera, C_t(M_c(v′)) is the color obtained by projecting v′ onto the color image under the color-camera viewpoint at the current time t, and B_t(v′) is the rendered color corresponding to v′, obtained by combining the model geometry with the image intrinsic components.
Further, in an embodiment of the present application, the local rigidity constraint term is:
E_reg = Σ_{j∈N} Σ_{k∈N(j)} || T_j^t p_k − T_k^t p_k ||²
where N(j) denotes the set of deformation nodes adjacent to the j-th deformation node, p_k is the position of node k, and N denotes the set of all non-rigid deformation nodes.
It should be noted that the foregoing explanations of the embodiment of the method for reconstructing three-dimensional geometry and intrinsic components based on shading optimization also apply to the apparatus of this embodiment and are not repeated here.
According to the shading-optimization-based three-dimensional geometry and intrinsic-component reconstruction apparatus of the embodiment of the present application, intrinsic components are obtained through the matching point pairs of the reconstructed model vertices, reflecting the true material properties of the object surface; the influence of external lighting can be removed, effectively improving the robustness of tracking the deformation of dynamic objects under sparse and single-viewpoint conditions; and by solving the energy function and combining the scene lighting with the reconstructed scene intrinsic components, high-precision reconstruction of the three-dimensional model of a real-time dynamic scene is achieved. The apparatus has the advantage of accurate solving, imposes low requirements on equipment, and has broad application prospects.
In the description of the present application, it should be understood that the orientations or positional relationships indicated by the terms "center", "longitudinal", "transverse", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", "axial", "radial", "circumferential" and the like are based on the orientations or positional relationships shown in the drawings, are merely for convenience of describing the present application and simplifying the description, and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation; they therefore cannot be construed as limiting the present application.
In addition, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality of" means at least two, for example two or three, unless otherwise specifically defined.
In the present application, unless otherwise expressly specified and defined, the terms "mounted", "connected", "coupled", "fixed" and the like should be understood broadly; for example, a connection may be a fixed connection, a detachable connection or an integral connection; it may be a mechanical connection or an electrical connection; it may be a direct connection or an indirect connection through an intermediate medium, and may be an internal communication between two elements or an interaction between two elements, unless otherwise expressly defined. For those of ordinary skill in the art, the specific meanings of the above terms in the present application can be understood according to the specific circumstances.
In the present application, unless otherwise expressly specified and defined, a first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or that they are in indirect contact through an intermediate medium. Moreover, the first feature being "over", "above" or "on top of" the second feature may mean that the first feature is directly above or obliquely above the second feature, or merely that the first feature is at a higher level than the second feature; the first feature being "under", "below" or "beneath" the second feature may mean that the first feature is directly below or obliquely below the second feature, or merely that the first feature is at a lower level than the second feature.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", "some examples" and the like means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of these terms do not necessarily refer to the same embodiment or example, and the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, without mutual contradiction, those skilled in the art may combine the different embodiments or examples described in this specification and the features thereof.
Although the embodiments of the present application have been shown and described above, it can be understood that the above embodiments are exemplary and cannot be construed as limiting the present application; those of ordinary skill in the art may make changes, modifications, substitutions and variations to the above embodiments within the scope of the present application.

Claims (10)

  1. A method for reconstructing three-dimensional geometry and intrinsic components based on shading optimization, comprising the following steps:
    capturing a dynamic scene with an RGBD camera to obtain a time sequence of three-dimensional color point clouds;
    obtaining matching point pairs between a three-dimensional depth point cloud and vertices of a reconstructed model to form a point-pair set, wherein the point-pair set contains three-dimensional points on the three-dimensional depth point cloud corresponding to the vertices of the reconstructed model;
    establishing a joint energy function based on intrinsic decomposition from the matching point pairs and a color image of the current viewpoint, and solving non-rigid motion position transformation parameters of every vertex on the reconstructed model;
    solving the energy function to obtain deformation transformation matrices of surface model vertices and the intrinsic components of each item in the image; and
    deforming the geometry of the previous frame's three-dimensional model according to the solution result so that the deformed model is aligned with the point cloud captured at the current frame, and fusing geometry and intrinsic-component information according to the intrinsic components so as to complete and update the geometry and intrinsic components of the current-frame model, thereby achieving three-dimensional geometry and intrinsic-component reconstruction.
  2. The method for reconstructing three-dimensional geometry and intrinsic components based on shading optimization according to claim 1, wherein the energy function is:
    E = λ_d E_d + λ_s E_s + λ_reg E_reg
    where E is the total energy of the motion solve, the unknowns being the non-rigid motion parameters to be solved at the current time and the intrinsic component of the current-frame point cloud; E_d is the depth data term, E_s is the shading optimization term, E_reg is the local rigidity constraint term, and λ_d, λ_s and λ_reg are the weight coefficients of the corresponding energy terms;
    and the model vertices are driven according to the non-rigid motion, the computation formula being:
    v_i′ = T_{v_i} v_i,  n_i′ = R_{v_i} n_i
    where T_{v_i} is the deformation matrix acting on vertex v_i, the deformation matrix comprising a rotation part and a translation part, and R_{v_i} is the rotation part.
  3. The method for reconstructing three-dimensional geometry and intrinsic components based on shading optimization according to claim 2, wherein the depth data term is expressed as:
    E_d = Σ_{(u_t, v)∈P} | n_{u_t}^T (v′ − u_t) |²
    where v is a vertex on the model surface, and v′ is the model vertex after non-rigid deformation, computed as:
    v′ = Σ_{j∈N(v)} w_j T_j^t v
    where T_j^t is the transformation matrix of the j-th deformation node at frame t (w_j being the blending weight of node j at vertex v), u_t is the three-dimensional point on the depth point cloud corresponding to model vertex v, n_{u_t} is the normal of that three-dimensional point, and P is the set of all corresponding point pairs.
  4. The method for reconstructing three-dimensional geometry and intrinsic components based on shading optimization according to claim 1, wherein the lighting energy term based on intrinsic-component decomposition is:
    E_s = Σ_{v′∈V} || C_t(M_c(v′)) − B_t(v′) ||²
    where V is the set of all model vertices visible from the current color-camera viewpoint, M_c is the projection matrix of the color camera, C_t(M_c(v′)) is the color obtained by projecting v′ onto the color image under the color-camera viewpoint at the current time t, and B_t(v′) is the rendered color corresponding to v′, obtained by combining the model geometry with the image intrinsic components.
  5. The method for reconstructing three-dimensional geometry and intrinsic components based on shading optimization according to claim 1, wherein the local rigidity constraint term is:
    E_reg = Σ_{j∈N} Σ_{k∈N(j)} || T_j^t p_k − T_k^t p_k ||²
    where N(j) denotes the set of deformation nodes adjacent to the j-th deformation node, p_k is the position of node k, and N denotes the set of all non-rigid deformation nodes.
  6. An apparatus for reconstructing three-dimensional geometry and intrinsic components based on shading optimization, comprising:
    a capture module, configured to capture a dynamic scene with an RGBD camera to obtain a time sequence of three-dimensional color point clouds;
    an acquisition module, configured to obtain matching point pairs between a three-dimensional depth point cloud and vertices of a reconstructed model to form a point-pair set, wherein the point-pair set contains three-dimensional points on the three-dimensional depth point cloud corresponding to the vertices of the reconstructed model;
    a decomposition module, configured to establish a joint energy function based on intrinsic decomposition from the matching point pairs and a color image of the current viewpoint, and to solve non-rigid motion position transformation parameters of every vertex on the reconstructed model;
    a solving module, configured to solve the energy function to obtain deformation transformation matrices of surface model vertices and the intrinsic components of each item in the image; and
    a reconstruction module, configured to deform the geometry of the previous frame's three-dimensional model according to the solution result so that the deformed model is aligned with the point cloud captured at the current frame, and to fuse geometry and intrinsic-component information according to the intrinsic components so as to complete and update the geometry and intrinsic components of the current-frame model, thereby achieving three-dimensional geometry and intrinsic-component reconstruction.
  7. The apparatus for reconstructing three-dimensional geometry and intrinsic components based on shading optimization according to claim 6, wherein the energy function is:
    E = λ_d E_d + λ_s E_s + λ_reg E_reg
    where E is the total energy of the motion solve, the unknowns being the non-rigid motion parameters to be solved at the current time and the intrinsic component of the current-frame point cloud; E_d is the depth data term, E_s is the shading optimization term, E_reg is the local rigidity constraint term, and λ_d, λ_s and λ_reg are the weight coefficients of the corresponding energy terms;
    and the model vertices are driven according to the non-rigid motion, the computation formula being:
    v_i′ = T_{v_i} v_i,  n_i′ = R_{v_i} n_i
    where T_{v_i} is the deformation matrix acting on vertex v_i, the deformation matrix comprising a rotation part and a translation part, and R_{v_i} is the rotation part.
  8. The apparatus for reconstructing three-dimensional geometry and intrinsic components based on shading optimization according to claim 7, wherein the depth data term is expressed as:
    E_d = Σ_{(u_t, v)∈P} | n_{u_t}^T (v′ − u_t) |²
    where v is a vertex on the model surface, and v′ is the model vertex after non-rigid deformation, computed as:
    v′ = Σ_{j∈N(v)} w_j T_j^t v
    where T_j^t is the transformation matrix of the j-th deformation node at frame t (w_j being the blending weight of node j at vertex v), u_t is the three-dimensional point on the depth point cloud corresponding to model vertex v, n_{u_t} is the normal of that three-dimensional point, and P is the set of all corresponding point pairs.
  9. The apparatus for reconstructing three-dimensional geometry and intrinsic components based on shading optimization according to claim 6, wherein the lighting energy term based on intrinsic-component decomposition is:
    E_s = Σ_{v′∈V} || C_t(M_c(v′)) − B_t(v′) ||²
    where V is the set of all model vertices visible from the current color-camera viewpoint, M_c is the projection matrix of the color camera, C_t(M_c(v′)) is the color obtained by projecting v′ onto the color image under the color-camera viewpoint at the current time t, and B_t(v′) is the rendered color corresponding to v′, obtained by combining the model geometry with the image intrinsic components.
  10. The apparatus for reconstructing three-dimensional geometry and intrinsic components based on shading optimization according to claim 6, wherein the local rigidity constraint term is:
    E_reg = Σ_{j∈N} Σ_{k∈N(j)} || T_j^t p_k − T_k^t p_k ||²
    where N(j) denotes the set of deformation nodes adjacent to the j-th deformation node, p_k is the position of node k, and N denotes the set of all non-rigid deformation nodes.
PCT/CN2019/086892 2018-05-15 2019-05-14 基于光影优化的三维几何与本征成份重建方法及装置 WO2019219014A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810460082.6A CN108898658A (zh) 2018-05-15 2018-05-15 基于光影优化的三维几何与本征成份重建方法及装置
CN201810460082.6 2018-05-15

Publications (1)

Publication Number Publication Date
WO2019219014A1 true WO2019219014A1 (zh) 2019-11-21

Family

ID=64343022

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/086892 WO2019219014A1 (zh) 2018-05-15 2019-05-14 基于光影优化的三维几何与本征成份重建方法及装置

Country Status (2)

Country Link
CN (1) CN108898658A (zh)
WO (1) WO2019219014A1 (zh)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108898658A (zh) * 2018-05-15 2018-11-27 清华大学 基于光影优化的三维几何与本征成份重建方法及装置
CN109859255B (zh) * 2019-01-31 2023-08-04 天津大学 大动作运动物体的多视角非同时采集与重建方法
CN111932670B (zh) * 2020-08-13 2021-09-28 北京未澜科技有限公司 基于单个rgbd相机的三维人体自画像重建方法及系统
CN112734899B (zh) * 2021-01-20 2022-12-02 清华大学 物体表面局部自遮挡阴影的建模方法和装置
CN112802186B (zh) * 2021-01-27 2022-06-24 清华大学 基于二值化特征编码匹配的动态场景实时三维重建方法
CN113689539B (zh) * 2021-07-06 2024-04-19 清华大学 基于隐式光流场的动态场景实时三维重建方法
CN113932730B (zh) * 2021-09-07 2022-08-02 华中科技大学 一种曲面板材形状的检测装置
CN114155256B (zh) * 2021-10-21 2024-05-24 北京航空航天大学 一种使用rgbd相机跟踪柔性物体形变的方法及系统
CN114782566B (zh) * 2021-12-21 2023-03-10 首都医科大学附属北京友谊医院 Ct数据重建方法和装置、电子设备和计算机可读存储介质
CN117351482B (zh) * 2023-12-05 2024-02-27 国网山西省电力公司电力科学研究院 一种用于电力视觉识别模型的数据集增广方法、系统、电子设备和存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090141968A1 (en) * 2007-12-03 2009-06-04 Siemens Corporate Research, Inc. Coronary reconstruction from rotational x-ray projection sequence
US20100322525A1 (en) * 2009-06-19 2010-12-23 Microsoft Corporation Image Labeling Using Multi-Scale Processing
CN103198523A (zh) * 2013-04-26 2013-07-10 清华大学 一种基于多深度图的非刚体三维重建方法及系统
US20140218362A1 (en) * 2013-02-05 2014-08-07 Carestream Health, Inc. Monte carlo modeling of field angle-dependent spectra for radiographic imaging systems
CN108898658A (zh) * 2018-05-15 2018-11-27 清华大学 基于光影优化的三维几何与本征成份重建方法及装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090141968A1 (en) * 2007-12-03 2009-06-04 Siemens Corporate Research, Inc. Coronary reconstruction from rotational x-ray projection sequence
US20100322525A1 (en) * 2009-06-19 2010-12-23 Microsoft Corporation Image Labeling Using Multi-Scale Processing
US20140218362A1 (en) * 2013-02-05 2014-08-07 Carestream Health, Inc. Monte carlo modeling of field angle-dependent spectra for radiographic imaging systems
CN103198523A (zh) * 2013-04-26 2013-07-10 清华大学 一种基于多深度图的非刚体三维重建方法及系统
CN108898658A (zh) * 2018-05-15 2018-11-27 清华大学 基于光影优化的三维几何与本征成份重建方法及装置

Also Published As

Publication number Publication date
CN108898658A (zh) 2018-11-27

Similar Documents

Publication Publication Date Title
WO2019219014A1 (zh) 基于光影优化的三维几何与本征成份重建方法及装置
WO2019219012A1 (zh) 联合刚性运动和非刚性形变的三维重建方法及装置
WO2019219013A1 (zh) 联合优化人体体态与外观模型的三维重建方法及系统
CN108154550B (zh) 基于rgbd相机的人脸实时三维重建方法
CN108053437B (zh) 基于体态的三维模型获取方法及装置
CN110335343B (zh) 基于rgbd单视角图像人体三维重建方法及装置
CN110728671B (zh) 基于视觉的无纹理场景的稠密重建方法
US20170330375A1 (en) Data Processing Method and Apparatus
CN110288712B (zh) 室内场景的稀疏多视角三维重建方法
CN108475327A (zh) 三维采集与渲染
CN108629829B (zh) 一种球幕相机与深度相机结合的三维建模方法和系统
CN109919911A (zh) 基于多视角光度立体的移动三维重建方法
CN106683163B (zh) 一种视频监控的成像方法及系统
CN104599317A (zh) 一种实现3d扫描建模功能的移动终端及方法
WO2018032841A1 (zh) 绘制三维图像的方法及其设备、系统
CN108010125A (zh) 基于线结构光和图像信息的真实尺度三维重建系统及方法
CN113379901A (zh) 利用大众自拍全景数据建立房屋实景三维的方法及系统
CN111047678B (zh) 一种三维人脸采集装置和方法
CN108564654B (zh) 三维大场景的画面进入方式
Gava et al. Dense scene reconstruction from spherical light fields
CN112102504A (zh) 一种基于混合现实的三维场景和二维图像混合方法
CN114935316B (zh) 基于光学跟踪与单目视觉的标准深度图像生成方法
CN109003294A (zh) 一种虚实空间位置注册与精准匹配方法
WO2021042961A1 (zh) 定制人脸混合表情模型自动生成方法及装置
TW201509360A (zh) 單鏡頭內視鏡立體視覺化系統及其方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19803211

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19803211

Country of ref document: EP

Kind code of ref document: A1