WO2020224144A1 - Clothing deformation method based on human body Laplacian deformation - Google Patents

Clothing deformation method based on human body Laplacian deformation (一种基于人体拉普拉斯变形的服装变形方法) Download PDF

Info

Publication number
WO2020224144A1
Authority
WO
WIPO (PCT)
Prior art keywords
vertices
human body
clothing
mesh
vertex
Prior art date
Application number
PCT/CN2019/105295
Other languages
English (en)
French (fr)
Inventor
骆立康
王露苑
金小刚
刘郴
黄宁海
邵泽希
Original Assignee
上海凌笛数码科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海凌笛数码科技有限公司 filed Critical 上海凌笛数码科技有限公司
Priority to EP19928268.2A priority Critical patent/EP3958217A4/en
Publication of WO2020224144A1 publication Critical patent/WO2020224144A1/zh
Priority to US17/520,593 priority patent/US11704871B2/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/10 - Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G06T 17/20 - Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T 17/205 - Re-meshing
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/006 - Mixed reality
    • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30196 - Human being; Person
    • G06T 2210/00 - Indexing scheme for image generation or computer graphics
    • G06T 2210/16 - Cloth
    • G06T 2219/00 - Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/20 - Indexing scheme for editing of 3D models
    • G06T 2219/2021 - Shape modification

Definitions

  • the invention relates to the technical field of three-dimensional mesh deformation, in particular to a clothing deformation method based on the Laplace deformation of the human body.
  • the existing virtual fitting applications mainly include virtual fitting websites, virtual fitting mirrors and mobile virtual fitting systems.
  • One of the more common ones is the three-dimensional virtual fitting mirror.
  • the "magic fitting mirror" developed by the Russian technology company AR DOOR and the fully interactive "Active Lab" virtual fitting mirror invented by the Japanese company Digital Fashion are typical real-world applications of virtual fitting mirrors.
  • the advantage of the virtual fitting website is simple operation and low equipment requirements; the disadvantage is that adjusting a few simple parameters cannot produce a result fully consistent with the user's body, and the simulation of the model's facial features and of the fabric texture is lacking, so realism and depth are poor.
  • the working principle of the virtual fitting mirror is to capture the user's image, body dimensions, and motion information with a depth camera (such as Kinect), reconstruct a three-dimensional human body model consistent with the user's body shape from the dimension parameters, and dress the displayed clothing model onto that body model; the user can then drive the body model to move and view the virtual fitting result.
  • the advantage of this approach is that the user can complete the fitting with simple actions and gestures, can see the try-on effect on themselves, and the system offers good real-time performance and interactivity.
  • the disadvantage is that the rendered result differs noticeably from reality, and the clothing model often does not fit the human body well. This places higher requirements on clothing mesh deformation: the clothing must fit different human bodies and deform naturally as the body moves.
  • the clothing deformation method proposed by the present invention achieves this goal by driving the deformation of the clothing mesh through the deformation of the human body.
  • in computer graphics and related fields, 3D model deformation is a very important topic. It refers to changing the global shape of a model while keeping its local details as unchanged as possible. The local details of a 3D model are its intrinsic properties, so these properties need to be preserved while the model is deformed.
  • the clothing deformation methods in existing virtual fitting systems are basically driven by skeleton skinning. These methods require an animator to rig the clothing to the human skeleton in advance, which incurs large labor and time costs. In addition, the clothing skinning obtained directly from skeleton binding can easily cause excessive stretching of the clothing mesh in actual virtual fitting because of unreasonable skinning weights, which affects the realism of the fitting.
  • the Laplace clothing deformation method that the present invention relies on avoids this problem well.
  • the differential coordinates in the Laplacian deformation algorithm are a kind of local coordinate that represents the intrinsic properties of the local geometric details of the 3D model.
  • in a Laplacian-based deformation method, only some vertices on the three-dimensional model need to be selected as the fixed region, called the deformation handle; other vertices form the region to be deformed. By moving the deformation handle, the unselected vertices are updated through the Laplacian system to achieve a smooth deformation effect.
  • in the present invention, the discretized vertices of the human body mesh are used as the deformation handle; the user drives part of the joints of the human body model to move, producing a deformation of the body model, which then drives the clothing mesh worn on the body to deform through the Laplacian system.
  • the purpose of the present invention is to provide a clothing deformation method based on the Laplace deformation of the human body.
  • the method provided by the present invention is simple, efficient, and has good real-time performance, making it an important technique for real-time virtual fitting interaction. It solves the problem that clothing mesh deformation otherwise requires manual preprocessing, and it overcomes the tendency of existing deformation algorithms to cause excessive local stretching: the clothing deforms smoothly under the drive of the human body, and its local features are still preserved after deformation.
  • a clothing deformation method based on the Laplace deformation of the human body including the following steps:
  • the input three-dimensional human body and clothing model meshes are often unevenly distributed: some regions of the mesh are dense and others sparse. Applying such a non-uniform mesh directly to mesh deformation would greatly degrade the deformation result, so the human body and clothing models need some optimization in the preprocessing stage to make them uniform.
  • in step (2), the non-uniform human body mesh M_B and clothing mesh M_C input in step (1) are discretized, i.e., only the vertex information of the input meshes is taken, yielding the original vertex sets V_B and V_C. During discretization, the distances and topological connections between all vertices of the original mesh data are recorded so that the mapping can be restored in step (7).
  • the discretized human body vertex set V_B and clothing vertex set V_C are voxelized, and the space they occupy is decomposed into n×n voxels, each voxel cube having radius d.
  • taking the human body mesh vertex set V_B as an example, for the i-th voxel, suppose the space it covers contains m human body mesh vertices; the m mesh vertices contained in this voxel are aggregated into a single vertex.
  • based on the original topological connections retained in step (2), edge connections are added between the vertices of the simplified vertex set as the new topological relation, to be used when constructing the Laplacian matrix in step (4).
  • the Laplacian operator matrix L of the human body and clothing model is constructed. Since the human body and the clothing model are two independent models, all vertices on the two models have their respective topological connections, so they are separated in topological structure. At the same time, due to the need to realize the deformation of the clothing mesh driven by the human body mesh, the discrete vertex sets of the two models need to be processed as a whole when constructing the Laplacian matrix. This requires consideration of the topological information and geometric information of the 3D model at the same time, so in this step, a geometric Laplacian matrix needs to be constructed.
  • the positions of all the vertices of the vertex set V can be expressed as a vector F of dimension n.
  • the Laplacian matrix is an n×n matrix L. Therefore, the Laplacian operation on the discrete human body and clothing models is to multiply the Laplacian operator matrix by the vertex position vector, L × F.
  • the Laplacian matrix is a sparse matrix, and its non-zero entries are assigned in a way similar to the adjacency matrix of the vertices: if vertices i and j are connected by an edge, the corresponding entry is the non-zero weight a_ij = w_ij (i ≠ j).
  • the diagonal element a_ii of the Laplacian matrix equals the number of vertices connected to vertex i by an edge.
  • the inverse matrix is solved by preprocessing.
  • the core of Laplace deformation is to transform the coordinates of vertices from Euclidean space to differential coordinate space. To keep the local details of the clothing model unchanged, it is necessary to keep the local differential coordinates unchanged after deformation.
  • the whole deformation process is as follows:
  • the first step is to calculate the differential coordinates of each vertex of the vertex set V:
  • Δ represents the differential coordinates (Laplacian coordinates) of the vertices, with three components corresponding to the three axes of the local differential coordinate space.
  • the second step is to move some of the vertices on the human body model and use these vertices as the deformation handle to obtain the new Euclidean coordinates of the vertices on the deformation handle:
  • C represents the set of all vertices on the handle.
  • u_i represents the new position of the i-th vertex on the handle, and v′_i represents the new position of the i-th vertex.
  • the third step is to calculate the positions of the remaining vertices of the vertex set V according to the differential coordinates and the new position of the vertex on the handle using the least square method:
  • V' represents the new position vector of all vertices.
  • the fourth step is to simplify the optimization equation of the third step, which transforms the optimization problem into solving the linear system A V′ = b.
  • A^T A is positive definite and can be decomposed into the product of two triangular matrices, A^T A = R^T R.
  • the value of the inverse matrix R^{-1} is computed in advance, and the positions x of all vertices of the final vertex set are solved through intermediate variables.
  • the human body mesh is edited as the control vertices, which drives the clothing mesh to undergo real-time smooth deformation.
  • the motion information input by the customer is used as the new positions of the control-handle vertices of the human body model, and solving the above least-squares problem yields the new positions of the deformed human body and clothing models.
  • the simplified, deformed human body and clothing meshes are mapped back to the mesh space of the original resolution according to the originally recorded distances and topological connections, to obtain the final deformed human body and clothing meshes.
  • a simplified human body mesh vertex set V′_B and a clothing mesh vertex set V′_C are obtained through voxelization and aggregation.
  • the new positions of the human body mesh vertices at the original resolution can be calculated from the new positions of the deformed simplified human body mesh vertices, while at the same time the original topological connections are restored and edge connections are added between the restored vertices.
  • the present invention provides a clothing deformation method based on human body Laplacian deformation, which can simulate in real time how clothing deforms with the motion of the human body during virtual fitting, which is specifically reflected in:
  • the clothing mesh is deformed by Laplacian deformation using the human body as the control vertices; only the Laplacian matrix and the inverse matrix need to be solved in advance to realize real-time deformation of the clothing mesh;
  • Figure 1 is an effect diagram of two female models of different body types wearing T-shirts and trousers provided by the present invention
  • Figure 2 is an effect diagram of two male models of different body types wearing T-shirts and shorts provided by the present invention

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Architecture (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A clothing deformation method based on human body Laplacian deformation. The clothing deformation method comprises the following steps: (1) input the human body and clothing meshes; (2) discretize the non-uniform human body and clothing meshes input in step (1); (3) aggregate all discretized mesh vertices to reduce their number and form a uniform discrete vertex set; (4) construct the Laplacian equation of the human body and clothing meshes; (5) solve the inverse matrix in a preprocessing step; (6) according to the Laplacian matrix, edit the human body mesh as control vertices to drive real-time deformation of the clothing mesh; (7) map the deformed simplified mesh back to the mesh space at the original resolution to obtain the final deformed human body and clothing meshes. Taking the preservation of local mesh features as the basic constraint, the method drives clothing deformation through human body deformation, achieving real-time, efficient clothing mesh deformation for virtual fitting.

Description

Clothing deformation method based on human body Laplacian deformation
Technical Field
The present invention relates to the technical field of three-dimensional mesh deformation, and in particular to a clothing deformation method based on human body Laplacian deformation.
Background Art
Existing virtual fitting applications mainly include virtual fitting websites, virtual fitting mirrors, and mobile virtual fitting systems, among which the three-dimensional virtual fitting mirror is the most common. The "magic fitting mirror" developed by the Russian technology company AR DOOR and the fully interactive "Active Lab" virtual fitting mirror invented by the Japanese company Digital Fashion are typical real-world applications of virtual fitting mirrors.
The advantages of virtual fitting websites are simple operation and low equipment requirements; the disadvantages are that adjusting a few simple parameters cannot produce a result that exactly matches the user's figure, and the simulation of the model's facial features and of the fabric texture is lacking, so realism and depth are poor. A virtual fitting mirror works by capturing the user's image, body measurements, and motion information with a depth camera (such as Kinect), reconstructing a three-dimensional human body model that matches the user's body shape from the measurement parameters, and dressing the displayed clothing model onto that body model; the user can then drive the body model to move and view the virtual fitting result. The advantage of this approach is that the user can complete the fitting with simple actions and gestures, see the try-on effect on themselves, and the system offers good real-time performance and interactivity. The disadvantage is that the rendered result differs noticeably from reality and the clothing model often does not fit the human body well. This places high demands on clothing mesh deformation: the clothing must fit different human bodies and deform naturally as the body moves. The clothing deformation method proposed by the present invention achieves this goal by driving the deformation of the clothing mesh through the deformation of the human body.
In computer graphics and related fields, 3D model deformation is an important topic. It refers to changing the global shape of a model while keeping its local details as unchanged as possible. The local details of a 3D model are its intrinsic properties, so these properties must be preserved while the model is deformed. Clothing deformation methods in existing virtual fitting systems are essentially all driven by skeleton skinning. These methods require an animator to rig the clothing to the human skeleton in advance, which incurs substantial labor and time costs. In addition, the clothing skinning obtained directly from skeleton binding easily suffers from excessive local stretching of the clothing mesh in actual virtual fitting because of unreasonable skinning weights, which undermines the realism of the fitting. The Laplacian clothing deformation method on which the present invention relies avoids this problem well.
The differential coordinates in the Laplacian deformation algorithm are local coordinates that represent the intrinsic properties of the local geometric details of a 3D model. A Laplacian-based deformation method only needs to select some vertices on the 3D model as the fixed region, called the deformation handle; other vertices form the region to be deformed. By moving the deformation handle, the remaining unselected vertices are updated through the Laplacian system, achieving a smooth deformation effect.
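For illustration only (this formula is not part of the original text): under the convention described in step (4) below, with diagonal entries equal to the vertex degree and, as an assumed uniform choice, off-diagonal weights w_ij = -1, the differential coordinate of a vertex reduces to the familiar umbrella operator

$$\delta_i = (L\,V)_i = \deg(i)\,v_i - \sum_{j \in N(i)} v_j = \deg(i)\Big(v_i - \frac{1}{\deg(i)}\sum_{j \in N(i)} v_j\Big),$$

where N(i) is the set of vertices sharing an edge with vertex i. The differential coordinate thus measures how far a vertex deviates from the average of its neighbors, which is exactly the local detail that the deformation tries to preserve.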
In the present invention, the discretized vertices of the human body mesh serve as the deformation handle: the user drives some joints of the body model to move, producing a deformation of the body model, which then drives the clothing mesh worn on the body to deform through the Laplacian system.
Summary of the Invention
The purpose of the present invention is to provide a clothing deformation method based on human body Laplacian deformation. The method is simple, efficient, and has good real-time performance, making it an important technique for real-time virtual fitting interaction. It removes the need for manual preprocessing of the clothing mesh deformation and overcomes the tendency of existing deformation algorithms to cause excessive local stretching: driven by the human body, the clothing deforms smoothly while its local features are preserved after deformation.
A clothing deformation method based on human body Laplacian deformation comprises the following steps:
(1) Input the human body and clothing polygon mesh models;
(2) Discretize the non-uniform human body and clothing mesh models input in step (1);
(3) Cluster all discretized mesh vertices to reduce the vertex count and form a uniform discrete vertex set;
(4) Construct the Laplacian matrix of the human body and clothing models;
(5) Solve the inverse matrix in a preprocessing step;
(6) Edit the human body mesh as control vertices to drive real-time smooth deformation of the clothing mesh;
(7) Map the deformed simplified mesh back to the mesh space at the original resolution to obtain the deformed human body and clothing mesh models.
In step (1), in practical applications the input 3D human body and clothing model meshes are often unevenly distributed: some regions are dense and others sparse. Applying such a non-uniform mesh directly to mesh deformation greatly degrades the deformation result, so the human body and clothing models need some optimization during preprocessing to make them uniform.
In step (2), the non-uniform human body mesh M_B and clothing mesh M_C input in step (1) are discretized, i.e., only the vertex information of the input meshes is taken, yielding the original vertex sets V_B and V_C. During discretization, the distances and topological connections between all vertices in the original mesh data are recorded so that the mapping can be restored in step (7).
In step (3), the discretized human body vertex set V_B and clothing vertex set V_C are voxelized, decomposing the space they occupy into n×n voxels, each voxel cube having radius d. Taking the human body mesh vertex set V_B as an example, for the i-th voxel, suppose the space it covers contains m human body mesh vertices; the m mesh vertices contained in this voxel are aggregated into a single vertex.
Further, all human body and clothing vertices are processed in the same way, yielding simplified, discrete, uniform human body mesh vertex set V′_B and clothing mesh vertex set V′_C.
Further, based on the original topological connections retained in step (2), edge connections are added between the vertices of the simplified vertex sets as the new topology, to be used when constructing the Laplacian matrix in step (4).
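The following is a minimal sketch of the voxel-based simplification and topology transfer described above, assuming vertices are stored as an (N, 3) NumPy array; the function names and the use of the voxel centroid as the aggregated vertex are assumptions for illustration, not details taken from the original text.

    import numpy as np

    def voxel_simplify(verts, d):
        """Aggregate all vertices that fall into the same voxel (cube of radius d)
        into a single representative vertex, here their centroid."""
        cell = 2.0 * d                                   # voxel edge length
        keys = np.floor(verts / cell).astype(np.int64)   # integer voxel index of each vertex
        uniq, inverse = np.unique(keys, axis=0, return_inverse=True)
        inverse = inverse.reshape(-1)                    # guard against NumPy version differences
        counts = np.bincount(inverse, minlength=len(uniq)).astype(float)
        simplified = np.zeros((len(uniq), 3))
        for axis in range(3):                            # centroid of each voxel's vertices
            simplified[:, axis] = np.bincount(inverse, weights=verts[:, axis]) / counts
        return simplified, inverse                       # inverse: original index -> simplified index

    def simplified_edges(original_edges, inverse):
        """Carry the recorded topology over to the simplified vertex set: an original
        edge (i, j) becomes an edge between the voxels containing i and j."""
        e = inverse[np.asarray(original_edges)]          # remap both endpoints
        e = e[e[:, 0] != e[:, 1]]                        # drop edges collapsed inside one voxel
        return np.unique(np.sort(e, axis=1), axis=0)     # deduplicate undirected edges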
In step (4), the Laplacian operator matrix L of the human body and clothing models is constructed. Since the human body and the clothing are two independent models whose vertices each have their own topological connections, they are separated in topology. At the same time, because the human body mesh must drive the deformation of the clothing mesh, the discretized vertex sets of the two models must be treated as a whole when constructing the Laplacian matrix. This requires taking both the topological and the geometric information of the 3D models into account, so a geometric Laplacian matrix is constructed in this step.
Further, let V denote the vertex set obtained after discretizing and simplifying the human body and clothing models; it contains n vertices, and the Euclidean coordinates of any vertex i can be written as v_i = [v_ix, v_iy, v_iz]^T ∈ R^3, the vertex set being V = {v_1, v_2, ..., v_n}.
Further, the positions of all vertices of the vertex set V can be expressed as a vector F of dimension n. Correspondingly, the Laplacian matrix is an n×n matrix L. Applying the Laplacian operation to the discretized human body and clothing models therefore amounts to multiplying the Laplacian operator matrix by the vertex position vector, L × F.
Further, the Laplacian matrix is a sparse matrix whose non-zero entries are assigned in a way similar to the adjacency matrix of the vertices. Considering the topological information of the vertex set V, if two vertices i and j are connected by an edge, the weight between them is non-zero, i.e., w_ij ≠ 0, and the corresponding entry of the Laplacian matrix is a_ij = w_ij (i ≠ j).
Further, the diagonal entry a_ii of the Laplacian matrix equals the number of vertices connected to vertex i by an edge.
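A sketch of how the Laplacian matrix described in step (4) could be assembled over the combined body-plus-clothing simplified vertex set, using scipy; the uniform edge weight w = -1 is an assumption (the text only requires w_ij ≠ 0), while the diagonal is the vertex degree as stated above.

    import numpy as np
    import scipy.sparse as sp

    def build_laplacian(n_vertices, edges, w=-1.0):
        """edges: unique undirected edges, shape (E, 2), over the combined vertex set."""
        edges = np.asarray(edges)
        i, j = edges[:, 0], edges[:, 1]
        rows = np.concatenate([i, j])                  # each edge contributes (i, j) and (j, i)
        cols = np.concatenate([j, i])
        vals = np.full(rows.shape, w)                  # a_ij = w_ij
        off_diag = sp.coo_matrix((vals, (rows, cols)), shape=(n_vertices, n_vertices))
        degree = np.bincount(rows, minlength=n_vertices).astype(float)   # a_ii = number of neighbours
        return (sp.diags(degree) + off_diag).tocsr()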
In step (5), the inverse matrix is solved in a preprocessing step. The core of Laplacian deformation is to transform vertex coordinates from Euclidean space to differential coordinate space. To keep the local details of the clothing model unchanged, the local differential coordinates must remain unchanged after deformation. The whole deformation process is as follows:
Step 1: compute the differential coordinates of every vertex of the vertex set V:
Δ = L(V),
where Δ denotes the differential coordinates (Laplacian coordinates) of the vertices, with three components corresponding to the three axes of the local differential coordinate space.
Step 2: move some vertices of the human body model and treat them as the deformation handle, obtaining the new Euclidean coordinates of the handle vertices:
v′_i = u_i, i ∈ C,
where C is the set of all handle vertices, u_i is the new position of the i-th handle vertex, and v′_i is the new position of the i-th vertex.
Step 3: from the differential coordinates and the new handle positions, compute the positions of the remaining vertices of V by least squares, minimizing the deviation of the Laplacian coordinates of V′ from Δ together with the deviation of the handle vertices from their target positions u_i, where V′ is the vector of new positions of all vertices.
Step 4: simplifying the optimization equation of Step 3 turns the optimization problem into solving the linear system
A V′ = b,
where A stacks the Laplacian matrix with the rows that constrain the handle vertices, and b stacks the differential coordinates Δ with the handle positions u_i.
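The exact contents of A and b appear only as equation images in the source; one standard realization that is consistent with the least-squares problem of Step 3 (an assumption stated here for clarity, not a quotation of the original) is

$$A=\begin{bmatrix}L\\ I_C\end{bmatrix},\qquad b=\begin{bmatrix}\Delta\\ U\end{bmatrix},\qquad (I_C)_{ki}=\begin{cases}1 & \text{if the } k\text{-th handle vertex is vertex } i,\\ 0 & \text{otherwise,}\end{cases}$$

where U stacks the handle targets u_i row by row, so that ||A V′ - b||² = ||L V′ - Δ||² + Σ_{i∈C} ||v′_i - u_i||².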
Further, the above optimization problem can be written as
min ||Ax - b||.
Since A is not square, the system cannot be solved directly, so it is rewritten as
A^T A x = A^T b,
x = (A^T A)^{-1} A^T b.
Further, A^T A is positive definite and can be decomposed into the product of two triangular matrices:
A^T A = R^T R.
Further, the system then becomes
R^T R x = A^T b,
which is solved through an intermediate variable: with y = R x, R^T y = A^T b, so y = (R^T)^{-1} A^T b and x = R^{-1} y.
Therefore, the value of the inverse matrix R^{-1} needs to be computed in advance, and the positions x of all vertices of the final vertex set are solved through the intermediate variable.
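A sketch of the precomputation and the per-frame solve, assuming the stacked form of A and b given above. The original precomputes R^{-1} from A^T A = R^T R; this sketch keeps the same idea but stores a reusable sparse LU factorization of A^T A instead of an explicit inverse, which is an implementation choice, not a detail of the original.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def prefactorize(L, handle_ids, handle_weight=1.0):
        """Build A = [L; w*I_C] once and factorize the normal equations A^T A."""
        n = L.shape[0]
        k = len(handle_ids)
        I_C = sp.coo_matrix((np.full(k, handle_weight),
                             (np.arange(k), np.asarray(handle_ids))), shape=(k, n))
        A = sp.vstack([L, I_C]).tocsc()
        factor = spla.splu((A.T @ A).tocsc())          # done once, in preprocessing
        return A, factor

    def solve_deformation(A, factor, delta, handle_targets, handle_weight=1.0):
        """delta: (n, 3) differential coordinates of the rest pose (delta = L @ V);
        handle_targets: (|C|, 3) new positions u_i of the handle vertices."""
        b = np.vstack([delta, handle_weight * np.asarray(handle_targets)])
        return factor.solve(A.T @ b)                   # x = (A^T A)^{-1} A^T b, per axis

In virtual fitting only handle_targets changes from frame to frame, so prefactorize runs once during preprocessing and solve_deformation is the only per-frame cost.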
In step (6), based on the Laplacian matrix, the human body mesh is edited as the control vertices, driving the clothing mesh to deform smoothly in real time. Following the process described in step (5), in virtual fitting the motion information input by the customer is used as the new positions of the control-handle vertices of the human body model, and solving the above least-squares problem yields the new positions of the deformed human body and clothing models.
In step (7), the simplified, deformed human body and clothing meshes are mapped back to the mesh space at the original resolution according to the previously recorded distances and topological connections, yielding the final deformed human body and clothing meshes.
Further, in step (3) the simplified human body mesh vertex set V′_B and clothing mesh vertex set V′_C were obtained through voxelization and aggregation. Taking the human body mesh vertices as an example, for a point i in the original, unsimplified human body mesh vertex set V_B, suppose there are m simplified human body mesh vertices s_1, s_2, ..., s_m within a given distance range, and record the distances between point i and these m simplified vertices.
Further, the Euclidean coordinates of point i can then be expressed as a weighted combination of the Euclidean coordinates of the m simplified vertices, where the weights are given by a weight function taking the recorded distances as its argument.
Further, according to the weight function and the neighboring-vertex relations recorded in step (3), the new positions of the original-resolution human body mesh vertices can be calculated from the new positions of the deformed simplified human body mesh vertices, while the original topological connections are restored and edge connections are added between the restored vertices.
Further, the same method is applied to the clothing mesh vertices, finally obtaining the mapped, original-resolution, deformed human body and clothing meshes.
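A sketch of the mapping back to the original resolution, using the per-vertex neighbor lists and distances recorded in step (3); the concrete weight function is shown only as an equation image in the source, so inverse-distance weights normalized to sum to one are assumed here.

    import numpy as np

    def restore_resolution(simplified_new, neighbor_ids, neighbor_dists, eps=1e-8):
        """For every original-resolution vertex, blend the new positions of its m
        nearby simplified vertices using the distances recorded during simplification."""
        restored = np.zeros((len(neighbor_ids), 3))
        for i, (ids, dists) in enumerate(zip(neighbor_ids, neighbor_dists)):
            w = 1.0 / (np.asarray(dists, dtype=float) + eps)   # w(d): closer vertices weigh more
            w /= w.sum()                                       # normalize weights to sum to one
            restored[i] = w @ simplified_new[np.asarray(ids)]
        return restored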
Unlike existing clothing mesh deformation methods, the present invention provides a clothing deformation method based on human body Laplacian deformation that can simulate in real time how clothing deforms with the motion of the human body during virtual fitting, which is specifically reflected in:
(1) The clothing mesh is deformed by Laplacian deformation with the human body as the control vertices; only the Laplacian matrix and the inverse matrix need to be solved in advance to achieve real-time deformation of the clothing mesh;
(2) Local features are well preserved during clothing mesh deformation, avoiding the problem of excessive stretching;
(3) The deformation of the human body is propagated smoothly to the clothing mesh by the algorithm.
Brief Description of the Drawings
Fig. 1 shows the result of two female models of different body shapes wearing a T-shirt and trousers according to the present invention;
Fig. 2 shows the result of two male models of different body shapes wearing a T-shirt and shorts according to the present invention.
Detailed Description of the Embodiments
The technical solution of the present invention is described in further detail below with reference to the drawings and examples.
(1) Input the human body and clothing polygon mesh models;
(2) Discretize the non-uniform human body and clothing mesh models input in step (1);
(3) Cluster all discretized mesh vertices to reduce the vertex count and form a uniform discrete vertex set;
(4) Construct the Laplacian matrix of the human body and clothing models;
(5) Solve the inverse matrix in a preprocessing step;
(6) Edit the human body mesh as control vertices to drive real-time smooth deformation of the clothing mesh;
(7) Map the deformed simplified mesh back to the mesh space at the original resolution to obtain the deformed human body and clothing mesh models.
In step (1), in practical applications the input 3D human body and clothing model meshes are often unevenly distributed: some regions are dense and others sparse. Applying such a non-uniform mesh directly to mesh deformation greatly degrades the deformation result, so the human body and clothing models need some optimization during preprocessing to make them uniform.
In step (2), the non-uniform human body mesh M_B and clothing mesh M_C input in step (1) are discretized, i.e., only the vertex information of the input meshes is taken, yielding the original vertex sets V_B and V_C. During discretization, the distances and topological connections between all vertices in the original mesh data are recorded so that the mapping can be restored in step (7).
In step (3), the discretized human body vertex set V_B and clothing vertex set V_C are voxelized, decomposing the space they occupy into n×n voxels, each voxel cube having radius d. Taking the human body mesh vertex set V_B as an example, for the i-th voxel, suppose the space it covers contains m human body mesh vertices; the m mesh vertices contained in this voxel are aggregated into a single vertex.
(3-3) All human body and clothing vertices are processed in the same way, yielding simplified, discrete, uniform human body mesh vertex set V′_B and clothing mesh vertex set V′_C.
(3-4) Based on the original topological connections retained in step (2), edge connections are added between the vertices of the simplified vertex sets as the new topology, to be used when constructing the Laplacian matrix in step (4).
In step (4), the Laplacian operator matrix L of the human body and clothing models is constructed. Since the human body and the clothing are two independent models whose vertices each have their own topological connections, they are separated in topology. At the same time, because the human body mesh must drive the deformation of the clothing mesh, the discretized vertex sets of the two models must be treated as a whole when constructing the Laplacian matrix. This requires taking both the topological and the geometric information of the 3D models into account, so a geometric Laplacian matrix is constructed in this step.
(4-1) Let V denote the vertex set obtained after discretizing and simplifying the human body and clothing models; it contains n vertices, and the Euclidean coordinates of any vertex i can be written as v_i = [v_ix, v_iy, v_iz]^T ∈ R^3, the vertex set being V = {v_1, v_2, ..., v_n}.
(4-2) The positions of all vertices of the vertex set V can be expressed as a vector F of dimension n. Correspondingly, the Laplacian matrix is an n×n matrix L. Applying the Laplacian operation to the discretized human body and clothing models therefore amounts to multiplying the Laplacian matrix by the vertex position vector, i.e., L × F.
(4-3) The Laplacian matrix is a sparse matrix whose non-zero entries are assigned in a way similar to the adjacency matrix of the vertices. Considering the topological information of the vertex set V, if two vertices i and j are connected by an edge, the weight between them is non-zero, i.e., w_ij ≠ 0, and the corresponding entry of the Laplacian matrix is a_ij = w_ij (i ≠ j).
(4-4) The diagonal entry a_ii of the Laplacian matrix equals the number of vertices connected to vertex i by an edge.
In step (5), the inverse matrix is solved in a preprocessing step. The core of Laplacian deformation is to transform vertex coordinates from Euclidean space to differential coordinate space. To keep the local details of the clothing model unchanged, the local differential coordinates must remain unchanged after deformation. The whole deformation process is as follows:
(5-1) Step 1: compute the differential coordinates of every vertex of the vertex set V:
Δ = L(V),
where Δ denotes the differential coordinates of the vertices.
(5-2) Step 2: move some vertices of the human body model and treat them as the deformation handle, obtaining the new Euclidean coordinates of the handle vertices:
v′_i = u_i, i ∈ C,
where C is the set of all handle vertices, u_i is the new position of the i-th handle vertex, and v′_i is the new position of the i-th vertex.
(5-3) Step 3: from the differential coordinates and the new handle positions, compute the positions of the remaining vertices of V by least squares, minimizing the deviation of the Laplacian coordinates of V′ from Δ together with the deviation of the handle vertices from their target positions u_i, where V′ is the vector of new positions of all vertices.
(5-4) Step 4: simplifying the optimization equation of Step 3 turns the optimization problem into solving the following linear system:
A V′ = b,
where A stacks the Laplacian matrix with the rows that constrain the handle vertices, and b stacks the differential coordinates Δ with the handle positions u_i.
(5-5) The above optimization problem can be written as
min ||Ax - b||.
Since A is not square, the system cannot be solved directly, so it is rewritten as
A^T A x = A^T b,
x = (A^T A)^{-1} A^T b.
(5-6) A^T A is positive definite and can be decomposed into the product of two triangular matrices:
A^T A = R^T R.
(5-7) The system then becomes
R^T R x = A^T b,
which is solved through an intermediate variable: with y = R x, R^T y = A^T b, so y = (R^T)^{-1} A^T b and x = R^{-1} y.
Therefore, the value of the inverse matrix R^{-1} needs to be computed in advance, and the positions x of all vertices of the final vertex set are solved through the intermediate variable.
In step (6), based on the Laplacian matrix, the human body mesh is edited as the control vertices, driving the clothing mesh to deform smoothly in real time. Following the process described in step (5), in virtual fitting the motion information input by the user is used as the new positions of the control-handle vertices of the human body model, and solving the above least-squares problem yields the new positions of the deformed human body and clothing models.
In step (7), the simplified, deformed human body and clothing meshes are mapped back to the mesh space at the original resolution according to the previously recorded distances and topological connections, yielding the final deformed human body and clothing meshes.
(7-1) In step (3), the simplified human body mesh vertex set V′_B and clothing mesh vertex set V′_C were obtained through voxelization and aggregation. Taking the human body mesh vertices as an example, for a point i in the original, unsimplified human body mesh vertex set V_B, suppose there are m simplified human body mesh vertices s_1, s_2, ..., s_m within a given distance range, and record the distances between point i and these m simplified vertices.
(7-2) The Euclidean coordinates of point i can then be expressed as a weighted combination of the Euclidean coordinates of the m simplified vertices, where the weights are given by a weight function taking the recorded distances as its argument.
(7-3) According to the weight function and the neighboring-vertex relations recorded in step (3), the new positions of the original-resolution human body mesh vertices are calculated from the new positions of the deformed simplified human body mesh vertices, while the original topological connections are restored and edge connections are added between the restored vertices.
(7-4) The same method is applied to the clothing mesh vertices, finally obtaining the mapped, original-resolution, deformed human body and clothing meshes.
The foregoing is a detailed description of the present invention, but the embodiments of the present invention are not limited to the above; any other changes, substitutions, or simplified combinations made under the core guiding idea of this patent fall within its protection scope.

Claims (8)

  1. A clothing deformation method based on human body Laplacian deformation, comprising the following steps:
    (1) inputting human body and clothing polygon mesh models;
    (2) discretizing the non-uniform human body and clothing mesh models input in step (1);
    (3) clustering all discretized mesh vertices to reduce the vertex count and form a uniform discrete vertex set;
    (4) constructing the Laplacian matrix of the human body and clothing models;
    (5) solving the inverse matrix in a preprocessing step;
    (6) editing the human body mesh as control vertices to drive real-time smooth deformation of the clothing mesh;
    (7) mapping the deformed simplified meshes back to the mesh space at the original resolution to obtain the deformed human body and clothing mesh models.
  2. The discretization of the non-uniform human body and clothing meshes according to claim 1, characterized in that in step (1), in practical applications, the input 3D human body and clothing mesh models are often unevenly distributed, with some regions dense and others sparse; applying such a non-uniform mesh directly to mesh deformation strongly affects the deformation result, so the human body and clothing models are optimized during preprocessing to make them uniform.
  3. The discretization of the input non-uniform human body and clothing meshes according to claim 1, characterized in that in step (2) the input non-uniform human body mesh M_B and clothing mesh M_C are discretized, i.e., only the vertex information of the input meshes is taken, yielding the original vertex sets V_B and V_C; during discretization, the distances and topological connections between all vertices in the original mesh data are recorded for use in the mapping of step (7).
  4. The clustering of all discretized mesh vertices to reduce their number and form a uniform discrete vertex set according to claim 1, characterized in that in step (3) the discretized human body vertex set V_B and clothing vertex set V_C are voxelized, decomposing the space they occupy into n×n voxels, each voxel cube having radius d; taking the human body mesh vertex set V_B as an example, for the i-th voxel, suppose the space it covers contains m human body mesh vertices; the m mesh vertices contained in this voxel are aggregated into a single vertex;
    further, all human body and clothing vertices are processed in the same way, yielding simplified, discrete, uniform human body mesh vertex set V′_B and clothing mesh vertex set V′_C;
    further, based on the original topological connections retained in step (2), edge connections are added between the vertices of the simplified vertex sets as the new topology, to be used when constructing the Laplacian matrix in step (4).
  5. The Laplacian matrix according to claim 4, characterized in that, since the human body and the clothing are two independent models whose vertices each have their own topological connections, they are separated in topology; at the same time, because the human body mesh must drive the deformation of the clothing mesh, the discretized vertex sets of the two models are treated as a whole when constructing the Laplacian matrix; this requires taking both the topological and the geometric information of the 3D models into account, so a geometric Laplacian matrix is constructed in this step;
    further, let V denote the vertex set obtained after discretizing and simplifying the human body and clothing models; it contains n vertices, and the Euclidean coordinates of any vertex i can be written as v_i = [v_ix, v_iy, v_iz]^T ∈ R^3, the vertex set being V = {v_1, v_2, ..., v_n};
    further, the positions of all vertices of the vertex set V can be expressed as a vector F of dimension n; correspondingly, the Laplacian matrix is an n×n matrix L, and applying the Laplacian operation to the discretized human body and clothing models amounts to multiplying the Laplacian matrix by the vertex position vector, L × F;
    further, the Laplacian matrix is a sparse matrix whose non-zero entries are assigned in a way similar to the adjacency matrix of the vertices; considering the topological information of the vertex set V, if two vertices i and j are connected by an edge, the weight between them is non-zero, i.e., w_ij ≠ 0, and the corresponding entry of the Laplacian matrix is a_ij = w_ij (i ≠ j);
    further, the diagonal entry a_ii of the Laplacian matrix equals the number of vertices connected to vertex i by an edge.
  6. The preprocessing solve of the inverse matrix according to claim 1, characterized in that the core of Laplacian deformation is to transform vertex coordinates from Euclidean space to differential coordinate space; to keep the local details of the clothing model unchanged, the local differential coordinates must remain unchanged after deformation; the whole deformation process is as follows:
    Step 1: compute the differential coordinates of every vertex of the vertex set V:
    Δ = L(V),
    where Δ denotes the differential coordinates (Laplacian coordinates) of the vertices, with three components corresponding to the three axes of the local differential coordinate space;
    Step 2: move some vertices of the human body model and treat them as the deformation handle, obtaining the new Euclidean coordinates of the handle vertices:
    v′_i = u_i, i ∈ C,
    where C is the set of all handle vertices, u_i is the new position of the i-th handle vertex, and v′_i is the new position of the i-th vertex;
    Step 3: from the differential coordinates and the new handle positions, compute the positions of the remaining vertices of V by least squares, minimizing the deviation of the Laplacian coordinates of V′ from Δ together with the deviation of the handle vertices from their target positions u_i, where V′ is the vector of new positions of all vertices;
    Step 4: simplifying the optimization equation of Step 3 turns the optimization problem into solving the linear system
    A V′ = b,
    where A stacks the Laplacian matrix with the rows that constrain the handle vertices, and b stacks the differential coordinates Δ with the handle positions u_i;
    further, the above optimization problem can be written as
    min ||Ax - b||;
    since A is not square, the system cannot be solved directly, so it is rewritten as
    A^T A x = A^T b,
    x = (A^T A)^{-1} A^T b;
    further, A^T A is positive definite and can be decomposed into the product of two triangular matrices:
    A^T A = R^T R;
    further, the system becomes
    R^T R x = A^T b,
    which is solved through an intermediate variable: with y = R x, R^T y = A^T b, so y = (R^T)^{-1} A^T b and x = R^{-1} y;
    therefore, the value of the inverse matrix R^{-1} needs to be computed in advance, and the positions x of all vertices of the final vertex set are solved through the intermediate variable.
  7. The editing of the human body mesh as control vertices according to the Laplacian matrix to drive real-time smooth deformation of the clothing mesh according to claim 1, characterized in that, following the process described in step (5), in virtual fitting the motion information input by the customer is used as the new positions of the control-handle vertices of the human body model, and the new positions of the deformed human body and clothing models are obtained by solving the least-squares problem of claim 6.
  8. The mapping of the deformed simplified meshes back to the mesh space at the original resolution according to claim 1, characterized in that in step (3) the simplified human body mesh vertex set V′_B and clothing mesh vertex set V′_C are obtained through voxelization and aggregation; taking the human body mesh vertices as an example, for a point i in the original, unsimplified human body mesh vertex set V_B, suppose there are m simplified human body mesh vertices s_1, s_2, ..., s_m within a given distance range, and record the distances between point i and these m simplified vertices;
    the Euclidean coordinates of point i can then be expressed as a weighted combination of the Euclidean coordinates of the m simplified vertices, where the weights are given by a weight function taking the recorded distances as its argument;
    according to the weight function and the neighboring-vertex relations recorded in step (3), the new positions of the original-resolution human body mesh vertices are calculated from the new positions of the deformed simplified human body mesh vertices, while the original topological connections are restored and edge connections are added between the restored vertices;
    the same method is applied to the clothing mesh vertices, finally obtaining the mapped, original-resolution, deformed human body and clothing meshes.
PCT/CN2019/105295 2019-05-07 2019-09-11 Clothing deformation method based on human body Laplacian deformation WO2020224144A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19928268.2A EP3958217A4 (en) 2019-05-07 2019-09-11 GARMENT DEFORMATION METHOD BASED ON HUMAN BODY LAPLACE DEFORMATION
US17/520,593 US11704871B2 (en) 2019-05-07 2021-11-05 Garment deformation method based on the human body's Laplacian deformation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910376727.2A CN110176063B (zh) 2019-05-07 2019-05-07 一种基于人体拉普拉斯变形的服装变形方法
CN201910376727.2 2019-05-07

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/520,593 Continuation US11704871B2 (en) 2019-05-07 2021-11-05 Garment deformation method based on the human body's Laplacian deformation

Publications (1)

Publication Number Publication Date
WO2020224144A1 true WO2020224144A1 (zh) 2020-11-12

Family

ID=67690571

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/105295 WO2020224144A1 (zh) 2019-05-07 2019-09-11 一种基于人体拉普拉斯变形的服装变形方法

Country Status (4)

Country Link
US (1) US11704871B2 (zh)
EP (1) EP3958217A4 (zh)
CN (1) CN110176063B (zh)
WO (1) WO2020224144A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113792859A (zh) * 2021-09-13 2021-12-14 中南大学 Unsupervised shape correspondence method and human body shape correspondence method

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110176063B (zh) 2019-05-07 2022-05-27 浙江凌迪数字科技有限公司 Clothing deformation method based on human body Laplacian deformation
CN115511578A (zh) * 2022-10-18 2022-12-23 深圳市影儿服饰有限公司 Intelligent fitting algorithm adaptive to the user's figure

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103678769A (zh) * 2013-11-12 2014-03-26 浙江大学 Realistic garment creation method based on human skeleton and two-dimensional deformation
US20140375635A1 (en) * 2013-06-21 2014-12-25 Kabushiki Kaisha Toshiba Methods and systems for generating a three dimensional representation of a subject
CN104821006A (zh) * 2015-05-18 2015-08-05 浙江理工大学 Dynamic clothing simulation method based on hybrid bounding boxes of the human body
CN106096130A (zh) * 2016-06-12 2016-11-09 北京航空航天大学 Simulation and optimization method for garments of different materials based on Laplacian coordinates
CN106960463A (zh) * 2017-03-13 2017-07-18 东华大学 Fast three-dimensional virtual garment fitting method for real scanned human bodies
CN107798713A (zh) * 2017-09-04 2018-03-13 昆明理工大学 Image deformation method for two-dimensional virtual try-on
CN110176063A (zh) * 2019-05-07 2019-08-27 上海凌笛数码科技有限公司 Clothing deformation method based on human body Laplacian deformation

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070273711A1 (en) * 2005-11-17 2007-11-29 Maffei Kenneth C 3D graphics system and method
GB201104312D0 (en) * 2011-03-14 2011-04-27 Bell Alexandra Improved virtual try on simulation service
CN103473806B (zh) * 2013-09-23 2016-03-16 北京航空航天大学 一种基于单幅图像的服装三维模型构建方法
EP3335197A1 (en) * 2015-08-14 2018-06-20 Metail Limited Method and system for generating an image file of a 3d garment model on a 3d body model
US9754410B2 (en) * 2017-02-15 2017-09-05 StyleMe Limited System and method for three-dimensional garment mesh deformation and layering for garment fit visualization
US11145138B2 (en) * 2017-04-28 2021-10-12 Linden Research, Inc. Virtual reality presentation of layers of clothing on avatars
US11094136B2 (en) * 2017-04-28 2021-08-17 Linden Research, Inc. Virtual reality presentation of clothing fitted on avatars
US10373373B2 (en) * 2017-11-07 2019-08-06 StyleMe Limited Systems and methods for reducing the stimulation time of physics based garment simulations
CN109308732B (zh) * 2018-09-07 2019-07-02 中山大学 基于控制网格变形的部件网格融合方法及系统

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140375635A1 (en) * 2013-06-21 2014-12-25 Kabushiki Kaisha Toshiba Methods and systems for generating a three dimensional representation of a subject
CN103678769A (zh) * 2013-11-12 2014-03-26 浙江大学 Realistic garment creation method based on human skeleton and two-dimensional deformation
CN104821006A (zh) * 2015-05-18 2015-08-05 浙江理工大学 Dynamic clothing simulation method based on hybrid bounding boxes of the human body
CN106096130A (zh) * 2016-06-12 2016-11-09 北京航空航天大学 Simulation and optimization method for garments of different materials based on Laplacian coordinates
CN106960463A (zh) * 2017-03-13 2017-07-18 东华大学 Fast three-dimensional virtual garment fitting method for real scanned human bodies
CN107798713A (zh) * 2017-09-04 2018-03-13 昆明理工大学 Image deformation method for two-dimensional virtual try-on
CN110176063A (zh) * 2019-05-07 2019-08-27 上海凌笛数码科技有限公司 Clothing deformation method based on human body Laplacian deformation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3958217A4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113792859A (zh) * 2021-09-13 2021-12-14 中南大学 Unsupervised shape correspondence method and human body shape correspondence method

Also Published As

Publication number Publication date
CN110176063A (zh) 2019-08-27
EP3958217A1 (en) 2022-02-23
CN110176063B (zh) 2022-05-27
US11704871B2 (en) 2023-07-18
EP3958217A4 (en) 2022-07-06
US20220058873A1 (en) 2022-02-24

Similar Documents

Publication Publication Date Title
Wang et al. A framework for 3D model reconstruction in reverse engineering
Yu et al. Mesh editing with poisson-based gradient field manipulation
WO2020224144A1 (zh) Clothing deformation method based on human body Laplacian deformation
CN105844711A (zh) Carving a 2D image on subdivision surfaces
US20230169727A1 (en) Generative Nonlinear Human Shape Models
CN103678769B (zh) Realistic garment creation method based on human skeleton and two-dimensional deformation
CN107481313A (zh) Dense three-dimensional object reconstruction method based on learning effective point cloud generation
JP7294788B2 (ja) Classification of 2D images according to types of 3D arrangement
CN104392484B (zh) Three-dimensional tree modeling method and device
Pan et al. Automatic skinning and weight retargeting of articulated characters using extended position-based dynamics
CN109816789B (zh) Three-dimensional model parameterization method based on deep neural networks
Petkov et al. Interactive visibility retargeting in vr using conformal visualization
Li et al. Detail‐Aware Deep Clothing Animations Infused with Multi‐Source Attributes
Yu et al. On generating realistic avatars: dress in your own style
Qi et al. Divided Voxels: an efficient algorithm for interactive cutting of deformable objects
Gu et al. Customized 3D digital human model rebuilding by orthographic images-based modelling method through open-source software
Weng et al. As‐Rigid‐As Possible Distance Field Metamorphosis
Li et al. 3d shape reconstruction of furniture object from a single real indoor image
US11645813B2 (en) Techniques for sculpting digital faces based on anatomical modeling
Bui et al. Height-field construction using cross contours
Fadaifard et al. Image warping for retargeting garments among arbitrary poses
Zhang et al. Fast Mesh Reconstruction from Single View Based on GCN and Topology Modification.
Mensmann et al. Interactive cutting operations for generating anatomical illustrations from volumetric data sets
Zheng et al. Creating reference image of realistic cloth folded surface using sketch-based interactive modeling
Zhang et al. 3D design platform of virtual national costume based on digital nonlinear random matrix

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19928268; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: DE)

ENP Entry into the national phase (Ref document number: 2019928268; Country of ref document: EP; Effective date: 20211102)