CN104376597B - A kind of hair method for reconstructing based on multi-direction constraint - Google Patents
A kind of hair method for reconstructing based on multi-direction constraint
- Publication number
- CN104376597B CN104376597B CN201410740979.6A CN201410740979A CN104376597B CN 104376597 B CN104376597 B CN 104376597B CN 201410740979 A CN201410740979 A CN 201410740979A CN 104376597 B CN104376597 B CN 104376597B
- Authority
- CN
- China
- Prior art keywords
- hair
- point
- field
- grid
- geometry
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/30—Polynomial surface description
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/04—Indexing scheme for image data processing or generation, in general involving 3D image data
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Pure & Applied Mathematics (AREA)
- Mathematical Optimization (AREA)
- Mathematical Physics (AREA)
- Mathematical Analysis (AREA)
- Algebra (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a hair reconstruction method based on multi-directional constraints, which reconstructs a complete hair model from the hair surface geometry, hair surface orientation, and head model geometry recovered from images. The steps are: (1) based on the Laplace equation, solve a spatially gridded hair direction field from the hair surface orientation; (2) based on the Laplace equation, solve a spatially gridded distance field from the hair surface geometry and the head model geometry; (3) using the direction field from step (1), the gradient of the distance field from step (2), and the known hair surface orientation, reconstruct the geometry of each strand from root points uniformly distributed on the scalp; (4) starting from the initial geometry obtained in step (3) and using the direction field from step (1), optimize the initial hair model by energy minimization to obtain the final hair model. The invention can reconstruct a hair model whose strands grow and are distributed uniformly and whose appearance resembles the captured images.
Description
Technical Field
The invention belongs to the field of computer virtual reality, and in particular relates to hair reconstruction based on multi-directional constraints.
Background Art
In computer graphics, correct modeling of virtual characters has long been an important research topic. Whether in film special effects, video games, virtual reality, or other graphics-related fields, visually realistic character modeling has a wide range of applications. Hair is an important feature of a character: its shape varies from person to person and is sometimes a key feature distinguishing one person from another. At the same time, hair consists of a very large number of strands, and its styling, motion, and optical properties are all highly complex, so realistic hair modeling has become a difficult research problem in computer graphics.
In film, animation, and virtual reality, the geometric modeling of three-dimensional hair has always been tedious work. At present, most models still rely on artists styling and modeling the hair by hand with interactive tools. In recent years, researchers have begun to try automated methods that reconstruct models resembling real hair from image data. Image-based hair modeling methods generally derive preliminary geometric and directional constraints of the hair from the image information and then reconstruct a complete hair model from them. Current image-based 3D hair reconstruction methods usually first acquire the hair surface geometry and recover the orientation of the hair surface by analyzing the two-dimensional orientation information of each image. After that, one class of methods directly diffuses the surface orientation field into the whole volume and reconstructs the hair from this uniform direction field; the resulting hair is sufficiently uniform, but its shape differs considerably from the images. Another class of methods first reconstructs the outer strands and then connects them to the scalp; the resulting hair resembles the images in appearance, but neither a uniform strand distribution nor the internal structure can be guaranteed.
Compared with these two traditional approaches, the present invention combines their advantages, so that the hair grows along a uniform volumetric direction field in the interior and follows the surface orientation near the exterior.
Therefore, the work of the present invention on image-based hair modeling is of great research significance and has promising applications.
Summary of the Invention
The technical problem solved by the invention: to overcome certain limitations of the prior art, a hair reconstruction method based on multi-directional constraints is provided, which effectively reconstructs the strand geometry of the hair after preliminary hair surface data have been acquired and yields a visually realistic hair model.
The technical solution of the invention: taking the hair surface geometry, hair surface orientation, and head model geometry as input, the hair is reconstructed under multi-directional constraints, in which a gridded direction field and the hair surface orientation jointly constrain the strand reconstruction to rebuild the hair model. The method is characterized by the following steps:
(1) Reconstruct a spatially gridded hair direction field from the input hair surface orientation;
(2) Reconstruct a spatially gridded distance field from the input hair surface geometry and head model geometry;
(3) Reconstruct the strand geometry using the direction field obtained in (1), the input hair surface orientation, and the distance field obtained in (2);
(4) Optimize the hair model obtained in (3) using the direction field obtained in (1).
The advantages of the present invention are:
1. In the strand reconstruction process designed by the invention, strands grow uniformly out of the scalp and gradually extend toward the hair surface; the strand distribution is even and reasonable, and under the constraint of the spatial grid direction field the growth direction is natural.
2. In the strand reconstruction process designed by the invention, as a strand grows close to the hair surface it follows the surface orientation, so the exterior of the hair closely matches the captured images.
3. The multi-directionally constrained hair reconstruction designed by the invention keeps the hair growth evenly distributed while preserving a realistic external shape, which facilitates reuse of the reconstructed model.
Brief Description of the Drawings
Fig. 1 is a data flow chart of the method of the present invention.
Detailed Description
The present invention is described in further detail below with reference to the accompanying drawing and specific embodiments.
The main flow of the method is shown in Fig. 1. With the hair surface geometry, hair surface orientation, and head model geometry as input, the specific steps are as follows:
(1) Solving the spatially gridded hair direction field.
Solving the spatially gridded hair direction field provides a continuously distributed grid direction field for the entire reconstruction region. First, the bounding box of the whole region is computed from the points of the input hair surface geometry and head model geometry; a grid cell size is chosen, and the region is uniformly subdivided into cubic cells, producing the spatial grid. A Laplace equation is then set up on this grid: cells containing hair surface geometry are assigned the orientation of that geometry, cells on the bounding-box boundary are set as Dirichlet boundary conditions, and the orientations of the remaining cells are unknowns. Finally, the Laplace equation is solved to obtain a spatially smooth grid direction field.
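The sketch below illustrates this kind of Dirichlet-constrained Laplace solve on a regular voxel grid. It is a minimal illustration only, not the patented implementation: the function name solve_laplace_field, the use of NumPy/SciPy sparse solvers, and the per-channel treatment of vector data are assumptions. For the direction field, each of the three vector components can be diffused independently and the result re-normalized afterwards.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_laplace_field(values, known):
    """Diffuse fixed per-voxel values over a regular grid by solving the
    discrete Laplace equation with Dirichlet constraints.

    values: (nx, ny, nz, c) array; voxels where known is True hold fixed data
            (hair-surface orientations, or scalar distance labels).
    known:  (nx, ny, nz) boolean mask of constrained voxels.
    Returns an array of the same shape with the unknown voxels filled in.
    """
    nx, ny, nz, c = values.shape
    n = nx * ny * nz
    idx = np.arange(n).reshape(nx, ny, nz)
    flat_known = known.ravel()
    flat_vals = values.reshape(n, c)

    rows, cols, data = [], [], []
    b = np.zeros((n, c))
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                p = idx[i, j, k]
                if flat_known[p]:
                    # Dirichlet constraint: keep the given value.
                    rows.append(p); cols.append(p); data.append(1.0)
                    b[p] = flat_vals[p]
                    continue
                # Discrete Laplacian over the 6-connected neighbourhood.
                nbrs = [idx[i + di, j + dj, k + dk]
                        for di, dj, dk in [(-1, 0, 0), (1, 0, 0), (0, -1, 0),
                                           (0, 1, 0), (0, 0, -1), (0, 0, 1)]
                        if 0 <= i + di < nx and 0 <= j + dj < ny and 0 <= k + dk < nz]
                rows.append(p); cols.append(p); data.append(float(len(nbrs)))
                for q in nbrs:
                    rows.append(p); cols.append(q); data.append(-1.0)

    A = sp.csr_matrix((data, (rows, cols)), shape=(n, n))
    x = np.column_stack([spla.spsolve(A, b[:, ci]) for ci in range(c)])
    return x.reshape(nx, ny, nz, c)
```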
(2) Spatially gridded distance field.
To make the hair grow gradually from the scalp toward the hair surface, the invention uses a distance field to provide the corresponding direction constraint. First, using the grid obtained in step (1), a Laplace equation is set up in space: cells containing hair surface geometry are given the distance value 0, cells containing the head-model surface are given the value -1, and cells on the boundary of the spatial bounding box are set to 1. The Laplace equation is then solved to obtain a spatially continuous distance field with respect to the hair surface: it is 0 on the hair surface, negative inside the hair, and positive outside it.
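The snippet below sketches, under the same assumptions as above, how the distance-field labels could be assigned and how the field's gradient (used as a growth direction in step (3)) could be computed. The toy spherical head/hair shells, the grid size, and the reuse of solve_laplace_field from the previous sketch are illustrative stand-ins for the real input geometry.

```python
import numpy as np

# Toy stand-ins for the real masks: head surface near r=0.4, hair surface near
# r=0.7, bounding-box boundary on the outermost voxel layer.
nx = ny = nz = 32
t = np.linspace(-1.0, 1.0, nx)
X, Y, Z = np.meshgrid(t, t, t, indexing="ij")
r = np.sqrt(X**2 + Y**2 + Z**2)

head = np.abs(r - 0.4) < 0.05           # head-model surface cells -> -1
hair = np.abs(r - 0.7) < 0.05           # hair-surface cells       ->  0
box = np.zeros((nx, ny, nz), dtype=bool)
box[0, :, :] = True;  box[-1, :, :] = True
box[:, 0, :] = True;  box[:, -1, :] = True
box[:, :, 0] = True;  box[:, :, -1] = True   # bounding-box boundary -> +1

labels = np.zeros((nx, ny, nz, 1))
labels[head] = -1.0
labels[hair] = 0.0
labels[box] = 1.0
known = head | hair | box

D = solve_laplace_field(labels, known)[..., 0]     # smooth distance-like field
gD = np.stack(np.gradient(D), axis=-1)             # per-voxel gradient of D
gD /= np.linalg.norm(gD, axis=-1, keepdims=True) + 1e-8
```

With the sign convention of the text, the gradient points from the head outward toward the hair surface, so adding a positive multiple of the gradient to a growing strand drives it from the scalp toward the surface.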
(3) Multi-directional strand reconstruction.
This step is the core of the invention: each strand is reconstructed starting from the scalp. First, the user specifies the hair growth region on the head model surface, and root points are sampled uniformly within that region. The growth direction at each root is set to a fixed blend between the head model's direction at that point and the tangent direction pointing away from the hair parting line. The whole strand is then reconstructed point by point: a strand is denoted ξ and consists of the points ξ(0), ξ(1), ..., ξ(N); reconstruction starts from the root ξ(0) and proceeds point by point, with each new point ξ(i) determined by formula (1).
ξ(i) = ξ(i-1) + α1·V(ξ(i-1)) + α2·∇D(ξ(i-1)) + α3·O(ξ(i-1)) + α4·S(ξ(i-1))    (1)
Here ξ(i-1) is the position of the point preceding ξ(i); V(ξ(i-1)) is the value of the direction field from step (1) at the position of ξ(i-1); D(ξ(i-1)) is the value of the distance field from step (2) at that position, and ∇D(ξ(i-1)) is the gradient direction of the distance field there; O(ξ(i-1)) is the growth direction at ξ(i-1), which for the root point is determined as described above and for all other points equals ξ(i-1) - ξ(i-2); S(ξ(i-1)) is the orientation of the hair surface geometry closest to ξ(i-1). In the formula, α1, α2, α3, α4 are the weights of the respective direction constraints: α1 and α2 increase as the point gets closer to the scalp, α4 increases as the point gets closer to the hair surface, and α3 is a constant.
For each strand, reconstruction stops when the distance field value D(ξ(i)) of the newly reconstructed point ξ(i) exceeds a given threshold, or when the total length of the strand exceeds a given threshold.
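A minimal sketch of this growth loop is given below, assuming the fields V, D, and gD from the previous sketches. The linear weight schedules for α1, α2, α4, the normalization of the blended direction, and the fixed step length are illustrative choices; the patent states only the qualitative behavior of the weights, not their exact form.

```python
import numpy as np
from scipy.spatial import cKDTree

def grow_strand(root, root_dir, V, D, gD, surf_pts, surf_dirs,
                grid_min, cell, step=0.02, d_stop=0.05, max_len=0.8):
    """Grow one strand from a scalp root following formula (1).

    V: (nx,ny,nz,3) direction field from step (1); D: (nx,ny,nz) distance
    field from step (2); gD: its normalized gradient; surf_pts/surf_dirs:
    hair-surface sample points and their orientations.
    """
    tree = cKDTree(surf_pts)
    shape = np.array(D.shape)

    def cell_idx(p):
        # Map a 3D position to the index of the grid cell containing it.
        i = np.clip(((p - grid_min) / cell).astype(int), 0, shape - 1)
        return tuple(i)

    pts = [np.asarray(root, float)]
    prev_dir = np.asarray(root_dir, float)
    length = 0.0
    while length < max_len:
        p = pts[-1]
        c = cell_idx(p)
        if D[c] > d_stop:                 # termination: past the hair surface
            break
        # Weights: interior terms dominate near the scalp (D ~ -1),
        # the surface term dominates near the hair surface (D ~ 0).
        t = np.clip(D[c] + 1.0, 0.0, 1.0)
        a1, a2, a3, a4 = 1.0 - t, 1.0 - t, 0.5, t
        _, j = tree.query(p)              # nearest hair-surface sample
        grow = a1 * V[c] + a2 * gD[c] + a3 * prev_dir + a4 * surf_dirs[j]
        grow /= np.linalg.norm(grow) + 1e-8
        nxt = p + step * grow             # one step of formula (1)
        pts.append(nxt)
        prev_dir = grow                   # O at the next point: ξ(i) - ξ(i-1), normalized
        length += step
    return np.array(pts)
```

In practice this loop would be run once for every sampled root point to produce the initial hair model that step (4) then refines.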
(4) Hair optimization based on energy minimization.
The energy-minimization-based hair optimization refines all strands reconstructed in step (3). The optimization is performed by minimizing an energy E, which is defined by formula (2).
Here ξ0(i) is an initial strand point reconstructed in step (3) and ξ(i) is the corresponding point being solved for; V and O have the same meaning as in step (3), i.e. V(ξ0(i-1)) is the value of the direction field from step (1) at the position of ξ0(i-1), and O(ξ0(i-1)) is the growth direction of ξ0(i-1). N(ξ0(i)) is the set consisting of, for every strand passing sufficiently close to ξ0(i), the point on that strand closest to ξ0(i). w(ξ0_n(i_n)) is the normalized Gaussian weight computed from distance, and αreg, αort, αn are the coefficients of the respective energy terms, all of which are constants.
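Formula (2) itself is not reproduced above, so the sketch below only illustrates one plausible quadratic form consistent with the terms just described: a regularization term keeping ξ(i) near ξ0(i), a term aligning each segment with the blended V and O directions, and a Gaussian-weighted term pulling points toward the closest points on neighbouring strands. The exact energy in the patent may differ; the function name optimize_strand and the aggregation of neighbour points into a single weighted target per point are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_strand(xi0, V_at, O_at, nbr_pts, nbr_w,
                    a_reg=1.0, a_ort=0.5, a_n=0.2):
    """Refine one strand by minimizing an assumed quadratic energy.

    xi0:     (N,3) initial strand points from step (3)
    V_at:    (N,3) direction-field vectors sampled at the initial points
    O_at:    (N,3) growth directions of the initial points
    nbr_pts: (N,3) weighted average of the closest points on neighbouring strands
    nbr_w:   (N,)  the corresponding normalized Gaussian weights
    """
    N = xi0.shape[0]
    seg_len = np.linalg.norm(np.diff(xi0, axis=0), axis=1, keepdims=True)
    target = V_at[:-1] + O_at[:-1]
    target /= np.linalg.norm(target, axis=1, keepdims=True) + 1e-8

    def energy(x):
        xi = x.reshape(N, 3)
        e_reg = np.sum((xi - xi0) ** 2)                       # stay near the initial strand
        seg = np.diff(xi, axis=0)
        e_ort = np.sum((seg - seg_len * target) ** 2)         # follow the blended V and O directions
        e_n = np.sum(nbr_w[:, None] * (xi - nbr_pts) ** 2)    # coherence with neighbouring strands
        return a_reg * e_reg + a_ort * e_ort + a_n * e_n

    res = minimize(energy, xi0.ravel(), method="L-BFGS-B")
    return res.x.reshape(N, 3)
```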
Contents not described in detail in this specification belong to the prior art known to those skilled in the art.
The above is only a preferred embodiment of the present invention. It should be pointed out that those of ordinary skill in the art can make several improvements and refinements without departing from the principle of the invention, and such improvements and refinements should also be regarded as falling within the protection scope of the invention.
Claims (3)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410740979.6A CN104376597B (en) | 2014-12-05 | 2014-12-05 | A kind of hair method for reconstructing based on multi-direction constraint |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410740979.6A CN104376597B (en) | 2014-12-05 | 2014-12-05 | A kind of hair method for reconstructing based on multi-direction constraint |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104376597A CN104376597A (en) | 2015-02-25 |
CN104376597B true CN104376597B (en) | 2017-03-29 |
Family
ID=52555486
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410740979.6A Expired - Fee Related CN104376597B (en) | 2014-12-05 | 2014-12-05 | A kind of hair method for reconstructing based on multi-direction constraint |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104376597B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105405163B (en) * | 2015-12-28 | 2017-12-15 | 北京航空航天大学 | A kind of static scalp electroacupuncture method true to nature based on multi-direction field |
CN107886516B (en) * | 2017-11-30 | 2020-05-15 | 厦门美图之家科技有限公司 | Method and computing equipment for computing hair trend in portrait |
CN108629834B (en) * | 2018-05-09 | 2020-04-28 | 华南理工大学 | Three-dimensional hair reconstruction method based on single picture |
CN109087377B (en) * | 2018-08-03 | 2019-11-12 | 北京字节跳动网络技术有限公司 | Method and apparatus for handling image |
CN109685876B (en) | 2018-12-21 | 2020-11-03 | 北京达佳互联信息技术有限公司 | Hair rendering method and device, electronic equipment and storage medium |
CN113763228B (en) * | 2020-06-01 | 2024-03-19 | 北京达佳互联信息技术有限公司 | Image processing method, device, electronic equipment and storage medium |
CN112465943B (en) * | 2020-12-04 | 2023-05-30 | 上海米哈游天命科技有限公司 | Color rendering method and device, electronic equipment and storage medium |
CN115661375B (en) * | 2022-12-27 | 2023-04-07 | 北京百度网讯科技有限公司 | Three-dimensional hair style generation method and device, electronic equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102419868A (en) * | 2010-09-28 | 2012-04-18 | 三星电子株式会社 | Device and method for modeling 3D (three-dimensional) hair based on 3D hair template |
CN102800129A (en) * | 2012-06-20 | 2012-11-28 | 浙江大学 | Hair modeling and portrait editing method based on single image |
CN103035030A (en) * | 2012-12-10 | 2013-04-10 | 西北大学 | Hair model modeling method |
CN103606186A (en) * | 2013-02-02 | 2014-02-26 | 浙江大学 | Virtual hair style modeling method of images and videos |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7098910B2 (en) * | 2003-05-14 | 2006-08-29 | Lena Petrovic | Hair rendering method and apparatus |
- 2014
- 2014-12-05: CN application CN201410740979.6A granted as patent CN104376597B; status: not active (Expired - Fee Related)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102419868A (en) * | 2010-09-28 | 2012-04-18 | 三星电子株式会社 | Device and method for modeling 3D (three-dimensional) hair based on 3D hair template |
CN102800129A (en) * | 2012-06-20 | 2012-11-28 | 浙江大学 | Hair modeling and portrait editing method based on single image |
CN103035030A (en) * | 2012-12-10 | 2013-04-10 | 西北大学 | Hair model modeling method |
CN103606186A (en) * | 2013-02-02 | 2014-02-26 | 浙江大学 | Virtual hair style modeling method of images and videos |
Non-Patent Citations (3)
Title |
---|
A Hybrid Image-CAD Based System for Modeling Realistic Hairstyles; Xuan Yu et al.; Proceedings of the 18th Meeting of the ACM SIGGRAPH Symposium on Interactive 3D; 2014-03-31; pp. 63-70 *
Dynamic Hair Capture using Spacetime Optimization; Zexiang Xu et al.; ACM Transactions on Graphics (TOG) - Proceedings of ACM SIGGRAPH; 2014-11-30; vol. 33, no. 6; Sections 1, 5.1-5.3 *
Dynamics-based hair styling method; Song Jinlian et al.; Journal of the Graduate University of Chinese Academy of Sciences; 2012-07-31; vol. 29, no. 4; pp. 543-548 *
Also Published As
Publication number | Publication date |
---|---|
CN104376597A (en) | 2015-02-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104376597B (en) | A kind of hair method for reconstructing based on multi-direction constraint | |
CN107316340B (en) | A fast face modeling method based on a single photo | |
Mehra et al. | Abstraction of man-made shapes | |
CN103606186B (en) | The virtual hair style modeling method of a kind of image and video | |
CN105405163B (en) | A kind of static scalp electroacupuncture method true to nature based on multi-direction field | |
US10147217B2 (en) | Audio-based caricature exaggeration | |
CN102339475B (en) | Fast Hair Modeling Method Based on Surface Mesh | |
CN103854306A (en) | High-reality dynamic expression modeling method | |
CN111462306B (en) | Three-dimensional hair parametric model method based on volume vector field sparse localization decomposition | |
Zhang et al. | Bas-relief generation and shape editing through gradient-based mesh deformation | |
CN111524226B (en) | Method for detecting key point and three-dimensional reconstruction of ironic portrait painting | |
CN106960465A (en) | A kind of single image hair method for reconstructing based on the field of direction and spiral lines matching | |
CN108242074B (en) | Three-dimensional exaggeration face generation method based on single irony portrait painting | |
CN103942090B (en) | Data-driven real-time hair motion simulation method | |
CN103093488B (en) | A virtual hairstyle interpolation and gradient animation generation method | |
US9710966B2 (en) | Styling of computer graphics hair through volumetric flow dynamics | |
Bao et al. | A survey of image-based techniques for hair modeling | |
CN105427364A (en) | Multi-point touch two-dimensional animation production method | |
CN107146273B (en) | Adaptive floating tangent matching method for image-based hair modeling | |
Yu et al. | A hybrid image-cad based system for modeling realistic hairstyles | |
Zhang et al. | Neural Modelling of Flower Bas‐relief from 2D Line Drawing | |
Bao et al. | Realistic hair modeling from a hybrid orientation field | |
CN105205856B (en) | 3D diseases on plant stalk modeling methods based on Freehandhand-drawing | |
Bao et al. | An image-based hair modeling and dynamic simulation method | |
Dinh et al. | LoopDraw: A loop-based autoregressive model for shape synthesis and editing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20170329 Termination date: 20201205 |