GB2531585A - Methods and systems for generating a three dimensional model of a subject - Google Patents

Methods and systems for generating a three dimensional model of a subject

Info

Publication number
GB2531585A
Authority
GB
United Kingdom
Prior art keywords
mesh
blocks
subdividing
subject
subjects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1418867.6A
Other versions
GB2531585B (en)
GB201418867D0 (en)
GB2531585B8 (en)
GB2531585A8 (en)
Inventor
Stenger Bjorn
Perbet Frank
Pham Minh-Tri
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Europe Ltd
Original Assignee
Toshiba Research Europe Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toshiba Research Europe Ltd filed Critical Toshiba Research Europe Ltd
Priority to GB1418867.6A priority Critical patent/GB2531585B8/en
Publication of GB201418867D0 publication Critical patent/GB201418867D0/en
Priority to US14/921,908 priority patent/US9905047B2/en
Priority to JP2015209075A priority patent/JP6290153B2/en
Publication of GB2531585A publication Critical patent/GB2531585A/en
Publication of GB2531585B publication Critical patent/GB2531585B/en
Application granted granted Critical
Publication of GB2531585B8 publication Critical patent/GB2531585B8/en
Publication of GB2531585A8 publication Critical patent/GB2531585A8/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T17/205 Re-meshing
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/36 Level of detail

Abstract

Point cloud or depth image data for a subject is received or captured along with user inputs (S202) indicating a plurality of cubes. The cubes are grouped into blocks to form a representation of a class of (e.g. human) subjects. A first mesh is generated (S204) comprising a plurality of quadrilaterals by subdividing patches, preferably regularly, corresponding to faces of blocks from the plurality. The first mesh is fitted (S206) to the point cloud data to generate a fitted mesh. Further meshes are generated iteratively, each comprising a plurality of quadrilaterals, by subdividing patches of the fitted mesh from the previous iteration and fitting the further mesh to the point cloud data. The iteratively generated fitted mesh is output as a three dimensional subject model, optionally being displayed. User inputs may further comprise indications of the location of joints in a subject skeleton (e.g. indications of a plurality of rings on the blocks), a representative skeleton being generated from the indications. Such a generated model for each of a group of subjects may provide a statistical model for the entire group. An independent claim directed towards only the step (S206) of fitting a first mesh to a depth image is included.

Description

Methods and Systems for Generating a Three Dimensional Model of a Subject
FIELD
Embodiments described herein relate generally to the generation of three dimensional representations of subjects, such as humans.
BACKGROUND
The estimation of human body shape has a wide variety of applications, from medical to commercial domains. In medicine, for example, it may be possible to visualise future changes to a 3D body to encourage lifestyle change. In the commercial domain, accurate capture of nude body shape would allow virtual fitting: visualisation of the customer in different clothing.
Obtaining a desired shape for a subject such as a human is usually the result of fitting a mesh to data, such as images or 3D scans. Fitting algorithms can often achieve efficiency benefits from a coarse-to-fine representation.
BRIEF DESCRIPTION OF THE DRAWINGS
In the following, embodiments are described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 shows a model generating system according to an embodiment;
Figure 2 shows a method of generating a representation of a subject according to an embodiment;
Figure 3 shows user input of a class topology according to an embodiment;
Figure 4a shows the generation of external patches in an embodiment;
Figure 4b shows the generation of internal patches in an embodiment;
Figure 4c shows parameterization of patches in an embodiment;
Figure 5a shows examples of a regularity rule in an embodiment;
Figure 5b shows examples of a connectedness rule in an embodiment;
Figures 6a to 6c show examples of the effects of applying a subdivision rule in an embodiment;
Figure 7 shows the subdivision domains for a human subject in an embodiment;
Figures 8a to 8c show three different subdivisions of a registered human shape corresponding to different subdivision vectors;
Figure 9 shows an example of a location of a voxel in a voxel space in an embodiment;
Figure 10a shows examples of definitions of rings in an embodiment;
Figure 10b shows the definition of a continuous ring in an embodiment;
Figure 11a shows a skeleton generated in an embodiment;
Figure 11b shows the continuous rings input by a user to generate the skeleton shown in Figure 11a;
Figure 12 shows definitions of influence areas in the parameterized domain in an embodiment;
Figures 13a to 13c show landmarks computed at different levels of subdivision in an embodiment;
Figures 14a to 14e show different subdivisions of a Catmull-Clark subdivision scheme;
Figure 15a shows a high resolution mesh in an embodiment;
Figure 15b shows resampling of the high resolution mesh to a low resolution mesh in an embodiment;
Figures 16a to 16e show a hand model according to an embodiment;
Figure 17 shows a system for generating a three dimensional representation of a subject from a depth image; and
Figure 18 shows a method of generating a three dimensional representation of a subject according to an embodiment.
DETAILED DESCRIPTION
In an embodiment a method of generating a three dimensional model of a subject is disclosed. The method comprises receiving point cloud data for a subject; receiving user inputs indicating a plurality of cubes, and a grouping of the cubes into a plurality of blocks to form a representation of a class of subjects; generating a first mesh comprising a plurality of quadrilaterals by subdividing patches corresponding to faces of blocks of the plurality of blocks; fitting the first mesh to the point cloud data to generate a fitted mesh; iteratively generating further meshes, each comprising a plurality of quadrilaterals by subdividing patches of the fitted mesh from the previous iteration and fitting the further mesh to the point cloud data; and outputting as the three dimensional model of the subject the iteratively generated fitted mesh.
In an embodiment subdividing each mesh comprises subdividing patches corresponding to faces of the blocks at regular intervals along each edge of the face.
In an embodiment subdividing each mesh comprises subdividing a first patch corresponding to a face of a first block along a first edge with a first set of subdivisions and subdividing a second patch corresponding to a face of a second block adjacent to the first edge with a second set of subdivisions such that each subdivision of the first set connects to a corresponding subdivision of the second set at the first edge.
In an embodiment the class of subjects is human subjects.
In an embodiment the user inputs further comprise indications of locations of joints in a skeleton of the subject, the method further comprising generating a representative skeleton from the indications.
In an embodiment the indications of locations of joints comprise indications of a plurality of rings on the blocks indicating the locations of the joints.
In an embodiment the grouping of the cubes into a plurality of blocks comprises an indication of symmetry as correspondences between cubes.
In an embodiment the method further comprises capturing the point cloud data for the subject.
In an embodiment a method of generating a statistical model for the three dimensional shape of a class of subjects from three dimensional point cloud data for a plurality of test subjects within the class of subjects is disclosed. The method comprises for each test subject of the plurality of test subjects, iteratively generating models of increasing resolution by: fitting a first mesh to the point cloud data, the first mesh comprising a plurality of quadrilaterals formed by subdividing patches corresponding to faces of blocks of a plurality of blocks, each block of the plurality of blocks formed from at least one cube, to obtain a fitted first mesh; generating a second mesh by subdividing the fitted first mesh; and repeating the fitting and generating steps using the second mesh in place of the first mesh, and outputting the result of the iteration as a statistical model for the class of subjects.
In an embodiment subdividing the first fitted mesh comprises subdividing patches corresponding to faces of the blocks at regular intervals along each edge of the face.
In an embodiment subdividing the first fitted mesh comprises subdividing a first patch corresponding to a face of a first block along a first edge with a first set of subdivisions and subdividing a second patch corresponding to a face of a second block adjacent to the first edge with a second set of subdivisions such that each subdivision of the first set connects to a corresponding subdivision of the second set at the first edge.
In an embodiment the class of subjects is human subjects.
In an embodiment the method further comprises generating a representative skeleton for each of the test subjects.
In an embodiment the method further comprises enforcing at least one symmetry rule defined by correspondences between blocks of the plurality of blocks.
In an embodiment the method further comprises capturing the three dimensional point cloud data for each of the test subjects.
In an embodiment a method of generating a three dimensional representation of a subject from a depth image is disclosed. The method comprises fitting a first mesh to the depth image, the first mesh comprising a plurality of quadrilaterals formed by subdividing patches corresponding to faces of blocks of a plurality of blocks, each block of the plurality of blocks formed from at least one cube.
In an embodiment the method further comprises capturing the depth image of the subject.
In an embodiment a system for generating a three dimensional model of a subject is disclosed. The system comprises a user interface configured to receive user inputs indicating a plurality of cubes, and a grouping of the cubes into a plurality of blocks to form a representation of a class of subjects; a processor configured to generate a first mesh comprising a plurality of quadrilaterals by subdividing patches corresponding to faces of blocks of the plurality of blocks; fit the first mesh to point cloud data for a subject from the class of subjects to generate a fitted mesh; and iteratively generate further meshes, each comprising a plurality of quadrilaterals by subdividing patches of the fitted mesh from the previous iteration and fitting the further mesh to the point cloud data, the system being configured to output as the three dimensional model of the subject the iteratively generated fitted mesh.
One embodiment provides a computer program product comprising computer executable instructions which, when executed by a processor, cause the processor to perform a method as set out above. The computer program product may be embodied in a carrier medium, which may be a storage medium or a signal medium. A storage medium may include optical storage means, or magnetic storage means, or electronic storage means.
The described embodiments can be incorporated into a specific hardware device, a general purpose device configured by suitable software, or a combination of both.
Aspects can be embodied in a software product, either as a complete software implementation, or as an add-on component for modification or enhancement of existing software (such as a plug in). Such a software product could be embodied in a carrier medium, such as a storage medium (e.g. an optical disk or a mass storage memory such as a FLASH memory) or a signal medium (such as a download).
Specific hardware devices suitable for the embodiment could include an application specific device such as an ASIC, an FPGA or a DSP, or other dedicated functional hardware means. The reader will understand that none of the foregoing discussion of embodiments in software or hardware limits future implementation of the invention on yet-to-be-discovered or defined means of execution.
Embodiments described herein relate to the generation and use of representations which are referred to in the following as CubeShapes for subjects such as humans.
Figure 1 shows a model generating system according to an embodiment. The model generating system 100 comprises an input module 110, a memory 120, a processing module 130 and a point cloud capture device 140. The model generating system 100 may further comprise an output device, for example a network connection to output the models which it generates. Alternatively, or additionally, the model generating system 100 may comprise a further processing module to process models which it generates and a display or other output device to output the results of the further processing.
The input module 110 allows a user to input a class topology which defines the approximate shape of a subject. The input module 110 may allow a user to input indications of a class topology using a mouse. The memory 120 comprises storage for a class topology 122, a mesh topology 124, a mesh instance 126 and point cloud data 128. The class topology 122 roughly describes the shape of a class of subjects such as humans as a set of cubes grouped into blocks. The mesh topology 124 describes a mesh or statistical model of the subjects at a given resolution. The mesh instance 126 is a mesh topology associated with 3D points and normals to represent the three dimensional shape of a subject. The point cloud data 128 is data indicating the three dimensional shape captured from a plurality of subjects.
The processing module 130 comprises a subdivision module 132 and a fitting module 134. The subdivision module 132 is configured to subdivide the class topology 122 to compute a mesh topology 124 and to subdivide a mesh topology 124 to compute a further mesh topology of a higher resolution. The fitting module 134 is configured to fit the mesh topology 124 to the point cloud data 128 to compute a mesh instance 126. This is achieved using, for example, non-rigid ICP (Iterative Closest Point).
The point cloud capture device 140 is for example a laser scanner configured to capture point cloud data from subjects.
Figure 2 shows a method of generating a representation of a subject according to an embodiment. In step S202, a user input describing a class topology is received. The user describes the class-topology, that is, the topology of the class of objects to be represented. In step S204, a mesh topology is generated by subdividing the class topology. The class-topology can then be subdivided into a mesh-topology given some subdivision parameters. In step S206, a mesh instance is generated by associating three dimensional points and normals with the mesh topology.
A class-topology is formally defined as a graph of blocks G_block(V_block, E_block). A block b ∈ V_block is an axis-aligned discrete 3D box, which can be defined as (origin, size) ∈ ℕ³ × ℕ₊³, where origin and origin + size are the two diagonally opposite corners of the box. Two blocks b_i, b_j are connected, that is (b_i, b_j) ∈ E_block, if and only if they are adjacent (block collisions are not allowed). The discrete 3D space ℕ³ in which the boxes exist is referred to as the cube space.
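To make the definitions concrete, the following minimal sketch (illustrative names and layout; this is not code from the patent) represents blocks as (origin, size) boxes in cube space and derives the edges E_block from the adjacency test described above, where a shared 2D face means adjacency and overlapping interiors are a forbidden collision:

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Block:
    name: str
    origin: tuple  # (x, y, z) corner in cube space
    size: tuple    # (sx, sy, sz), strictly positive

def relation(a: Block, b: Block) -> str:
    """Classify two blocks as 'collision', 'adjacent' (shared face) or 'apart'."""
    overlaps, touches = 0, 0
    for d in range(3):
        lo = max(a.origin[d], b.origin[d])
        hi = min(a.origin[d] + a.size[d], b.origin[d] + b.size[d])
        if hi > lo:
            overlaps += 1  # intervals overlap with positive length
        elif hi == lo:
            touches += 1   # intervals just touch
    if overlaps == 3:
        return "collision"              # not allowed in a class-topology
    if overlaps == 2 and touches == 1:
        return "adjacent"               # shared 2D face: an edge of E_block
    return "apart"

# Hypothetical two-block layout: a 2x3x1 torso with a 1x1x1 neck on top.
blocks = [Block("torso", (0, 2, 0), (2, 3, 1)), Block("neck", (0, 5, 0), (1, 1, 1))]
E_block = [(a.name, b.name) for a, b in combinations(blocks, 2)
           if relation(a, b) == "adjacent"]    # [('torso', 'neck')]
```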
In practice, an editor allows quick creation of a class-topology in two steps using simple mouse interactions.
Figure 3 shows user input of a class topology according to an embodiment. First, as shown in Figure 3a, a user adds new cubes 302 to approximate the shape of the class of objects. Defining the geometric topology correctly is important: approximating the proportions prevents the quads of the mesh from becoming too stretched. Selecting multiple cubes allows several cubes to be added with one click, allowing for faster modelling. Figure 3b shows the completed set of cubes. Then the user groups the cubes into blocks. This is shown in Figure 3c. Blocks are named, preferably using meaningful semantic information. This may be, for example, "arm" or "leg" in the case of a human. As shown in Figure 3c, the cubes are grouped into the following blocks: head 352, neck 354, upper right arm 356, lower right arm 358, upper left arm 360, lower left arm 362, torso 362, right leg 364, right foot 366, left leg 368 and left foot 370.
It is noted that the procedure described above is the only user interaction required and the remaining steps are computed automatically from the given graph Gblock.
The two main operations that are applied to the arrangement of cubes are: subdivision to change the mesh resolution and deformation to fit the surface to a point cloud. It is noted that no operation changes the topology of the shape. Here, the topology is defined so that the human shape is seen as a smooth manifold with boundary.
The use of a topology constructed from cubes arranged to approximate the shape of the subject, for example a human, has two benefits. Firstly, sampling: using a simple shape such as a single cube to represent a human shape would lead to a mesh with highly over-stretched quads. For instance, the arms would need to be pulled out of the original cube. Such stretched quads are not desirable in many contexts and are usually removed using re-meshing techniques. Note that there is some flexibility in the way cubes can be subdivided but, as a rule of thumb, the closer the cube arrangement is to the shape being modelled, the more regular the vertex sampling will be and the more "square" the quads will be. Secondly, there is a semantic reason: blocks can be identified with relevant names, like "torso" for instance. As will be seen below, this is extremely handy in several situations. Using a simple cube to describe a human shape would not allow separate limbs to be accessed individually, which would not be very helpful.
Once the 3D blocks have been defined, surface patches are generated from the blocks.
Figure 4a shows the generation of external patches in an embodiment. Once the 3D blocks have been defined, corresponding 2D patches are automatically generated. A patch p_i is one face of a block; it is parameterized over [0,1]². As shown in Figure 4a, four patches 412, 414, 416 and 418 on the surface of the torso are visible.
A patch can be external or internal. Figure 4b shows internal patches in an embodiment. As shown in Figure 4b an internal patch 422 joins the upper right arm to the torso, an internal patch 424 joins the torso to the neck, an internal patch 426 joins the torso to the upper left arm, an internal patch 428 joins the torso to the right leg and an internal patch 430 joins the torso to the left leg. Only external patches are used for describing the surface of a subject. The patches can be represented by a graph G_patch(V_patch, E_patch) where two patches are connected, that is (p_i, p_j) ∈ E_patch, if and only if they are adjacent.
Figure 4c shows the parameterization of patches. Locations on the patches are parameterized as (u, v) ∈ [0,1]², where u maps to red and v maps to green. The elements of E_patch can be interpreted geometrically as the edges between adjacent patches. The patches are named automatically from the block names by adding the relevant postfix: right/left, bottom/top, back/front (respectively mapping to the positive/negative directions x, y and z). The names can be manually changed to something more meaningful. Meaningful naming of the blocks and patches will ease programmatic access to the mesh.
In order to fully define a class-topology, some rules need to be defined about how the blocks/patches will be subdivided into a mesh-topology. To understand these rules, it is pointed out that by design we expect a watertight mesh made of quads only. In order to keep the mesh well connected and avoid so-called "T-junctions", we need to make sure that the patches are subdivided in a consistent manner. More precisely, a subdivision must obey two rules: 1. Regularity: a patch can only be subdivided using a regular grid. That is, all the squares of a patch must be subdivided equally along a given direction.
2. Connectedness: two adjacent patches must be subdivided so that their squares connect one-to-one across the shared edge.
Figure 5a shows examples of the regularity rule. By design, a subdivided patch is accessed as an image would be, using a simple regular grid with "pixels" of equal sizes. The examples on the left hand side are divided equally in the horizontal and vertical directions. Therefore the examples on the left hand side satisfy the regularity rule. The top right example is not divided equally in the horizontal direction. The bottom right example is not divided equally in the vertical direction. Therefore the examples shown on the right hand side do not satisfy the regularity rule.
Figure 5b shows examples of the connectedness rule. In the first example, the divisions of the upper patch are all connected to divisions on the lower patch. Therefore the first example satisfies the connectedness rule. In the second example there are divisions on the upper patch which do not continue onto the lower patch. Therefore the second example does not satisfy the connectedness rule. In the third example there are divisions on the upper patch which do not connect with divisions on the lower patch and also divisions on the lower patch which do not connect to divisions on the upper patch. Therefore, the third example does not satisfy the connectedness rule.
The rules tend to propagate subdivisions across the surface and form groups of patches subdivided similarly. Those groups can be described as the equivalence classes of binary relations between blocks, ~_d, where d ∈ {x, y, z} is a dimension. Using the binary relation formalism, the two rules above become: 1. Regularity: for any patch p spanning the two dimensions d_i, d_j: p ~_d_i p and p ~_d_j p (reflexivity). 2. Connectedness: for two adjacent patches p_i, p_j: p_i ~_d p_j and p_j ~_d p_i, where d is the direction of the edge p_i ∩ p_j (symmetry). It can be shown that if p_i ~_d p_j and p_j ~_d p_k then p_i ~_d p_k (transitivity).
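Because the relations ~_d are generated by merging patches across shared edges, the equivalence classes can be computed with a standard union-find (disjoint-set) structure. The sketch below is one possible implementation, under our own assumptions about the data layout; all names are illustrative:

```python
def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def subdivision_domains(patches, edges):
    """patches: {patch_id: (d1, d2)}, the two dimensions each patch spans;
    edges: [(p_i, p_j, d)], adjacent patch pairs with shared-edge direction d.
    Returns the equivalence classes of (patch, dimension) pairs."""
    keys = [(p, d) for p, dims in patches.items() for d in dims]
    parent = {k: k for k in keys}
    for pi, pj, d in edges:
        # connectedness: p_i ~_d p_j, so merge their classes along d
        parent[find(parent, (pi, d))] = find(parent, (pj, d))
    classes = {}
    for k in keys:
        classes.setdefault(find(parent, k), []).append(k)
    return list(classes.values())  # each class is one subdivision domain
```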
Figures 6a to 6c show examples of the effects of applying the subdivision rule.
Figure 6a shows an example in which subdivisions are made parallel to the plane of the torso. As shown in Figure 6a, the patches on the sides of the block 610 forming the head are divided in the vertical direction by a plurality of divisions 612. The patches on the sides of the block 614 forming the neck are also divided by a plurality of divisions 616 which match the divisions 612 on the patches on the sides of the block 610 forming the head. As shown in Figure 6a, the patches on the sides of the other blocks are also divided with corresponding divisions.
Figure 6b shows an example in which the blocks forming the arms are subdivided. As shown in Figure 6b, the patches on the top and the front of the block forming the right arm are divided by a plurality of divisions 622. Here it is noted that the divisions on patch forming the front of the right arm continue on the patch forming the top of the right arm. The patches on the top and front of the block forming the left arm 624 are divided by a corresponding set of divisions 626.
Figure 6c shows a subdivision of the subdivision domain described by (i) its dimension, x, and (ii) a list of blocks: right foot 632, left foot 634, right leg 636, left leg 638, torso 640, neck 614, head 610. Similarly to blocks and patches, giving a semantically meaningful name to each subdivision domain is encouraged. For instance, the above example could be named torso right left. Some subdivision domains can be merged for symmetry reasons, for example the two legs of a human shape: it makes sense to subdivide them equally along the bottom/up direction.
These binary relations naturally form equivalence classes, where each of the classes is called a subdivision domain.
Note that given a dimension d, it can be proven that all the patches of a given block either (i) belong to the same equivalence class or (ii) are perpendicular to d and are not affected by subdivision in this dimension. This property makes it easier to describe a given subdivision domain: it is simply defined by a dimension d and by all the blocks b_i whose non-perpendicular patches are in the same equivalence class.
Figure 7 shows the subdivision domains for a human subject. As shown in Figure 7, in the case of a human shape, nine subdivision domains can be defined. These subdivision domains are: arm right left 710, hand right left 720, torso right left 730, head bottom top 740, foot bottom top 750, leg bottom top 760, neck bottom top 770, torso bottom top 780, all back front 790. This means that subdividing a class-topology into a mesh-topology is a function which takes nine input parameters (nine degrees of freedom). The vector containing these parameters is called the subdivision vector. It is noted that Figure 6a shows the all back front 790 subdomain division, Figure 6b shows the arm right left 710 subdomain division and Figure 6c shows the torso right left 730 subdomain division.
Figure 8 shows three different subdivisions of a registered human shape corresponding to different subdivision vectors.
Figure 8a shows the subdivision corresponding to the subdivision vector [5; 2; 3; 4; 2; 7; 3; 3; 3].
Figure 8b shows the subdivision corresponding to the subdivision vector [10; 4; 5; 9; 3; 15; 5; 5; 5].
Figure 8c shows the subdivision corresponding to the subdivision vector [20; 7; 10; 17; 6; 29; 8; 9; 8].
Note that the numbers in the subdivision vectors have been chosen to minimize the stretch of each quad and correspond to the number of divisions in each of the subdivision domains in the order listed above in the description of Figure 7. The three dimensional points are the result of a non-rigid registration algorithm.
Formally, creating a mesh-topology is straightforward: each patch is divided using the two relevant numbers in the subdivision vector. The result of the operation can be seen as a graph of quads G_quad(V_quad, E_edge) and a graph of vertices G_vertex(V_vertex, E_edge).
Note that the set of edges E_edge is common to G_quad and G_vertex.
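As an illustration of how one patch contributes vertices and quads to the mesh-topology, consider the following sketch (assumed names, not the patent's API). In a full implementation the vertices on a shared patch edge would additionally be merged with those of the adjacent patch, which the connectedness rule guarantees line up one-to-one:

```python
def patch_quads(nu: int, nv: int):
    """Vertex ids and quads of a single patch subdivided into an nu-by-nv grid,
    where nu and nv are the two relevant entries of the subdivision vector."""
    vid = {(i, j): i * (nv + 1) + j
           for i in range(nu + 1) for j in range(nv + 1)}
    quads = [(vid[i, j], vid[i + 1, j], vid[i + 1, j + 1], vid[i, j + 1])
             for i in range(nu) for j in range(nv)]
    return vid, quads

_, quads = patch_quads(3, 2)  # 12 vertices, 6 quads for a 3-by-2 patch
```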
As defined earlier, the discrete 3D space ℕ³ in which the cubes exist before subdivision is referred to as the cube space. The subdivision can also be described as acting on a volume rather than on a surface: the cubes are subdivided into voxels. It is convenient to refer to a voxel using two coordinates (c, e) ∈ ℕ³ × ℕ³, where c belongs to the cube space and refers to the cube that contains the voxel, and e is the coordinate of the voxel within the (subdivided) cube. In this context, the space ℕ³ × ℕ³ is referred to as the voxel space.
Figure 9 shows an example of a location of a voxel in the voxel space, (c, e) ∈ ℕ³ × ℕ³, with c = (2, 0) and e = (1, 2).
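A small helper (our own construction, not from the patent) makes the voxel addressing concrete; it assumes each cube is subdivided n[d] times along dimension d, and is applied here to the 2D example of Figure 9:

```python
def voxel_center(c, e, n):
    """Centre of voxel (c, e) in continuous cube-space coordinates, where
    n[d] is the number of subdivisions of a cube along dimension d."""
    return tuple(c[d] + (e[d] + 0.5) / n[d] for d in range(len(c)))

print(voxel_center((2, 0), (1, 2), (3, 3)))  # (2.5, 0.8333...) assuming a 3x3 subdivision
```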
Once a mesh-topology has been created, in order to obtain a mesh-instance one needs to assign 3D points to each vertex in V_vertex. In summary, the class-topology describes a class of objects (e.g. human body shapes), the mesh-topology describes a mesh at a specified resolution (but without any knowledge of a particular individual's shape), and the mesh-instance is a 3D mesh of a particular individual at a specified resolution.
Embodiments of the CubeShape provide a simple mesh representation which facilitates certain operations.
Given a CubeShape, it is straightforward to compute rings (also called loops) over the surface (like a bracelet, for instance). The mesh being defined by quads only, rings can be created by specifying a start position and a direction in the voxel space. The ring is then automatically defined. Some care must be taken when extraordinary points (that is, points whose number of neighbours is not 4) are along the path.
Figure 10a shows examples of an unambiguous ring and an ambiguous ring. The rings are defined on a plurality of cubes 1000 in voxel space. The first ring is created by specifying a start position 1010 and a direction shown by the arrow 1012. The path 1014 of the first ring is unambiguous as at each point there are 4 neighbours. A second ring is created by specifying a start position 1020 and a direction shown by the arrow 1022. However, because the second ring passes through a point where there are 5 neighbours, there is an ambiguity as to whether the ring should follow a first path 1024 or a second path 1026. In this case, going straight is ambiguous and the user must provide a further specification. Note that the previously defined rings are called discrete rings as they are computed on the mesh-topology. They cannot be used to define a ring at mid-height of the torso, for example, because depending on the subdivision vector, vertices might not exist at precisely that height.
Figure 10b shows the definition of continuous rings which can be defined anywhere on the mesh-topology. In order to define a continuous ring 1030, two consecutive discrete rings 1032 and 1034 are defined and an interpolation weight w between them is given. Continuous rings are important because they allow defining rings independently of a mesh-topology. Only the class-topology is needed and the height can be specified in the parameterized space of a patch.
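A minimal sketch of this interpolation, assuming the two consecutive discrete rings have been resampled to corresponding vertices (the names are ours, not the patent's):

```python
import numpy as np

def continuous_ring(ring_a, ring_b, w):
    """Blend two consecutive discrete rings, each a (k, 3) array of
    corresponding vertex positions, with interpolation weight w in [0, 1]."""
    ring_a = np.asarray(ring_a, dtype=float)
    ring_b = np.asarray(ring_b, dtype=float)
    return (1.0 - w) * ring_a + w * ring_b
```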
Continuous rings can be used to generate a skeleton of the shape.
Figure 11a shows a skeleton generated in an embodiment. The skeleton 1100 can be used to deform the shape by rotating bones in the skeleton around their respective joints. The topology is given by the user. The skeleton comprises a plurality of joints 1101, 1102, 1103 and connections between the joints. The method used to generate the skeleton is a heuristic to approximate an anatomical skeleton. It is nevertheless useful to perform simple tasks like animating the human shape in a convincing manner. The skeleton generation is based on continuous rings and the mesh-topology is not required.
Figure 11b shows the continuous rings input by a user. Each ring input by the user corresponds to a joint in the skeleton. As shown in Figures 11a and 11b, a ring 1151 around the right foot of the subject corresponds to a joint 1101 in the foot of the skeleton. A ring 1152 around the right ankle of the subject corresponds to a right ankle joint on the skeleton. A ring 1153 around the right knee of the subject corresponds to the right knee joint on the skeleton.
It is noted that the rings here are manually chosen and do not correspond to the true anatomic skeleton. The skeleton generation code is not shown here, but it would typically create the elbow joint at the average of the points in the elbow ring. The joint orientations are computed using a combination of geometric conventions. For instance, the x-axis is aligned with the previous joint and, in the case of the elbow, the y-axis is as close as possible to the general right-left direction of the human shape.
An advantage is that the skeleton generation algorithm is independent of the subdivision. Generating a skeleton is fast (tens of milliseconds on a modern machine) because many values can be pre-computed. Given a mesh-topology, all the continuous rings can be computed and cached as they do not require any 3D knowledge of a mesh-instance. Once cached, and given a new mesh-instance, the continuous rings can be re-used in order to produce a new skeleton.
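A sketch of this caching idea under our own assumed data layout: a continuous ring is cached as two lists of vertex indices plus a weight, so a joint position for a new mesh-instance reduces to a weighted centroid:

```python
import numpy as np

def joint_position(vertices, ring_a_idx, ring_b_idx, w):
    """vertices: (n, 3) points of a mesh-instance; ring_a_idx, ring_b_idx:
    cached vertex indices of two consecutive discrete rings; w: weight."""
    ring = (1.0 - w) * vertices[ring_a_idx] + w * vertices[ring_b_idx]
    return ring.mean(axis=0)  # e.g. the elbow joint = mean of the elbow ring
```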
Once a skeleton has been generated, it can be used for Linear Blend Skinning (also known as Skeletal Subspace Deformation) using the standard equation:

v′_i = Σ_j w_ij (M′_j)⁻¹ M_j v_i

where v_i is the vertex position in the neutral pose (also called the dress pose), v′_i is the skinned vertex position, M_j is the object-to-joint dress-pose transformation of joint j and M′_j is the object-to-joint skinned transformation of joint j.

Note that this equation can be inverted in order to unskin a mesh:

v_i = ( Σ_j w_ij (M′_j)⁻¹ M_j )⁻¹ v′_i
This is useful if, given a CubeShape in an unknown pose, one wants to retrieve the shape in the neutral pose.
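The following numpy sketch implements both formulas under the object-to-joint convention stated above, with 4x4 homogeneous transforms; the names and array layout are our assumptions, not the patent's code:

```python
import numpy as np

def _blend(M, M_prime):
    """T_j = (M'_j)^-1 M_j stacked as a (n_joints, 4, 4) array."""
    return np.stack([np.linalg.inv(Mp) @ Md for Md, Mp in zip(M, M_prime)])

def skin(v, w, M, M_prime):
    """v: (n, 3) dress-pose vertices; w: (n, n_joints) skinning weights."""
    T = _blend(M, M_prime)
    vh = np.concatenate([v, np.ones((len(v), 1))], axis=1)  # homogeneous coords
    return np.einsum("ij,jkl,il->ik", w, T, vh)[:, :3]      # sum_j w_ij T_j v_i

def unskin(v_skinned, w, M, M_prime):
    """Invert the per-vertex blended transform to recover the dress pose."""
    T = _blend(M, M_prime)
    vh = np.concatenate([v_skinned, np.ones((len(v_skinned), 1))], axis=1)
    out = np.empty((len(v_skinned), 3))
    for i in range(len(v_skinned)):
        A = np.einsum("j,jkl->kl", w[i], T)    # sum_j w_ij T_j, a 4x4 matrix
        out[i] = np.linalg.solve(A, vh[i])[:3]
    return out
```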
Skinning requires computing the skinning weights w_ij. In the Computer Graphics industry this is typically done manually by an artist. An alternative is to learn the weights from a database of people in different poses. While the algorithm infers automatically how the bones influence the surface, it can be beneficial to limit the influence of a bone over the mesh.
Figure 12 shows definitions of influence areas in the parameterized domain in an embodiment. Using the parameterization of the patch, areas over the surface can be defined and used to guide the skinning-weight learning process. In this case the left shoulder bone is limited by the left upper-arm area 1210 combined with the left upper-torso area 1220.
Exploiting naturally occurring symmetry can be useful in many ways. In the case of the human body shape it can be used to generate the right-left symmetry of a given shape.
A simple reason for this is to double the size of a database of registered meshes at no additional cost. Given its regular structure, it is easy to describe symmetry at the graph-topology level: blocks are simply put into correspondence. For instance, the right-left symmetry of a human shape can be described, in the case of the arms, by putting the left arm and the right arm block into correspondence.
Symmetry can be similarly applied to the continuous rings and the influence areas. Since the symmetry is defined at the graph-topology level, it can be evaluated for any mesh resolution. Generally, any tool applied to the CubeShape should be defined as acting on the graph-topology, ensuring that the tool is independent of the mesh resolution.
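As one possible reading of this block-level symmetry, the sketch below mirrors a mesh-instance patch by patch; the reflection plane (x = 0) and the grid-axis flip used to restore orientation are assumed conventions, not details from the patent:

```python
import numpy as np

def mirror_instance(patch_grids, correspondence):
    """patch_grids: {patch_name: (nu+1, nv+1, 3) array of vertex positions};
    correspondence: right<->left patch mapping derived from the block
    correspondences (patches on the symmetry plane map to themselves)."""
    out = {}
    for name in patch_grids:
        grid = patch_grids[correspondence.get(name, name)].copy()
        grid[..., 0] *= -1.0    # reflect positions across the x = 0 plane
        out[name] = grid[::-1]  # flip one grid axis to restore orientation
    return out
```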
Landmarks can be defined at the graph-topology level by providing (i) the patch and (ii) the (u, v) parameterization within this patch. For instance, the landmark of the left knee crease is defined as belonging to the patch left leg back with the parameterization (0.5, 0.5).
Figures 13a to 13c show landmarks computed at different levels of subdivision. Figure 13a shows a low resolution mesh with a landmark 1310 on the head and a landmark 1320 on the torso. Figure 13b shows a medium resolution mesh calculated by subdividing the mesh shown in Figure 13a. The two landmarks 1310 and 1320 are computed for the medium resolution. Figure 13c shows a high resolution mesh calculated by subdividing the mesh shown in Figure 13b. Again, the landmarks 1310 and 1320 are computed for the high resolution. The landmarks only have to be defined once at the graph-topology level and can then be computed for any resolution.
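A sketch of how such a landmark could be evaluated at any resolution, by bilinear interpolation over the patch's vertex grid (the grid representation is our assumption):

```python
import numpy as np

def landmark_position(patch_grid, u, v):
    """patch_grid: (nu+1, nv+1, 3) vertices of one patch; u, v in [0, 1]."""
    nu, nv = patch_grid.shape[0] - 1, patch_grid.shape[1] - 1
    x, y = u * nu, v * nv
    i, j = min(int(x), nu - 1), min(int(y), nv - 1)  # containing cell
    a, b = x - i, y - j                              # local coordinates
    return ((1 - a) * (1 - b) * patch_grid[i, j]
            + a * (1 - b) * patch_grid[i + 1, j]
            + (1 - a) * b * patch_grid[i, j + 1]
            + a * b * patch_grid[i + 1, j + 1])
```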
Since the CubeShape is made of quads only, the surface can be described using Catmull-Clark subdivision schemes.
Figure 14 shows different subdivisions of the Catmull-Clark subdivision scheme. Figure 14a shows a cube. Figures 14b to 14e show successive subdivisions of the surface to approximate a sphere.
Embodiments of the CubeShape can be seen as a simple way to create the initial coarsest shape (using cubes). In addition to traditional subdivision surface representation, it also gives a simple way to parameterize the shape.
Every patch of a high-resolution CubeShape can be seen as a 2D image (each vertex being a 3D position). Given a new subdivision vector, generating a lower-resolution mesh is therefore equivalent to subsampling these images.
Figure 15a shows a high resolution mesh and Figure 15b shows resampling of the high resolution mesh to a low resolution mesh.
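A minimal sketch of this subsampling view, assuming each patch is stored as a (nu+1, nv+1, 3) array of 3D positions. Nearest-grid sampling keeps the boundary rows and columns, so adjacent patches remain connected when their subdivision domains are resampled consistently:

```python
import numpy as np

def resample_patch(patch_grid, new_nu, new_nv):
    """Subsample a patch 'image' of 3D positions to a coarser grid."""
    nu, nv = patch_grid.shape[0] - 1, patch_grid.shape[1] - 1
    ii = np.round(np.linspace(0, nu, new_nu + 1)).astype(int)
    jj = np.round(np.linspace(0, nv, new_nv + 1)).astype(int)
    return patch_grid[np.ix_(ii, jj)]  # shape (new_nu+1, new_nv+1, 3)
```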
Because of the constraints imposed by the CubeShape, it can be difficult to model certain parts of the shape. For instance, this is the case with human hands. From a topological point of view, hands can be represented by the CubeShape described so far, but one might require a better representation to avoid over-stretching the quads and to have better access to individual fingers.
A solution is to build a separate model representing one hand only and then connect it to the body, provided that the subdivision of the connecting patch matches exactly.
Figures 16a to 16e show a hand model according to an embodiment. Figure 16a shows the arrangement of cubes 1610 which is input by a user. Figure 16b shows the blocks 1620, 1621, 1622, 1623, 1624, & 1625 defined by the user.
Figure 16c shows the patches which are computed from the blocks defined in Figure 16b. As shown in Figure 16c, three patches 1631, 1632 & 1633 on the block 1622 forming the index finger are visible.
Figure 16d shows the subdivision domains. As shown in Figure 16d, there are seven subdivision domains: one subdividing the palm top to bottom 1641, one subdividing the hand front to back 1642, and five each subdividing a respective one of the fingers and thumb 1643, 1644, 1645, 1646 and 1647.
Figure 16e shows how the hand model is connected to the body. The two cubes 1650 at the end of the arm 1652 are replaced by the hand.
It is noted that the left hand can simply be generated as the right/left symmetry of the right hand.
Figure 17 shows a system 1700 for generating a three dimensional representation of a subject from a depth image. The system comprises a depth image capture device 1710 which captures depth data from the subject, storage 1720 which stores a CubeShape model 1722, a processing module 1730 which fits the CubeShape model 1722 to the depth data captured by the depth image capture device 1710, and a display 1740 which displays a result determined by the processing module 1730.
The CubeShape model 1722 is the output of the method described above with reference to Figures 1 and 2.
Figure 18 shows a method of generating a three dimensional representation of a subject according to an embodiment.
In step S1802, a depth image of the subject is acquired. In step S1804, a CubeShape model is fitted to the depth data. In step S1806, a CubeShape mesh instance which describes the data is output.
The resolution of the CubeShape model is chosen to match the sensor noise: the noisier the sensor, the coarser the subdivision. It is noted that the CubeShape subdivision capability helps adapt the statistical model to a given sensor. For example, given a more accurate sensor, the statistical model can be re-trained using a higher mesh resolution.
The depth image may be captured by the depth image capture device 1710. Alternatively, the method may be carried out on a depth image transferred to the system 1700 either over a network or on a storage medium.
The method may be started using a manual trigger such as button or a remote control. Alternatively, the method may be triggered by a voice or gesture command.
The method may be automatically triggered in different ways. If a person is detected standing in a particular area the method may be automatically triggered. This detection may result from the depth image capture device itself, or from a separate sensor located on the floor. Alternatively, the method may be triggered if a person is detected as being within the statistical space of pose and shape, regardless of their location in the real world. For example, if the system assesses that it can provide a good body shape estimate, the method will be triggered.
The output of the system may be a 3D data file including a set of locations in three dimensions and a mesh definition.
In an embodiment the display is configured to display a visualisation of the subject wearing an item of clothing, without the need for the subject to try on the actual product.
In an embodiment, the system is configured to calculate measurements of the subject. The measurements may be discrete sizes, for example, small, medium and large or measurements such as waist or inside leg of the subject.
In the above description, the use of CubeShape as a model for a human is described. Embodiments may also be used to model other subjects.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel methods, and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.

Claims (20)

CLAIMS:
1. A method of generating a three dimensional model of a subject, the method comprising receiving point cloud data for a subject; receiving user inputs indicating a plurality of cubes, and a grouping of the cubes into a plurality of blocks to form a representation of a class of subjects; generating a first mesh comprising a plurality of quadrilaterals by subdividing patches corresponding to faces of blocks of the plurality of blocks; fitting the first mesh to the point cloud data to generate a fitted mesh; iteratively generating further meshes, each comprising a plurality of quadrilaterals by subdividing patches of the fitted mesh from the previous iteration and fitting the further mesh to the point cloud data; and outputting as the three dimensional model of the subject the iteratively generated fitted mesh.
2. A method according to claim 1, wherein subdividing each mesh comprises subdividing patches corresponding to faces of the blocks at regular intervals along each edge of the face.
3. A method according to claim 1, wherein subdividing each mesh comprises subdividing a first patch corresponding to a face of a first block along a first edge with a first set of subdivisions and subdividing a second patch corresponding to a face of a second block adjacent to the first edge with a second set of subdivisions such that each subdivision of the first set connects to a corresponding subdivision of the second set at the first edge.
  4. A method according to claim 1, wherein the class of subjects is human subjects.
5. A method according to claim 1, wherein the user inputs further comprise indications of locations of joints in a skeleton of the subject, the method further comprising generating a representative skeleton from the indications.
6. A method according to claim 5, wherein the indications of locations of joints comprise indications of a plurality of rings on the blocks indicating the locations of the joints.
7. A method according to claim 1, wherein the grouping of the cubes into a plurality of blocks comprises an indication of symmetry as correspondences between cubes.
8. A method according to claim 1, further comprising capturing the point cloud data for the subject.
9. A method of generating a statistical model for the three dimensional shape of a class of subjects from three dimensional point cloud data for a plurality of test subjects within the class of subjects, the method comprising, for each test subject of the plurality of test subjects, iteratively generating models of increasing resolution by: fitting a first mesh to the point cloud data, the first mesh comprising a plurality of quadrilaterals formed by subdividing patches corresponding to faces of blocks of a plurality of blocks, each block of the plurality of blocks formed from at least one cube, to obtain a fitted first mesh; generating a second mesh by subdividing the fitted first mesh; and repeating the fitting and generating steps using the second mesh in place of the first mesh, and outputting the result of the iteration as a statistical model for the class of subjects.
10. A method according to claim 9, wherein subdividing the first fitted mesh comprises subdividing patches corresponding to faces of the blocks at regular intervals along each edge of the face.
11. A method according to claim 9, wherein subdividing the first fitted mesh comprises subdividing a first patch corresponding to a face of a first block along a first edge with a first set of subdivisions and subdividing a second patch corresponding to a face of a second block adjacent to the first edge with a second set of subdivisions such that each subdivision of the first set connects to a corresponding subdivision of the second set at the first edge.
12. A method according to claim 9, wherein the class of subjects is human subjects.
13. A method according to claim 12, further comprising generating a representative skeleton for each of the test subjects.
14. A method according to claim 9, further comprising enforcing at least one symmetry rule defined by correspondences between blocks of the plurality of blocks.
15. A method according to claim 9, further comprising capturing the three dimensional point cloud data for each of the test subjects.
16. A method of generating a three dimensional representation of a subject from a depth image, the method comprising fitting a first mesh to the depth image, the first mesh comprising a plurality of quadrilaterals formed by subdividing patches corresponding to faces of blocks of a plurality of blocks, each block of the plurality of blocks formed from at least one cube.
17. A method according to claim 16, further comprising capturing the depth image of the subject.
18. A system for generating a three dimensional model of a subject, the system comprising a user interface configured to receive user inputs indicating a plurality of cubes, and a grouping of the cubes into a plurality of blocks to form a representation of a class of subjects; a processor configured to generate a first mesh comprising a plurality of quadrilaterals by subdividing patches corresponding to faces of blocks of the plurality of blocks; fit the first mesh to point cloud data for a subject from the class of subjects to generate a fitted mesh; and iteratively generate further meshes, each comprising a plurality of quadrilaterals by subdividing patches of the fitted mesh from the previous iteration and fitting the further mesh to the point cloud data, the system being configured to output as the three dimensional model of the subject the iteratively generated fitted mesh.
19. A computer readable carrier medium carrying processor executable instructions which when executed on a processor cause the processor to carry out a method according to claim 1.
20. A computer readable carrier medium carrying processor executable instructions which when executed on a processor cause the processor to carry out a method according to claim 9.

Amendments to the claims have been filed as follows:

CLAIMS:
1. A method of generating a three dimensional model of a subject, the method comprising receiving point cloud data for a subject; receiving user inputs indicating a plurality of cubes, and a grouping of the cubes into a plurality of blocks to form a representation of a class of subjects; generating a first mesh comprising a plurality of quadrilaterals by subdividing patches corresponding to faces of blocks of the plurality of blocks; fitting the first mesh to the point cloud data to generate a fitted mesh; iteratively generating further meshes, each comprising a plurality of quadrilaterals by subdividing patches of the fitted mesh from the previous iteration and fitting the further mesh to the point cloud data; and outputting as the three dimensional model of the subject the iteratively generated fitted mesh, wherein the model represents the three dimensional shape of the subject.
2. A method according to claim 1, wherein subdividing each mesh comprises subdividing patches corresponding to faces of the blocks at regular intervals along each edge of the face.
3. A method according to claim 1, wherein subdividing each mesh comprises subdividing a first patch corresponding to a face of a first block along a first edge with a first set of subdivisions and subdividing a second patch corresponding to a face of a second block adjacent to the first edge with a second set of subdivisions such that each subdivision of the first set connects to a corresponding subdivision of the second set at the first edge.
4. A method according to claim 1, wherein the class of subjects is human subjects.
5. A method according to claim 1, wherein the user inputs further comprise indications of locations of joints in a skeleton of the subject, the method further comprising generating a representative skeleton from the indications.
6. A method according to claim 5, wherein the indications of locations of joints comprise indications of a plurality of rings on the blocks indicating the locations of the joints.
7. A method according to claim 1, wherein the grouping of the cubes into a plurality of blocks comprises an indication of symmetry as correspondences between cubes.
8. A method according to claim 1, further comprising capturing the point cloud data for the subject.
9. A method of generating a statistical model for the three dimensional shape of a class of subjects from three dimensional point cloud data for a plurality of test subjects within the class of subjects, the method comprising, for each test subject of the plurality of test subjects, iteratively generating models of increasing resolution by: fitting a first mesh to the point cloud data, the first mesh comprising a plurality of quadrilaterals formed by subdividing patches corresponding to faces of blocks of a plurality of blocks, each block of the plurality of blocks formed from at least one cube, to obtain a fitted first mesh; generating a second mesh by subdividing the fitted first mesh; and repeating the fitting and generating steps using the second mesh in place of the first mesh, and outputting the result of the iteration as a statistical model for the class of subjects, wherein the statistical model is a representation of the three dimensional shape of the class of subjects.
10. A method according to claim 9, wherein subdividing the first fitted mesh comprises subdividing patches corresponding to faces of the blocks at regular intervals along each edge of the face.
11. A method according to claim 9, wherein subdividing the first fitted mesh comprises subdividing a first patch corresponding to a face of a first block along a first edge with a first set of subdivisions and subdividing a second patch corresponding to a face of a second block adjacent to the first edge with a second set of subdivisions such that each subdivision of the first set connects to a corresponding subdivision of the second set at the first edge.
12. A method according to claim 9, wherein the class of subjects is human subjects.
13. A method according to claim 12, further comprising generating a representative skeleton for each of the test subjects.
14. A method according to claim 9, further comprising enforcing at least one symmetry rule defined by correspondences between blocks of the plurality of blocks.
15. A method according to claim 9, further comprising capturing the three dimensional point cloud data for each of the test subjects.
16. A method of generating a three dimensional representation of a subject from a depth image, the method comprising fitting a first mesh to the depth image, the first mesh comprising a plurality of quadrilaterals formed by subdividing patches corresponding to faces of blocks of a plurality of blocks, each block of the plurality of blocks formed from at least one cube, wherein the first mesh is a representation of the three dimensional shape of the subject.
17. A method according to claim 16, further comprising capturing the depth image of the subject.
18. A system for generating a three dimensional model of a subject, the system comprising a user interface configured to receive user inputs indicating a plurality of cubes, and a grouping of the cubes into a plurality of blocks to form a representation of a class of subjects; a processor configured to generate a first mesh comprising a plurality of quadrilaterals by subdividing patches corresponding to faces of blocks of the plurality of blocks; fit the first mesh to point cloud data for a subject from the class of subjects to generate a fitted mesh; and iteratively generate further meshes, each comprising a plurality of quadrilaterals by subdividing patches of the fitted mesh from the previous iteration and fitting the further mesh to the point cloud data, the system being configured to output as the three dimensional model of the subject the iteratively generated fitted mesh, wherein the model is a representation of the three dimensional shape of the subject.
19. A computer readable carrier medium carrying processor executable instructions which when executed on a processor cause the processor to carry out a method according to claim 1.
20. A computer readable carrier medium carrying processor executable instructions which when executed on a processor cause the processor to carry out a method according to claim 9.
GB1418867.6A 2014-10-23 2014-10-23 Methods and systems for generating a three dimensional model of a subject Active GB2531585B8 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
GB1418867.6A GB2531585B8 (en) 2014-10-23 2014-10-23 Methods and systems for generating a three dimensional model of a subject
US14/921,908 US9905047B2 (en) 2014-10-23 2015-10-23 Method and systems for generating a three dimensional model of a subject by iteratively generating meshes
JP2015209075A JP6290153B2 (en) 2014-10-23 2015-10-23 Method and system for generating a three-dimensional model of an object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1418867.6A GB2531585B8 (en) 2014-10-23 2014-10-23 Methods and systems for generating a three dimensional model of a subject

Publications (5)

Publication Number Publication Date
GB201418867D0 GB201418867D0 (en) 2014-12-03
GB2531585A true GB2531585A (en) 2016-04-27
GB2531585B GB2531585B (en) 2016-09-14
GB2531585B8 GB2531585B8 (en) 2017-03-15
GB2531585A8 GB2531585A8 (en) 2017-03-15

Family

ID=52013492

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1418867.6A Active GB2531585B8 (en) 2014-10-23 2014-10-23 Methods and systems for generating a three dimensional model of a subject

Country Status (3)

Country Link
US (1) US9905047B2 (en)
JP (1) JP6290153B2 (en)
GB (1) GB2531585B8 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10956625B2 (en) * 2015-09-01 2021-03-23 Siemens Industry Software Inc. Mesh generation system and method
US10120057B1 (en) * 2015-10-13 2018-11-06 Google Llc System and method for determining the direction of an actor
US9691153B1 (en) 2015-10-21 2017-06-27 Google Inc. System and method for using image data to determine a direction of an actor
US10025308B1 (en) 2016-02-19 2018-07-17 Google Llc System and method to obtain and use attribute data
US20190370537A1 (en) * 2018-05-29 2019-12-05 Umbo Cv Inc. Keypoint detection to highlight subjects of interest
CN109377564B (en) * 2018-09-30 2021-01-22 清华大学 Monocular depth camera-based virtual fitting method and device
US20200202622A1 (en) * 2018-12-19 2020-06-25 Nvidia Corporation Mesh reconstruction using data-driven priors
CN110210431B (en) * 2019-06-06 2021-05-11 上海黑塞智能科技有限公司 Point cloud semantic labeling and optimization-based point cloud classification method
CN111368371B (en) * 2020-03-06 2023-06-20 江南造船(集团)有限责任公司 Ship insulation material statistics method and system, readable storage medium and terminal
CN114064286B (en) * 2021-11-19 2022-08-05 北京太琦图形科技有限公司 Method, apparatus, device and medium for processing unstructured grid data

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH01311373A (en) * 1988-06-09 1989-12-15 Hitachi Ltd Method and device for meshing
US6453275B1 (en) * 1998-06-19 2002-09-17 Interuniversitair Micro-Elektronica Centrum (Imec Vzw) Method for locally refining a mesh
US6847359B1 (en) * 1999-10-14 2005-01-25 Fuji Photo Film Co., Ltd Image processing apparatus and image capturing apparatus, which approximate object shapes using a plurality of polygon patches
US6879324B1 (en) * 1998-07-14 2005-04-12 Microsoft Corporation Regional progressive meshes
US20080043023A1 (en) * 2006-08-15 2008-02-21 Microsoft Corporation Approximating subdivision surfaces with bezier patches

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3005282A (en) 1958-01-28 1961-10-24 Interlego Ag Toy building brick
JPH08185545A (en) 1994-12-28 1996-07-16 Nippon Steel Corp Method and device for generating image
JP4550221B2 (en) 2000-04-24 2010-09-22 パナソニック株式会社 Three-dimensional space reconstruction device and three-dimensional space reconstruction method
WO2003088085A1 (en) * 2002-04-04 2003-10-23 Arizona Board Of Regents Three-dimensional digital library system
US8042056B2 (en) * 2004-03-16 2011-10-18 Leica Geosystems Ag Browsers for large geometric data visualization
JP6069923B2 (en) 2012-07-20 2017-02-01 セイコーエプソン株式会社 Robot system, robot, robot controller
JP2014186588A (en) * 2013-03-25 2014-10-02 Seiko Epson Corp Simulation apparatus, program, and image generating method
JP6008766B2 (en) 2013-03-25 2016-10-19 住友重機械工業株式会社 Support device and computer program


Also Published As

Publication number Publication date
JP2016103265A (en) 2016-06-02
US9905047B2 (en) 2018-02-27
JP6290153B2 (en) 2018-03-07
GB2531585B (en) 2016-09-14
US20160117859A1 (en) 2016-04-28
GB201418867D0 (en) 2014-12-03
GB2531585B8 (en) 2017-03-15
GB2531585A8 (en) 2017-03-15


Legal Events

Date Code Title Description
S117 Correction of errors in patents and applications (sect. 117/patents act 1977)

Free format text: REQUEST FILED; REQUEST FOR CORRECTION UNDER SECTION 117 FILED ON 23 DECEMBER 2016

Free format text: CORRECTIONS ALLOWED; REQUEST FOR CORRECTION UNDER SECTION 117 FILED ON 23 DECEMBER 2016 ALLOWED ON 6 MARCH 2017