CN114926605B - Shell extraction method of three-dimensional model - Google Patents
Shell extraction method of three-dimensional model
- Publication number
- CN114926605B (application CN202210846386.2A)
- Authority
- CN
- China
- Prior art keywords
- texture
- model
- map
- depth
- pixels
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS · G06—COMPUTING; CALCULATING OR COUNTING · G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL · G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects · G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
- G—PHYSICS · G06—COMPUTING; CALCULATING OR COUNTING · G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL · G06T19/00—Manipulating 3D models or images for computer graphics · G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
Abstract
The invention discloses a shell extraction method for a three-dimensional model, belonging to the technical field of multi-dimensional model creation. The method comprises the following steps: obtaining a depth map of the model; computing the pixels in the depth map to generate a plurality of space points; constructing a triangle mesh from the space points to obtain vertex data, vertex index data and a normal array; obtaining a model shooting position from the normal direction of each triangle of the mesh, and shooting to obtain a second texture map; obtaining the texel value corresponding to each triangle from the second texture map; filling the texel values of all triangles into a blank texture to obtain a texture map and the texture coordinates of each triangle vertex; and outputting the vertex data, vertex index data, normal array, texture coordinates and texture map in a data format usable for three-dimensional display. Compared with manual shell extraction, the invention automates shell extraction of the model, saving time and labor, improving efficiency, reducing cost and lowering the skill requirements placed on technicians.
Description
Technical Field
The invention relates to the technical field of multi-dimensional model creation, and in particular to a shell extraction method for a three-dimensional model.
Background
Multidimensional models are generally created with modeling software, and as the software and technology are continuously upgraded, the models created with it become increasingly refined and complex, with ever richer internal structure. However, the internal structure of a fine, complex model does not need to be displayed in every scene; in many cases only the shell of the model needs to be shown. Model shell extraction and shell generation have therefore become key technologies in the field of model creation.
At present, model shells are usually produced manually, as in the model shell extraction and generation methods proposed in patent applications CN112258654A and CN112149252A. The manual approach is not only inefficient and costly, but its results also depend heavily on how proficient the technician is with the modeling software, placing high skill requirements on technicians.
Disclosure of Invention
To remedy the defects of the prior art, the invention provides the following technical solution.
The invention provides a shell extraction method for a three-dimensional model, comprising the following steps:
acquiring a depth map of the model;
calculating pixels in the depth map to generate a plurality of space points;
constructing a triangle mesh from the space points to obtain vertex data, vertex index data and a normal array;
obtaining a model shooting position from the normal direction of each triangle of the mesh, and shooting to obtain a second texture map;
obtaining the texel value corresponding to each triangle from the second texture map;
filling the texel values of all triangles into a blank texture to obtain a texture map and the texture coordinates of each triangle vertex;
and outputting the vertex data, the vertex index data, the normal array, the texture coordinates and the texture map into a data format which can be used for three-dimensional display.
Preferably, the obtaining a depth map of the model comprises:
placing a shooting tool at a plurality of positions on a spherical surface centered on the model's center point with a fixed-length radius;
and shooting the model with the shooting tool at each of the positions to obtain depth maps of the model.
Preferably, the position of the shooting tool is calculated according to the following formula:
position of the shooting tool = viewpoint position + viewDirection × R
where the viewpoint position is the position of the model's center point, R is the fixed-length sphere radius, and viewDirection is the direction vector of the shooting tool on the spherical surface.
Preferably, the calculating the pixels in the depth map and generating the plurality of spatial points includes:
acquiring a first texture map corresponding to the depth map;
marking invalid depth pixels in the depth map according to the first texture map, and calculating the valid depth pixels in the depth map to generate a plurality of spatial points.
Preferably, said marking invalid depth pixels in said depth map according to said first texture map comprises:
marking transparent pixels of the first texture map;
comparing the depth map with the first texture map, marking pixels of the depth map corresponding to transparent pixels of the first texture map as invalid depth pixels.
Preferably, the calculating effective depth pixels in the depth map comprises:
transforming the effective depth pixels to the [0, 1] range through the viewport, and then scaling them to the [-1, 1] range;
and multiplying the scaled effective depth pixels by the inverse of the model's projection matrix and then by the inverse of its view matrix.
Preferably, the constructing the triangle mesh from the space points comprises:
reducing the number of the space points, and constructing the triangle mesh from the space points obtained after the reduction.
Preferably, the reducing the number of the spatial points comprises:
obtaining a bounding box of the model;
dividing the bounding box into L equal-sized cubes, where L is the width of the depth map;
and for each cube containing a plurality of points, calculating their mean position and replacing those points with it.
Preferably, the model shooting position obtained from the normal direction of each triangle of the mesh is calculated according to the following formula:
shooting position = model center point position + viewDirection × R
where the model center point position is the viewpoint position, R is the fixed length, and viewDirection is the direction vector of the shooting tool along the normal of the triangle.
Preferably, the obtaining the texel value corresponding to each triangle from the second texture map comprises:
transforming each vertex of the triangle by the model-view matrix and then by the projection matrix;
scaling the transformed vertices into the [0, 1] mapping interval;
calculating the coordinates of each vertex on the second texture map from the position and size of the viewport;
and forming the texel values corresponding to the triangle from the coordinates of its three vertices on the second texture map.
The invention has the following beneficial effects. The shell extraction method for a three-dimensional model provided by the invention first computes the pixels in the model's depth map to generate a plurality of space points, then constructs a triangle mesh from the space points to obtain vertex data, vertex index data and a normal array; obtains a model shooting position from the normal direction of each triangle of the mesh and shoots to obtain a second texture map; obtains the texel value corresponding to each triangle from the second texture map; fills the texel values of all triangles into a blank texture to obtain a texture map and the texture coordinates of each triangle vertex; and finally outputs the vertex data, vertex index data, normal array, texture coordinates and texture map in a data format usable for three-dimensional display. The method automates shell extraction of the model; compared with manual shell extraction, it saves time and labor, improves efficiency, reduces cost and lowers the skill requirements placed on technicians.
Drawings
FIG. 1 is a schematic flow chart of a shell extraction method for a three-dimensional model according to the present invention;
FIG. 2 is a schematic structural diagram of a shell extracting device of the three-dimensional model of the present invention.
Detailed Description
For a better understanding, the technical solution is described in detail below with reference to the drawings and to specific embodiments.
Example one
The method provided by the invention can be implemented in the following terminal environment. The terminal can comprise one or more of the following components: a processor, a memory and a display screen. The memory stores at least one instruction, which is loaded and executed by the processor to implement the methods described in the embodiments below.
The processor may include one or more processing cores. The processor connects the various parts of the terminal through various interfaces and lines, and performs the terminal's functions and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory and by calling data stored in the memory.
The memory may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory may be used to store instructions, programs, code sets or instruction sets.
The display screen is used for displaying user interfaces of all the application programs.
In addition, those skilled in the art will appreciate that the above terminal configuration is not limiting: the terminal may include more or fewer components, some components may be combined, or the components may be arranged differently. For example, the terminal may further include a radio frequency circuit, an input unit, a sensor, an audio circuit, a power supply and other components, which are not described again here.
As shown in fig. 1, the present invention provides a shell extraction method for a three-dimensional model, comprising:
S101, obtaining a depth map of the model;
S102, calculating the pixels in the depth map to generate a plurality of space points;
S103, constructing a triangle mesh from the space points to obtain vertex data, vertex index data and a normal array;
S104, obtaining a model shooting position from the normal direction of each triangle of the mesh, and shooting to obtain a second texture map;
S105, obtaining the texel value corresponding to each triangle from the second texture map;
S106, filling the texel values of all triangles into a blank texture to obtain a texture map and the texture coordinates of each triangle vertex;
and S107, outputting the vertex data, the vertex index data, the normal array, the texture coordinates and the texture map in a data format that can be used for three-dimensional display.
Since the shell of a model comprises both shape and texture, the invention achieves automatic shell extraction by separately acquiring shape data and texture data that can be used for three-dimensional display. The pixels in the model's depth map are computed to generate a plurality of space points; a triangle mesh is constructed from the space points to obtain vertex data, vertex index data and a normal array, i.e. the shape data of the model shell usable for three-dimensional display. A model shooting position is obtained from the normal direction of each triangle of the mesh, and a second texture map is obtained by shooting; the texel value corresponding to each triangle is obtained from the second texture map; the texel values of all triangles are filled into a blank texture to obtain a texture map and the texture coordinates of each triangle vertex, i.e. the texture data of the model shell usable for three-dimensional display.
Compared with manual shell extraction, the method provided by the invention automates the process, saving time and labor, improving efficiency, reducing cost and lowering the skill requirements placed on technicians.
Step S101 is executed; specifically, it may be implemented as follows:
S1011, placing the shooting tool at a plurality of positions on a spherical surface centered on the model's center point with a fixed-length radius;
S1012, shooting the model with the shooting tool at each of the positions to obtain depth maps of the model.
The shooting tool may be a camera. In practice, cameras can be placed at a plurality of points on the spherical surface, the camera at each point photographs the model, and as many depth maps of the model are obtained as there are camera positions.
The position of the shooting tool can be calculated according to the following formula:
position of the shooting tool = viewpoint position + viewDirection × R
where the viewpoint position is the position of the model's center point, R is the fixed-length sphere radius, and viewDirection is the direction vector of the shooting tool on the spherical surface.
viewDirection may be calculated as follows.
The three components of viewDirection are x, y and z. Following the Euler-angle convention, pitch = 0 can be defined as the horizontal (z = 0) direction, with yaw measured counterclockwise from the x-axis; then
x = cos(yaw)* cos(pitch)
y = sin(yaw)* cos(pitch)
z = sin(pitch)
Where yaw is the yaw angle and pitch is the pitch angle.
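As a minimal sketch of this placement step, the camera positions can be generated as follows; the numbers of yaw and pitch samples are illustrative assumptions, not values taken from the text.

```python
import numpy as np

def camera_positions_on_sphere(center, radius, yaw_steps=8, pitch_steps=5):
    """Place virtual cameras on a sphere around the model center.

    Implements: position = viewpoint position + viewDirection * R, with
    viewDirection built from yaw/pitch as described above. The sampling
    counts are illustrative choices, not taken from the patent.
    """
    center = np.asarray(center, dtype=float)
    positions = []
    # pitch spans (-90 deg, 90 deg), avoiding the poles; yaw spans [0, 360).
    for pitch in np.linspace(-0.45 * np.pi, 0.45 * np.pi, pitch_steps):
        for yaw in np.linspace(0.0, 2.0 * np.pi, yaw_steps, endpoint=False):
            view_direction = np.array([
                np.cos(yaw) * np.cos(pitch),  # x
                np.sin(yaw) * np.cos(pitch),  # y
                np.sin(pitch),                # z
            ])
            positions.append(center + view_direction * radius)
    return np.array(positions)

# Example: cameras around a model centered at the origin, 10 units away.
cams = camera_positions_on_sphere(center=(0.0, 0.0, 0.0), radius=10.0)
print(cams.shape)  # (40, 3)
```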
Step S102 is performed, including:
s1021, a first texture map corresponding to the depth map is obtained.
When the depth map is acquired in step S101, a first texture map corresponding to that depth map can be acquired at the same time. In one embodiment, cameras are placed at a plurality of positions on the spherical surface; each camera photographs the model to obtain a depth map and, at the same time, a first texture map. The depth map and the first texture map taken by the camera at the same position correspond to each other.
S1022, marking invalid depth pixels in the depth map according to the first texture map, and calculating the valid depth pixels in the depth map to generate a plurality of spatial points.
Wherein marking invalid depth pixels in the depth map according to the first texture map comprises:
marking transparent pixels of the first texture map;
comparing the depth map with the first texture map, marking pixels of the depth map corresponding to transparent pixels of the first texture map as invalid depth pixels.
Because the model to be shelled does not necessarily fill the screen, some invalid pixels exist in its depth map; removing them before the space points are computed makes the subsequent space-point calculation simpler and faster. In the invention, the invalid pixels of the depth map are marked before they are removed, as follows:
First, the transparent pixels of the first texture map are marked; specifically, pixels of the first texture map onto which the model being shelled did not project during photographing are marked as transparent.
Then, the pixels of the depth map and the first texture map are compared one by one; if the alpha channel of a pixel in the first texture map is marked as transparent, the corresponding pixel of the depth map is marked as invalid.
Invalid depth pixels marked in the depth map may be deleted and only valid depth pixels may be calculated.
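A minimal sketch of the marking step, assuming the first texture map is available as an RGBA array whose alpha channel is zero where the model did not project (the array layout and the NaN convention are illustrative assumptions):

```python
import numpy as np

def mask_invalid_depth(depth, first_texture_rgba):
    """Mark depth pixels invalid where the first texture map is transparent.

    depth: (H, W) float array from the depth capture.
    first_texture_rgba: (H, W, 4) array; alpha == 0 means the model did not
    cover that pixel. Returns the depth map with invalid pixels set to NaN
    and a boolean mask of the valid pixels.
    """
    alpha = first_texture_rgba[..., 3]
    valid = alpha > 0                      # transparent pixels are invalid
    depth_marked = np.where(valid, depth, np.nan)
    return depth_marked, valid
```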
In a preferred embodiment of the present invention, the calculating the effective depth pixels in the depth map includes:
transforming the effective depth pixels to the [0, 1] range through the viewport, and then scaling them to the [-1, 1] range;
and multiplying the scaled effective depth pixels by the inverse of the model's projection matrix and then by the inverse of its view matrix.
The specific calculation can be performed with existing means.
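As a sketch under OpenGL-style conventions (depth in [0, 1], NDC in [-1, 1], column-vector matrices) — the exact viewport and matrix conventions depend on the rendering engine and are assumptions here — the unprojection of the valid depth pixels could look like this:

```python
import numpy as np

def unproject_depth_pixels(depth, valid, inv_proj, inv_view):
    """Lift valid depth pixels to world-space points (sketch of step S1022).

    depth: (H, W) depth buffer values in [0, 1].
    valid: (H, W) boolean mask of valid depth pixels.
    inv_proj, inv_view: 4x4 inverses of the projection and view matrices used
    when the depth map was captured (column-vector convention assumed).
    Returns an (N, 3) array of space points.
    """
    h, w = depth.shape
    ys, xs = np.nonzero(valid)
    # Viewport transform to [0, 1], then scaling to NDC in [-1, 1].
    ndc_x = (xs + 0.5) / w * 2.0 - 1.0
    ndc_y = 1.0 - (ys + 0.5) / h * 2.0      # image rows grow downward
    ndc_z = depth[ys, xs] * 2.0 - 1.0
    clip = np.stack([ndc_x, ndc_y, ndc_z, np.ones_like(ndc_x)], axis=0)  # 4 x N
    eye = inv_proj @ clip
    eye /= eye[3]                            # perspective divide
    world = inv_view @ eye
    return world[:3].T
```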
Step S103 is executed: a triangle mesh is constructed from the space points to obtain vertex data, vertex index data and a normal array.
The calculated space points can be triangulated with a Delaunay triangulation algorithm.
The Delaunay triangulation algorithm can be implemented by using the prior art, and is not described herein.
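A sketch using scipy's Delaunay triangulation; triangulating the points after projecting them onto the x-y plane is a simplifying assumption suited to a shell patch seen from one viewpoint, not a detail given in the text:

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate_points(points_3d):
    """Build a triangle mesh from the space points (sketch of step S103).

    The 2D Delaunay triangulation of the points projected onto the x-y plane
    supplies the connectivity, which is reused for the 3D vertices. Returns
    the vertex array, the triangle index array and per-triangle unit normals.
    """
    pts = np.asarray(points_3d, dtype=float)
    faces = Delaunay(pts[:, :2]).simplices          # (M, 3) vertex indices
    # Per-triangle normals from the cross product of two edge vectors.
    v0, v1, v2 = pts[faces[:, 0]], pts[faces[:, 1]], pts[faces[:, 2]]
    normals = np.cross(v1 - v0, v2 - v0)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-12
    return pts, faces, normals
```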
In a preferred embodiment of the present invention, the number of space points is reduced first, and the triangle mesh is constructed from the space points obtained after the reduction.
With this approach, reducing the number of space points makes the construction of the triangle mesh simpler and faster.
In another preferred embodiment of the present invention, reducing the number of space points comprises:
obtaining a bounding box of the model;
dividing the bounding box into L equal-sized cubes, where L is the width of the depth map;
and for each cube containing a plurality of points, calculating their mean position and replacing those points with it.
In this method, the points inside a cube that contains several points are replaced by their mean position, so that several points collapse into one and the number of space points is reduced. The method is simple and easy to implement.
The mean position of the points inside a cube can be calculated with existing techniques and is not described in detail here.
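A minimal sketch of this voxel-grid reduction; reading L as the number of voxels along each axis of the bounding box is an assumption (the text only ties L to the depth-map width):

```python
import numpy as np

def reduce_points_by_voxel_grid(points, bbox_min, bbox_max, l):
    """Thin the space points with a voxel grid over the bounding box.

    Every voxel that contains several points is replaced by their mean
    position, so the number of space points shrinks before triangulation.
    """
    points = np.asarray(points, dtype=float)
    bbox_min = np.asarray(bbox_min, dtype=float)
    bbox_max = np.asarray(bbox_max, dtype=float)
    cell = (bbox_max - bbox_min) / l
    # Integer voxel coordinates of every point, clamped to the grid.
    idx = np.clip(np.floor((points - bbox_min) / cell).astype(int), 0, l - 1)
    keys = (idx[:, 0] * l + idx[:, 1]) * l + idx[:, 2]
    reduced = [points[keys == k].mean(axis=0) for k in np.unique(keys)]
    return np.array(reduced)
```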
Step S104 is executed: a model shooting position is obtained from the normal direction of each triangle of the mesh, and a second texture map is obtained by shooting.
The model shooting position can be calculated according to the following formula:
shooting position = model center point position + viewDirection × R
where the model center point position is the viewpoint position, R is the fixed length, and viewDirection is the direction vector of the shooting tool along the normal of the triangle.
The model is photographed at the calculated shooting position to obtain the second texture map.
Step S105 is executed. Obtaining the texel value corresponding to each triangle from the second texture map comprises:
transforming each vertex of the triangle by the model-view matrix and then by the projection matrix;
scaling the transformed vertices into the [0, 1] mapping interval;
calculating the coordinates of each vertex on the second texture map from the position and size of the viewport;
and forming the texel values corresponding to the triangle from the coordinates of its three vertices on the second texture map.
The pixel value of every pixel inside each triangle can be obtained in this way.
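A sketch of the per-triangle projection onto the second texture map; the column-vector matrix convention and the OpenGL-style [-1, 1] to [0, 1] mapping are assumptions about the rendering setup:

```python
import numpy as np

def triangle_texture_coords(vertices, model_view, projection, viewport):
    """Project one triangle's vertices onto the second texture map (step S105).

    vertices: (3, 3) triangle vertices in model space.
    model_view, projection: 4x4 matrices of the camera that shot the second
    texture map. viewport: (x, y, width, height) of that render target.
    Returns (3, 2) pixel coordinates on the second texture map; the texels
    they enclose give the triangle's texel values.
    """
    vx, vy, vw, vh = viewport
    homo = np.hstack([np.asarray(vertices, float), np.ones((3, 1))])  # (3, 4)
    clip = (projection @ model_view @ homo.T).T                       # (3, 4)
    ndc = clip[:, :3] / clip[:, 3:4]         # perspective divide
    mapped = ndc * 0.5 + 0.5                 # scale [-1, 1] into [0, 1]
    px = vx + mapped[:, 0] * vw
    py = vy + mapped[:, 1] * vh
    return np.stack([px, py], axis=1)
```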
Step S106 is executed: the texel values of all triangles are filled into a blank texture to obtain the texture map and the texture coordinates of each triangle vertex.
Step S107 is executed: the vertex data, vertex index data, normal array, texture coordinates and texture map are output in a data format that can be used for three-dimensional display.
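The text does not name a specific output format; as one example of a format that three-dimensional viewers can display, the collected data could be written as a Wavefront OBJ file (a sketch; the companion .mtl file that references the texture map is not written here):

```python
def write_obj(path, vertices, faces, uvs, material_name="shell"):
    """Write the extracted shell as a Wavefront OBJ file.

    vertices: (N, 3) vertex data; faces: (M, 3) zero-based vertex index data;
    uvs: (N, 2) per-vertex texture coordinates. Normals are omitted here for
    brevity; they could be written with 'vn' lines in the same way.
    """
    with open(path, "w") as f:
        f.write(f"usemtl {material_name}\n")
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for u, v in uvs:
            f.write(f"vt {u} {v}\n")
        for a, b, c in faces:
            # OBJ indices are 1-based; vertex and UV share the same index.
            f.write(f"f {a + 1}/{a + 1} {b + 1}/{b + 1} {c + 1}/{c + 1}\n")
```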
With this method, automatic shell extraction of the model is achieved, working efficiency is improved, cost is reduced, and the skill requirements placed on technicians are lowered.
Example two
As shown in fig. 2, the present invention provides a shell extracting apparatus for a three-dimensional model, comprising:
a depth map obtaining module 201, configured to obtain a depth map of the model;
a spatial point generating module 202, configured to calculate pixels in the depth map and generate a plurality of spatial points;
a triangle mesh construction module 203, configured to construct a triangle mesh from the space points to obtain vertex data, vertex index data and a normal array;
a second texture map obtaining module 204, configured to obtain a model shooting position from the normal direction of each triangle of the mesh and to obtain a second texture map by shooting;
a texel value obtaining module 205, configured to obtain the texel value corresponding to each triangle from the second texture map;
a texture map generating module 206, configured to fill the texel values of all triangles into a blank texture to obtain a texture map and the texture coordinates of each triangle vertex;
and the data format output module 207 is configured to output the vertex data, the vertex index data, the normal array, the texture coordinates, and the texture map into a data format that can be used for three-dimensional display.
The depth map acquisition module is specifically configured to place a shooting tool at a plurality of positions on a spherical surface centered on the model's center point with a fixed-length radius;
and to shoot the model with the shooting tool at each of the positions to obtain depth maps of the model.
The position of the shooting tool is calculated according to the following formula:
position of the shooting tool = viewpoint position + viewDirection × R
where the viewpoint position is the position of the model's center point, R is the fixed-length sphere radius, and viewDirection is the direction vector of the shooting tool on the spherical surface.
The spatial point generation module is specifically configured to obtain a first texture map corresponding to the depth map;
and marking invalid depth pixels in the depth map according to the first texture map, and calculating the valid depth pixels in the depth map to generate a plurality of space points.
Wherein said marking invalid depth pixels in the depth map according to the first texture map comprises:
marking transparent pixels of the first texture map;
and comparing the depth map with the first texture map, and marking the pixels of the depth map corresponding to the transparent pixels of the first texture map as invalid depth pixels.
Further, the calculating effective depth pixels in the depth map comprises:
transforming the effective depth pixels to the [0, 1] range through the viewport, and then scaling them to the [-1, 1] range;
and multiplying the scaled effective depth pixels by the inverse of the model's projection matrix and then by the inverse of its view matrix.
The triangle mesh construction module is specifically configured to reduce the number of space points and to construct the triangle mesh from the space points obtained after the reduction.
Wherein reducing the number of space points comprises:
obtaining a bounding box of the model;
dividing the bounding box into L equal-sized cubes, where L is the width of the depth map;
and for each cube containing a plurality of points, calculating their mean position and replacing those points with it.
Further, the model shooting position obtained from the normal direction of each triangle of the mesh is calculated according to the following formula:
shooting position = model center point position + viewDirection × R
where the model center point position is the viewpoint position, R is a fixed length, and viewDirection is the direction vector of the shooting tool along the normal of the triangle.
Further, obtaining the texel value corresponding to each triangle from the second texture map comprises:
transforming each vertex of the triangle by the model-view matrix and then by the projection matrix;
scaling the transformed vertices into the [0, 1] mapping interval;
calculating the coordinates of each vertex on the second texture map from the position and size of the viewport;
and forming the texel values corresponding to the triangle from the coordinates of its three vertices on the second texture map.
The apparatus provided in the embodiment of the present invention can be implemented by the method provided in the first embodiment, and specific implementation methods can be referred to the description in the first embodiment, and are not described herein again.
The invention also provides a memory storing a plurality of instructions for implementing the method according to the first embodiment.
The invention also provides an electronic device comprising a processor and a memory connected to the processor, wherein the memory stores a plurality of instructions, and the instructions can be loaded and executed by the processor to enable the processor to execute the method according to the first embodiment.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention. It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (6)
1. A shell extraction method for a three-dimensional model, characterized by comprising the following steps:
obtaining a depth map of the model;
calculating pixels in the depth map to generate a plurality of spatial points;
constructing a triangle mesh from the space points to obtain vertex data, vertex index data and a normal array;
obtaining a model shooting position from the normal direction of each triangle of the mesh, and shooting to obtain a second texture map;
obtaining the texel value corresponding to each triangle from the second texture map;
filling the texel values of all triangles into a blank texture to obtain a texture map and the texture coordinates of each triangle vertex;
outputting the vertex data, the vertex index data, the normal array, the texture coordinates and the texture map in a data format usable for three-dimensional display;
the calculating the pixels in the depth map and generating a plurality of spatial points includes:
acquiring a first texture map corresponding to the depth map;
marking invalid depth pixels in the depth map according to the first texture map, and calculating the valid depth pixels in the depth map to generate a plurality of space points;
said marking invalid depth pixels in said depth map according to said first texture map, comprising:
marking transparent pixels of the first texture map;
comparing the depth map with the first texture map, and marking pixels of the depth map corresponding to transparent pixels of the first texture map as invalid depth pixels;
the calculating effective depth pixels in the depth map comprises:
transforming the effective depth pixels to the [0, 1] range through the viewport, and then scaling them to the [-1, 1] range;
multiplying the scaled effective depth pixels by the inverse of the model's projection matrix and then by the inverse of its view matrix;
the obtaining a texel value corresponding to each triangulation network according to the second texture map comprises:
sequentially transforming each vertex of the triangular net into a model view matrix and a projection matrix;
scaling the vertexes transformed into the projection matrix into the mapping interval of [0, 1 ];
calculating the coordinates of the vertex on the second texture map according to the position and the size of the viewport;
and forming the texture pixel value corresponding to the triangular net by using the coordinates of the three vertexes on the second texture map.
2. The shell extraction method for a three-dimensional model according to claim 1, wherein said obtaining a depth map of the model comprises:
placing a shooting tool at a plurality of positions on a spherical surface centered on the model's center point with a fixed-length radius;
and shooting the model with the shooting tool at each of the positions to obtain depth maps of the model.
3. The shell extraction method for a three-dimensional model according to claim 2, wherein the position of the shooting tool is calculated according to the following formula:
position of the shooting tool = viewpoint position + viewDirection × R
where the viewpoint position is the position of the model's center point, R is the fixed-length sphere radius, and viewDirection is the direction vector of the shooting tool on the spherical surface.
4. The shell extraction method for a three-dimensional model according to claim 1, wherein the constructing the triangle mesh from the space points comprises:
reducing the number of the space points, and constructing the triangle mesh from the space points obtained after the reduction.
5. The shell extraction method for a three-dimensional model according to claim 4, wherein said reducing the number of the space points comprises:
obtaining a bounding box of the model;
dividing the bounding box into L equal-sized cubes, where L is the width of the depth map;
and for each cube containing a plurality of points, calculating their mean position and replacing those points with it.
6. The shell extraction method for a three-dimensional model according to claim 1, wherein the model shooting position obtained from the normal direction of each triangle of the mesh is calculated according to the following formula:
shooting position = model center point position + a × H;
where H is a fixed length and a is the direction vector of the shooting tool along the normal of the triangle.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210846386.2A (CN114926605B) | 2022-07-19 | 2022-07-19 | Shell extraction method of three-dimensional model |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN114926605A | 2022-08-19 |
| CN114926605B | 2022-09-30 |
Family ID: 82815680
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210846386.2A (CN114926605B, Active) | Shell extraction method of three-dimensional model | 2022-07-19 | 2022-07-19 |
Country Status (1)
| Country | Link |
|---|---|
| CN | CN114926605B |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2000339499A | 1999-05-27 | 2000-12-08 | Mitsubishi Electric Corp | Texture mapping and texture mosaic processor |
| WO2012132237A1 | 2011-03-31 | 2012-10-04 | Panasonic Corporation | Image drawing device for drawing stereoscopic image, image drawing method, and image drawing program |
| CA3004241A1 | 2015-11-11 | 2017-05-18 | Sony Corporation | Encoding apparatus and encoding method, decoding apparatus and decoding method |
| CN108230439B | 2017-12-28 | 2021-09-03 | Suzhou Huizhu Information Technology Co., Ltd. | Web-end three-dimensional model lightweight method, electronic equipment and storage medium |
| CN113544746A | 2019-03-11 | 2021-10-22 | Sony Group Corporation | Image processing apparatus, image processing method, and program |
| CN110335342B | 2019-06-12 | 2020-12-08 | Tsinghua University | Real-time hand model generation method for immersive simulator |
| CN113112608A | 2021-04-20 | 2021-07-13 | Xiamen Huiliweiye Technology Co., Ltd. | Method for automatically establishing three-dimensional model from object graph |
Also Published As
| Publication number | Publication date |
|---|---|
| CN114926605A | 2022-08-19 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
|  | PB01 | Publication |  |
|  | SE01 | Entry into force of request for substantive examination |  |
|  | GR01 | Patent grant |  |
|  | CP01 | Change in the name or title of a patent holder | Address after: 102600 608, floor 6, building 1, courtyard 15, Xinya street, Daxing District, Beijing; Patentee after: Beijing Feidu Technology Co.,Ltd. Address before: 102600 608, floor 6, building 1, courtyard 15, Xinya street, Daxing District, Beijing; Patentee before: Beijing Feidu Technology Co.,Ltd. |