CN114119924A - Three-dimensional model fuzzy texture feature saliency method based on conditional generative adversarial networks - Google Patents
Three-dimensional model fuzzy texture feature saliency method based on conditional generative adversarial networks
- Publication number
- CN114119924A (application No. CN202111440077.7A)
- Authority
- CN
- China
- Prior art keywords
- texture
- point cloud
- fuzzy
- model
- dimensional
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/04—Indexing scheme for image data processing or generation, in general involving 3D image data
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Mathematical Physics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a three-dimensional model fuzzy texture feature saliency method based on conditional generative adversarial networks, which comprises the following steps: 1) generating the shape and texture of a three-dimensional geometric model to obtain its point cloud features; 2) extracting the fuzzy boundary point cloud; 3) making the features of fuzzy boundaries salient with a conditional generative adversarial network. The method selects fuzzy bounding boxes by multi-scale voxelization, maps textures produced by a conditional generative adversarial network onto the extracted voxel blocks, and embeds those blocks back into the three-dimensional geometric model. By segmenting the model into multi-scale voxels, global texture refinement is reduced to local texture optimization, which lowers the computational cost while making the texture of the whole geometric model more vivid.
Description
Technical Field
The invention relates to the field of three-dimensional geometric model reconstruction, and in particular to a three-dimensional model fuzzy texture feature saliency method based on conditional generative adversarial networks.
Background
Existing three-dimensional geometric model reconstruction usually involves a model fidelity step: the reconstructed three-dimensional geometric model carries only shape information, so after reconstruction, texture information is typically added on top of the geometric model to give the whole model a more realistic visual appearance.
In 2019, Wang et al. of the University of the Chinese Academy of Sciences proposed re-identification supervised texture generation (Wang J, Zhong Y, Li Y, et al. Re-Identification Supervised Texture Generation [C] // 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2019), which generates human textures under the supervision of person re-identification. A composite image is rendered with the texture extracted from the input, and the similarity between the rendered image and the input image is maximized using a re-identification network as a perceptual metric. This achieves texture generation from a single image at higher quality than other available methods. Its disadvantage is that the boundary information of the generated model is not processed, so the generated texture boundaries can be blurred.
In 2021, a texture generation method, apparatus, device and storage medium (application publication No. CN112950739A) was proposed by a Shenzhen company. It acquires a first-view texture of a target object and feeds it to a pre-trained texture generation network to obtain a second-view texture, which relaxes the view-acquisition requirements for the target object, reduces the cost of texture generation, and improves its accuracy and realism. Its disadvantage is that texture is generated from multiple views, part of the boundary texture features are lost during generation, and the problem of blurred texture is not solved.
Also in 2021, a three-dimensional model texture generation method, device, equipment and storage medium (application publication No. CN113223149A) was proposed by an aerial survey and remote sensing research institute in Xi'an. By screening candidate occluding model surfaces from the surfaces of models other than a first model, determining candidate images for the target model surface, selecting the optimal image from the candidates according to the type of the target model surface, and generating the texture of the target model surface from that optimal image, the realism of three-dimensional model texture generation is improved. Its disadvantage is that, because the texture is generated from a single optimal image, the generated texture retains only the original features.
In 2021, researchers at Zhejiang University proposed a texture fusion method for real-time three-dimensional reconstruction with an RGB-D camera (grant No. CN110827397B). It compares the confidence weight in the adaptive weight field of the real-time frame with the latest confidence weight of the reference point cloud, and selects one of three operations (replacement, fusion, or retention) to update the texture, realizing texture fusion for three-dimensional reconstruction. By extracting high-quality data it effectively reduces blur in texture fusion and produces clear texture reconstruction results, and by embedding the texture reconstruction into an RGB-D reconstruction framework it markedly improves texture reconstruction accuracy at low computational cost. Its disadvantage is that the texture is updated through the fixed replace/fuse/retain operations, the result is limited by the sharpness of the input data, and blurred boundary textures cannot be fully handled.
In summary, existing three-dimensional reconstruction methods share the defect that the generated texture boundaries exhibit a certain degree of blurring, which reduces the fidelity of the three-dimensional reconstruction.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a three-dimensional model fuzzy texture feature saliency method based on conditional generative adversarial networks, which can effectively reduce the blurring of texture boundaries after three-dimensional reconstruction.
The purpose of the invention is realized as follows: a three-dimensional model fuzzy texture feature saliency method based on conditional generative adversarial networks comprises the following steps:
step 1), generating the shape and texture of a three-dimensional geometric model to obtain point cloud characteristics of the three-dimensional geometric model;
step 2), extracting a fuzzy boundary point cloud;
step 3), making the features of fuzzy boundaries salient based on a conditional generative adversarial network.
As a further limitation of the present invention, step 1) specifically comprises: using HMR with an iterative three-dimensional regression module to generate shape, pose and translation parameters for SMPL, and reconstructing the shape of the input picture through SMPL; then constructing a U-Net to generate a texture that is mapped onto the reconstructed model mesh, obtaining the point cloud features of the three-dimensional geometric model.
As a further limitation of the present invention, the step 2) specifically includes:
step 2.1) region-growing segmentation based on pixel values: sort the point cloud obtained in step 1) by RGB pixel value; select the point with the lowest RGB pixel value as the initial seed, compare each neighboring point with the seed, and judge whether their color distance is small enough; merge the points in the seed's neighborhood whose properties are the same as or similar to the seed's into the seed's region, then let the newly merged points act as seeds and keep growing outward until all qualifying points are included, yielding a grown region; subdivide the grown region further and define the point cloud at the blurred texture boundary as a fuzzy boundary region;
the color distance is as follows:
D(X1, X2) = sqrt((R1 - R2)^2 + (G1 - G2)^2 + (B1 - B2)^2)

where D(X1, X2) is the color distance between X1 and X2, and R1, G1, B1 and R2, G2, B2 are the pixel values of points X1 and X2 in the R, G and B channels respectively;
step 2.2) framing fuzzy boundaries based on multi-scale voxelization: construct a cube containing all point clouds of the three-dimensional geometric model established in step 1); divide the whole point cloud space voxel by voxel under a multi-scale voxelized representation, and judge whether the cube represented by each voxel contains a fuzzy boundary region obtained in step 2.1); a divided cube that can completely contain such a fuzzy boundary region is defined as a valid node, otherwise it is an invalid node; valid nodes are subdivided further until no valid node remains; the minimal valid nodes that can each contain a fuzzy boundary region form a data set T, where T ∈ (A1, A2, …, An).
As a further limitation of the present invention, the step 3) specifically includes:
step 3.1) generating textures with a conditional generative adversarial network;
Input the texture maps of the existing voxel blocks into the discriminator model as a set of real samples for judging the generator's texture output, add a sharp-texture-boundary label to the discriminator, and apply batch normalization. Convert the initial noise data into a two-dimensional vector as the input of the generator; pass it through transposed convolutional layers with padding and activation, appending a BN layer and a ReLU layer to the output of every transposed convolutional layer; normalize the pixels of the output image to produce a texture; map this texture onto a voxel block to obtain a generated voxel block and feed it to the discriminator; iterate until the discriminator reaches a Nash equilibrium. The loss function of the whole model is:
min_G max_D V(D, G) = E_{x~p_data(x)}[log D(x|y)] + E_{z~p_z(z)}[log(1 - D(G(z|y)))]

where G denotes the generator, D the discriminator, z the generator's input noise variable, y the sharp texture boundary label, D(x) the output of the discriminator, and G(z) the output of the generator for input noise z. With G fixed, solving for D yields the optimal discriminator D*; with D fixed, solving for G yields the optimal generator G*. Finally a voxel block set T2 with realistic texture boundaries is obtained, where T2 ∈ (A'1, A'2, …, A'n);
Step 3.2) embedding the extracted voxel blocks into the three-dimensional geometric model: traverse the three-dimensional geometric model obtained in step 1) and replace the blocks A1, A2, …, An of the voxel block data set T segmented in step 2.2) with the blocks A'1, A'2, …, A'n of the voxel block set T2 obtained in step 3.1), finally realizing a realistic three-dimensional geometric model.
Compared with the prior art, the above technical scheme has the following beneficial effects. First, HMR uses an iterative three-dimensional regression module to generate shape, pose and translation parameters for SMPL, and the human body shape is reconstructed from the input picture through SMPL. Second, a U-Net is constructed to generate a texture that is mapped onto the reconstructed model mesh. Then the spatial topology information of the point cloud is obtained, local boundaries are framed with multi-scale voxels based on the RGB channels, textures are generated by a conditional generative adversarial network and mapped, and the extracted voxel blocks are embedded into the three-dimensional geometric model. High-resolution texture generation is achieved by combining local discrimination with overall discrimination, which reduces the computation time of texture generation, improves the fidelity of the texture boundary, and realizes unsupervised saliency of fuzzy boundary features.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The three-dimensional model fuzzy texture feature saliency method based on conditional generative adversarial networks shown in FIG. 1 comprises the following steps:
step 1), generating the shape and texture of a three-dimensional geometric model to obtain point cloud characteristics of the three-dimensional geometric model;
First, the input image is fed to HMR (Human Mesh Recovery), which uses an iterative three-dimensional regression module to generate shape, pose and translation parameters for SMPL (Skinned Multi-Person Linear model) and predicts the SMPL parameters; the estimated 3D mesh M of the input image is expressed as M = M(β, θ, γ). Then a texture is generated with U-Net and rendered as a tensor with OpenDR; the generated texture is mapped onto the 3D mesh, pixels are assigned to the surface of the three-dimensional geometric model using OpenDR's rendering function R(M, t) and the UV correspondence provided by SMPL, and gaps are filled by linear interpolation, finally yielding the point cloud features of the three-dimensional geometric model.
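The gap-filling step can be illustrated with a small sketch (an illustrative assumption, not the patent's OpenDR-based implementation): after pixels are assigned to texels via the UV correspondence, texels that received no pixel are filled by linear interpolation from the nearest assigned texels.

```python
import numpy as np

def fill_gaps_1d(row):
    """Fill missing texels (NaN) along one texture row by linear
    interpolation between the nearest assigned texels."""
    row = np.asarray(row, dtype=float)
    idx = np.arange(row.size)
    known = ~np.isnan(row)  # texels that already received a pixel
    return np.interp(idx, idx[known], row[known])
```

For a full UV map the same interpolation would be applied per row and column or with a 2D scheme; this one-dimensional version only demonstrates the principle.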
Step 2), extracting a fuzzy boundary point cloud;
step 2.1) region-growing segmentation based on pixel values: sort the point cloud obtained in step 1) by RGB pixel value; select the point with the lowest RGB pixel value as the initial seed, compare each neighboring point with the seed, and judge whether their color distance is small enough; merge the points in the seed's neighborhood whose properties are the same as or similar to the seed's into the seed's region, then let the newly merged points act as seeds and keep growing outward until all qualifying points are included, yielding a grown region; subdivide the grown region further and define the point cloud at the blurred texture boundary as a fuzzy boundary region;
the color distance is as follows:
D(X1, X2) = sqrt((R1 - R2)^2 + (G1 - G2)^2 + (B1 - B2)^2)

where D(X1, X2) is the color distance between X1 and X2, and R1, G1, B1 and R2, G2, B2 are the pixel values of points X1 and X2 in the R, G and B channels respectively;
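As a rough sketch (an illustrative assumption, not the patent's exact procedure), the color distance and the seed-growing merge test of step 2.1) can be expressed as follows; the neighbor lists stand in for whatever spatial adjacency the point cloud provides:

```python
import numpy as np

def color_distance(p1, p2):
    """Euclidean distance between two points in RGB space."""
    (r1, g1, b1), (r2, g2, b2) = p1, p2
    return ((r1 - r2) ** 2 + (g1 - g2) ** 2 + (b1 - b2) ** 2) ** 0.5

def region_grow(colors, neighbors, threshold):
    """Grow a region from the darkest point; merge neighbors whose
    color distance to the current seed is below `threshold`.
    `colors`: (N, 3) RGB array; `neighbors`: list of index lists."""
    seed = int(np.argmin(colors.sum(axis=1)))  # lowest RGB pixel value
    region, frontier = {seed}, [seed]
    while frontier:
        s = frontier.pop()
        for n in neighbors[s]:
            if n not in region and color_distance(colors[s], colors[n]) < threshold:
                region.add(n)       # merge into the seed's region
                frontier.append(n)  # the merged point grows further
    return region
```

Points left outside the grown region at the blurred texture boundary would then be collected as the fuzzy boundary region.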
step 2.2) framing fuzzy boundaries based on multi-scale voxelization: construct a cube containing all point clouds of the three-dimensional geometric model established in step 1); divide the whole point cloud space voxel by voxel under a multi-scale voxelized representation, and judge whether the cube represented by each voxel contains a fuzzy boundary region obtained in step 2.1); a divided cube that can completely contain such a fuzzy boundary region is defined as a valid node, otherwise it is an invalid node; valid nodes are subdivided further until no valid node remains; the minimal valid nodes that can each contain a fuzzy boundary region form a data set T, where T ∈ (A1, A2, …, An).
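The multi-scale voxel framing of step 2.2) behaves like an octree search for the smallest cube that still fully contains a fuzzy boundary region. The following sketch (hypothetical helper names; a simplification of the patent's procedure) recursively splits a cube into eight octants and descends into the octant that remains a valid node:

```python
def contains(origin, size, pts):
    """True if every point of `pts` lies inside the axis-aligned cube."""
    ox, oy, oz = origin
    return all(ox <= x < ox + size and oy <= y < oy + size and oz <= z < oz + size
               for x, y, z in pts)

def min_valid_node(origin, size, region, min_size=1.0):
    """Return the smallest cube (origin, size) that still fully contains
    `region`, found by recursive octant subdivision."""
    if size / 2 >= min_size:
        half = size / 2
        ox, oy, oz = origin
        for dx in (0, half):
            for dy in (0, half):
                for dz in (0, half):
                    child = (ox + dx, oy + dy, oz + dz)
                    if contains(child, half, region):  # child is still valid
                        return min_valid_node(child, half, region, min_size)
    # no child fully contains the region: this node is the minimal valid node
    return origin, size
```

Applying this to each fuzzy boundary region yields the minimal valid nodes that make up the data set T.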
Step 3), making the features of fuzzy boundaries salient based on a conditional generative adversarial network;
step 3.1) generating textures with a conditional generative adversarial network;
Input the texture maps of the existing voxel blocks into the discriminator model as a set of real samples for judging the generator's texture output, add a sharp-texture-boundary label to the discriminator, and apply batch normalization. Convert the initial noise data into a two-dimensional vector as the input of the generator; pass it through transposed convolutional layers with padding and activation, appending a BN layer and a ReLU layer to the output of every transposed convolutional layer; normalize the pixels of the output image to produce a texture; map this texture onto a voxel block to obtain a generated voxel block and feed it to the discriminator. Keeping the generator fixed, train the discriminator to maximize the objective; keeping the discriminator fixed, train the generator to minimize it; repeat until the discriminator reaches a Nash equilibrium. The loss function of the whole model is:
min_G max_D V(D, G) = E_{x~p_data(x)}[log D(x|y)] + E_{z~p_z(z)}[log(1 - D(G(z|y)))]

where G denotes the generator, D the discriminator, z the generator's input noise variable, y the sharp texture boundary label, D(x) the output of the discriminator, and G(z) the output of the generator for input noise z. With G fixed, solving for D yields the optimal discriminator D*; with D fixed, solving for G yields the optimal generator G*. Finally a voxel block set T2 with realistic texture boundaries is obtained, where T2 ∈ (A'1, A'2, …, A'n);
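Numerically, the conditional GAN objective can be checked with a toy sketch (illustrative only; real training would use a deep-learning framework): at the Nash equilibrium the discriminator outputs 0.5 on both real and generated textures, and the value settles at -log 4 ≈ -1.386.

```python
import math

def cgan_value(d_real, d_fake):
    """V(D, G) = E[log D(x|y)] + E[log(1 - D(G(z|y)))], estimated from
    discriminator scores in (0, 1) for real and generated textures."""
    term_real = sum(math.log(d) for d in d_real) / len(d_real)
    term_fake = sum(math.log(1.0 - d) for d in d_fake) / len(d_fake)
    return term_real + term_fake
```

The discriminator plays the max side of this value while the generator plays the min side, which is the alternation described above.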
Step 3.2) embedding the extracted voxel blocks into the three-dimensional geometric model: traverse the three-dimensional geometric model obtained in step 1) and replace the blocks A1, A2, …, An of the voxel block data set T segmented in step 2.2) with the blocks A'1, A'2, …, A'n of the voxel block set T2 obtained in step 3.1), finally realizing a realistic three-dimensional geometric model.
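Step 3.2) amounts to a lookup-and-replace over the model's voxel blocks. A minimal sketch (hypothetical block identifiers, assuming T and T2 are aligned index for index):

```python
def embed_blocks(model_blocks, T, T2):
    """Replace each blurry-boundary block A_i (in set T) that occurs in
    the traversed model with its regenerated counterpart A'_i (in T2)."""
    replacement = dict(zip(T, T2))  # A_i -> A'_i
    return [replacement.get(block, block) for block in model_blocks]
```

Blocks not listed in T are kept unchanged, so only the framed fuzzy boundary regions are updated.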
According to the method, HMR uses an iterative three-dimensional regression module to generate shape, pose and translation parameters for SMPL, and the human body shape is reconstructed from the input picture through SMPL; a U-Net is constructed to generate a texture mapped onto the reconstructed model mesh; then the spatial topology information of the point cloud is obtained, local boundaries are framed with multi-scale voxels based on the RGB channels, and finally textures are generated by a conditional generative adversarial network and mapped. High-resolution texture generation is achieved through local generation and discrimination, which reduces the computation time of texture generation, improves the fidelity of the texture boundary, and realizes unsupervised high-quality texture generation.
The present invention is not limited to the above embodiments. Based on the technical solutions disclosed herein, those skilled in the art can substitute or modify some technical features without creative effort according to the disclosed technical content, and such substitutions and modifications all fall within the protection scope of the present invention.
Claims (4)
1. A three-dimensional model fuzzy texture feature saliency method based on conditional generative adversarial networks, characterized by comprising the following steps:
step 1), generating the shape and texture of a three-dimensional geometric model to obtain point cloud characteristics of the three-dimensional geometric model;
step 2), extracting a fuzzy boundary point cloud;
step 3), making the features of fuzzy boundaries salient based on a conditional generative adversarial network.
2. The three-dimensional model fuzzy texture feature saliency method based on conditional generative adversarial networks according to claim 1, characterized in that step 1) specifically comprises: using HMR with an iterative three-dimensional regression module to generate shape, pose and translation parameters for SMPL, and reconstructing the shape of the input picture through SMPL; then constructing a U-Net to generate a texture that is mapped onto the reconstructed model mesh, obtaining the point cloud features of the three-dimensional geometric model.
3. The three-dimensional model fuzzy texture feature saliency method based on conditional generative adversarial networks according to claim 1, characterized in that step 2) specifically comprises:
step 2.1) region-growing segmentation based on pixel values: sort the point cloud obtained in step 1) by RGB pixel value; select the point with the lowest RGB pixel value as the initial seed, compare each neighboring point with the seed, and judge whether their color distance is small enough; merge the points in the seed's neighborhood whose properties are the same as or similar to the seed's into the seed's region, then let the newly merged points act as seeds and keep growing outward until all qualifying points are included, yielding a grown region; subdivide the grown region further and define the point cloud at the blurred texture boundary as a fuzzy boundary region;
the color distance is as follows:
D(X1, X2) = sqrt((R1 - R2)^2 + (G1 - G2)^2 + (B1 - B2)^2)

where D(X1, X2) is the color distance between X1 and X2, and R1, G1, B1 and R2, G2, B2 are the pixel values of points X1 and X2 in the R, G and B channels respectively;
step 2.2) framing fuzzy boundaries based on multi-scale voxelization: construct a cube containing all point clouds of the three-dimensional geometric model established in step 1); divide the whole point cloud space voxel by voxel under a multi-scale voxelized representation, and judge whether the cube represented by each voxel contains a fuzzy boundary region obtained in step 2.1); a divided cube that can completely contain such a fuzzy boundary region is defined as a valid node, otherwise it is an invalid node; valid nodes are subdivided further until no valid node remains; the minimal valid nodes that can each contain a fuzzy boundary region form a data set T, where T ∈ (A1, A2, …, An).
4. The three-dimensional model fuzzy texture feature saliency method based on conditional generative adversarial networks according to claim 3, characterized in that step 3) specifically comprises:
step 3.1) generating textures with a conditional generative adversarial network;
Input the texture maps of the existing voxel blocks into the discriminator model as a set of real samples for judging the generator's texture output, add a sharp-texture-boundary label to the discriminator, and apply batch normalization. Convert the initial noise data into a two-dimensional vector as the input of the generator; pass it through transposed convolutional layers with padding and activation, appending a BN layer and a ReLU layer to the output of every transposed convolutional layer; normalize the pixels of the output image to produce a texture; map this texture onto a voxel block to obtain a generated voxel block and feed it to the discriminator; iterate until the discriminator reaches a Nash equilibrium. The loss function of the whole model is:
min_G max_D V(D, G) = E_{x~p_data(x)}[log D(x|y)] + E_{z~p_z(z)}[log(1 - D(G(z|y)))]

where G denotes the generator, D the discriminator, z the generator's input noise variable, y the sharp texture boundary label, D(x) the output of the discriminator, and G(z) the output of the generator for input noise z. With G fixed, solving for D yields the optimal discriminator D*; with D fixed, solving for G yields the optimal generator G*. Finally a voxel block set T2 with realistic texture boundaries is obtained, where T2 ∈ (A'1, A'2, …, A'n);
Step 3.2) embedding the extracted voxel blocks into the three-dimensional geometric model: traverse the three-dimensional geometric model obtained in step 1) and replace the blocks A1, A2, …, An of the voxel block data set T segmented in step 2.2) with the blocks A'1, A'2, …, A'n of the voxel block set T2 obtained in step 3.1), finally realizing a realistic three-dimensional geometric model.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202111440077.7A | 2021-11-30 | 2021-11-30 | Three-dimensional model fuzzy texture feature saliency method based on conditional generative adversarial networks |
Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| CN114119924A | 2022-03-01 |
Family

ID=80368281

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| CN202111440077.7A (pending) | Three-dimensional model fuzzy texture feature saliency method based on conditional generative adversarial networks | 2021-11-30 | 2021-11-30 |

Country Status (1)

| Country | Link |
| --- | --- |
| CN | CN114119924A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115861392A (en) * | 2023-01-13 | 2023-03-28 | 无锡艾米特智能医疗科技有限公司 | Soft tissue registration method based on image data processing |
CN115861392B (en) * | 2023-01-13 | 2023-05-26 | 无锡艾米特智能医疗科技有限公司 | Soft tissue registration method based on image data processing |
CN117252991A (en) * | 2023-10-25 | 2023-12-19 | 北京华科软科技有限公司 | Fusion method of voxel construction and boundary representation and three-dimensional graphic engine |
CN117252991B (en) * | 2023-10-25 | 2024-03-29 | 北京华科软科技有限公司 | Fusion method of voxel construction and boundary representation and three-dimensional graphic engine |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||