CN113256543A - Point cloud completion method based on graph convolution neural network model

Point cloud completion method based on graph convolution neural network model

Info

Publication number
CN113256543A
Authority
CN
China
Prior art keywords
point cloud
neural network
network model
viewpoint
method based
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110410965.8A
Other languages
Chinese (zh)
Inventor
邹艳妮
张怡睿
徐嘉伯
刘小平
刘捷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanchang University
Original Assignee
Nanchang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanchang University
Priority to CN202110410965.8A
Publication of CN113256543A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Abstract

The invention belongs to the technical field of point cloud completion and relates to a point cloud completion method based on a graph convolution neural network model. The method first preprocesses the data set with a viewpoint-deletion operation. It then introduces a local feature fusion operation that finds correspondences between points in a local region through a matrix learned within the neural network, thereby aggregating local point cloud features so that feature extraction considers each point's own information together with that of its neighboring points. Finally, a G-Net structure matched to the local feature fusion operation is designed, consisting of an encoder containing the local feature fusion operation and a multi-stage-prediction decoder. This point cloud completion method remedies the insufficient extraction of local features in existing methods: the encoder extracts more comprehensive semantic information and helps the decoder focus on local details, improving the accuracy of point cloud completion and achieving high-detail results.

Description

Point cloud completion method based on graph convolution neural network model
Technical Field
The invention belongs to the field of point cloud completion, and particularly relates to a point cloud completion method based on a graph convolution neural network model.
Background
3D vision remains an active research area. Among the various ways of describing 3D objects, point clouds are widely used because of their simple storage structure and small storage footprint.
Point cloud data can be obtained in several ways; one is to scan an object's surface with a scanner. However, scanned point clouds are often incomplete because of human factors or the influence of lighting and environment. Point cloud completion aims to repair such incomplete data.
Applying deep learning to point cloud data is a rapidly growing research direction. For the completion task, a neural network can be trained to take an incomplete point cloud as input and generate a complete one.
At present, most deep-learning-based network models perform poorly on point cloud completion because they consider only global features and ignore the local features of the point cloud, so they can generate only structures that are broadly correct but locally inaccurate.
It is therefore necessary to address local feature extraction and fusion.
Disclosure of Invention
To overcome the shortcomings of local feature extraction in the prior art, the invention provides a point cloud completion method based on a graph convolution neural network model. It introduces a local feature fusion operation grounded in graph convolution theory and designs a network structure matched to that operation, effectively addressing the problem of inaccurate local prediction. Experiments show that the method performs well both visually and on quantitative indexes.
The technical scheme adopted by the invention is as follows:
a point cloud completion method based on a graph convolution neural network model comprises the following steps:
S1, data set acquisition: acquire the public ShapeNet data set and construct the point cloud data set required for model training;
S2, data processing: delete part of the points from each complete point cloud using the viewpoint-deletion operation to construct incomplete point clouds;
S3, neural network model construction: based on the graph convolution idea, introduce a local feature fusion operation that improves point cloud convolution, and build the G-Net network model;
S4, model training: train the network with an Adam optimizer to reduce the loss function and improve the completion quality;
S5, model saving: save the model once its loss function stabilizes and no longer declines.
In step S1, the public ShapeNet data set contains 16 categories of point cloud data. Every individual point cloud file, across all categories, consists of N points given by their x, y and z coordinates in a three-dimensional coordinate system; the value of N differs from file to file.
In step S2, the viewpoint-deletion operation consists of viewpoint initialization, viewpoint selection and adjacent-point deletion.
Viewpoint initialization: the 4 points [1,0,0], [0,0,1], [1,0,1] and [-1,0,0] are taken as fixed viewpoints, where the three numbers in each bracket are the x, y and z coordinates in the three-dimensional coordinate system. Viewpoint selection: to simulate incomplete point clouds with different missing regions, points at different positions are deleted from the complete cloud, with the 4 viewpoints used in turn as center points. Adjacent-point deletion: the distances from all points to the current viewpoint are computed, and the 512 points closest to the viewpoint are deleted.
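The viewpoint-deletion preprocessing can be summarized in a short sketch. The following Python code is a minimal illustration under assumed array shapes; the function name `delete_near_viewpoint` and the example cloud are hypothetical, while the four fixed viewpoints and the deletion of the 512 nearest points come from the description above.

```python
import numpy as np

# The four fixed viewpoints from the description; each row is [x, y, z].
VIEWPOINTS = np.array([[1, 0, 0],
                       [0, 0, 1],
                       [1, 0, 1],
                       [-1, 0, 0]], dtype=np.float32)

def delete_near_viewpoint(points: np.ndarray, viewpoint: np.ndarray,
                          n_delete: int = 512) -> np.ndarray:
    """Delete the n_delete points closest to the viewpoint.

    points: (N, 3) array holding one complete point cloud.
    Returns the remaining (N - n_delete, 3) incomplete point cloud.
    """
    dists = np.linalg.norm(points - viewpoint, axis=1)  # Euclidean distance to the viewpoint
    keep = np.argsort(dists)[n_delete:]                 # indices of all but the nearest points
    return points[keep]

# Example: one complete cloud yields four incomplete clouds, each with a
# different missing region, by using the four viewpoints in turn.
complete = np.random.rand(2048, 3).astype(np.float32)   # placeholder cloud
incomplete_clouds = [delete_near_viewpoint(complete, vp) for vp in VIEWPOINTS]
```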
In step S3, the local feature fusion operation is formulated as:

$$f_i = \bigoplus_{j \in K_i} A_{ij} \int(p_j), \qquad i = 1, 2, \ldots, n$$

where $f_i$ denotes the extracted feature of point $i$, $p_j$ denotes the coordinate vector of point $j$, $\int(\cdot)$ denotes a convolution operation, $\bigoplus$ denotes the aggregation operation, $A$ is a matrix whose values are learned by the neural network, $n$ is the total number of points, and $K_i$ is the set consisting of point $i$ and its surrounding points. Specifically, the convolution converts a point's 3-dimensional x, y, z coordinates into a higher-dimensional representation, and the aggregation operation splices high-dimensional features together; for example, aggregating a dimension-$a$ feature with a dimension-$b$ feature yields a dimension-$(a+b)$ feature.
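As a rough illustration of how such an operation might look, the following PyTorch sketch implements a point-wise convolution, a learnable matrix A, and concatenation-based aggregation. It is an assumption-laden reading of the formula, not the patented implementation: the class name, layer widths and the dense n-by-n parameterization of A are all placeholders.

```python
import torch
import torch.nn as nn

class LocalFeatureFusion(nn.Module):
    """Hypothetical sketch of the local feature fusion operation."""

    def __init__(self, in_dim: int, out_dim: int, n_points: int):
        super().__init__()
        # Point-wise convolution: lifts each point's features to a higher dimension.
        self.conv = nn.Conv1d(in_dim, out_dim, kernel_size=1)
        # Learnable matrix A: soft correspondences between points. A trained A can
        # encode the neighbourhood sets K_i by driving non-neighbour weights to zero.
        self.A = nn.Parameter(torch.randn(n_points, n_points) * 0.01)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, in_dim, N) per-point features; initially the x, y, z coordinates.
        h = self.conv(x)                          # (B, out_dim, N) lifted features
        neighbours = torch.matmul(h, self.A)      # (B, out_dim, N) A-weighted neighbour features
        # Aggregation by splicing: a dim-a feature and a dim-b feature give dim a+b.
        return torch.cat([h, neighbours], dim=1)  # (B, 2 * out_dim, N)
```

Doubling the channel count at every layer is a natural consequence of concatenation-based aggregation and is carried over into the G-Net sketch below.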
In step S3, the G-Net network model consists of an encoder part and a decoder part. The goal of the encoder is to extract coordinate features from the input incomplete point cloud. Its structure is as follows: the input point cloud coordinates first pass through a convolutional layer for preliminary feature extraction, then through a stack of three local feature fusion operations that aggregate the point cloud's local features, and finally through a max-pooling layer that produces a high-dimensional feature vector as the encoder's output. The goal of the decoder is to generate the point cloud coordinates of the missing part from the high-dimensional features extracted by the encoder. Its structure is as follows: the input high-dimensional feature first passes through a linear layer to generate a sparse point cloud prediction; the sparse point cloud's features are then extracted by three convolutional layers; next, the high-dimensional feature, the sparse point cloud coordinates, the sparse point cloud features and the coordinates of a fixed-size 2D grid (a square centered on the origin with length and width 1) are aggregated; finally, the complete point cloud is predicted through a convolutional layer and a linear layer.
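Putting the pieces together, a structural sketch of G-Net might look as follows. It reuses the LocalFeatureFusion module from the previous sketch; only the topology (one convolution, three stacked fusion operations and max pooling in the encoder; a linear layer, three convolutions, aggregation with a 2D grid, and a final convolution-plus-linear stage in the decoder) follows the text, while every channel width, the coarse point count and the grid resolution are illustrative guesses.

```python
import torch
import torch.nn as nn

class GNetEncoder(nn.Module):
    """Conv layer -> three stacked fusion operations -> max pooling."""

    def __init__(self, n_points: int = 2048):
        super().__init__()
        self.conv_in = nn.Conv1d(3, 64, 1)                   # preliminary feature extraction
        self.fuse1 = LocalFeatureFusion(64, 64, n_points)    # 64  -> 128 channels
        self.fuse2 = LocalFeatureFusion(128, 128, n_points)  # 128 -> 256 channels
        self.fuse3 = LocalFeatureFusion(256, 256, n_points)  # 256 -> 512 channels

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (B, 3, N) incomplete point cloud coordinates.
        h = self.fuse3(self.fuse2(self.fuse1(self.conv_in(xyz))))
        return h.max(dim=2).values                # (B, 512) global feature from max pooling

class GNetDecoder(nn.Module):
    """Linear -> sparse cloud; three convs; aggregation with a 2D grid; conv + linear."""

    def __init__(self, n_coarse: int = 512, grid: int = 4):
        super().__init__()
        self.n_coarse, self.grid = n_coarse, grid
        self.to_coarse = nn.Linear(512, n_coarse * 3)        # sparse point cloud prediction
        self.feat = nn.Sequential(                           # three convolutional layers
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 128, 1))
        self.refine = nn.Sequential(nn.Conv1d(512 + 3 + 128 + 2, 256, 1), nn.ReLU())
        self.out = nn.Conv1d(256, 3, 1)          # 1x1 conv acts as the per-point linear layer

    def forward(self, g: torch.Tensor):
        # g: (B, 512) high-dimensional feature from the encoder.
        B, k = g.size(0), self.grid ** 2
        coarse = self.to_coarse(g).view(B, 3, self.n_coarse)  # (B, 3, n_coarse)
        f = self.feat(coarse)                                 # (B, 128, n_coarse)
        n_fine = self.n_coarse * k
        # Fixed 2D grid: a unit square centred on the origin, tiled per coarse point.
        lin = torch.linspace(-0.5, 0.5, self.grid, device=g.device)
        grid = torch.stack(torch.meshgrid(lin, lin, indexing="ij"), 0).reshape(2, k)
        grid = grid.repeat(1, self.n_coarse).unsqueeze(0).expand(B, -1, -1)  # (B, 2, n_fine)
        # Aggregate global feature, coarse coordinates, coarse features and grid.
        feats = torch.cat([g.unsqueeze(2).expand(-1, -1, n_fine),
                           coarse.repeat_interleave(k, dim=2),
                           f.repeat_interleave(k, dim=2),
                           grid], dim=1)                      # (B, 645, n_fine)
        fine = self.out(self.refine(feats))                   # (B, 3, n_fine) complete cloud
        return coarse, fine
```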
In step S4, the loss function is the Chamfer distance:

$$CD(S_1, S_2) = \frac{1}{|S_1|} \sum_{x \in S_1} \min_{y \in S_2} \lVert x - y \rVert_2 + \frac{1}{|S_2|} \sum_{y \in S_2} \min_{x \in S_1} \lVert y - x \rVert_2$$

where $CD(S_1, S_2)$ is the average closest Euclidean distance between the predicted point cloud set $S_1$ and the ground-truth point cloud set $S_2$, and $x$ and $y$ each denote a single point, given by its x, y and z coordinates, in the respective set.
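A brute-force PyTorch version of this loss is straightforward; the sketch below assumes batched (B, N, 3) point sets and uses torch.cdist for the pairwise Euclidean distances. The function name is hypothetical.

```python
import torch

def chamfer_distance(s1: torch.Tensor, s2: torch.Tensor) -> torch.Tensor:
    """Average closest Euclidean distance between two batched point sets.

    s1: (B, N, 3) predicted point clouds; s2: (B, M, 3) ground-truth clouds.
    """
    d = torch.cdist(s1, s2)                # (B, N, M) pairwise Euclidean distances
    # Mean nearest-neighbour distance in both directions, averaged over the batch.
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()
```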
Compared with the prior art, the invention has the following beneficial effects:
the invention solves the problem of insufficient extraction of local features by using a graph convolution theory, and fuses the features of adjacent points and the points, so that the overall shape can be predicted, a better expression effect can be realized on local details in the process of generating a missing part, and the overall completion quality is improved.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a diagram of the G-Net structure of the present invention;
FIG. 3 is the incomplete aircraft point cloud of example 1;
FIG. 4 is the complete aircraft point cloud of example 1;
FIG. 5 is an incomplete table point cloud of example 2;
FIG. 6 is the complete table point cloud of example 2;
FIG. 7 is an incomplete chair point cloud of example 3;
fig. 8 is the complete chair point cloud of example 3.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely. The described embodiments are evidently only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art from the given embodiments without creative effort fall within the protection scope of the present invention.
As shown in FIG. 1, the point cloud completion method based on a graph convolution neural network model is trained on the ShapeNet data set, which contains 16 categories and 15,011 point clouds: 12,137 in the training set and 2,874 in the test set. Each point cloud is preprocessed with the viewpoint-deletion operation to obtain the incomplete point cloud fed to the network. The incomplete cloud is input to G-Net, whose structure is shown in FIG. 2; training uses the Chamfer distance loss function, and the best-performing weights are saved after 100 rounds of training. The detailed process comprises the following steps:
S1, data set acquisition: acquire the public ShapeNet data set and construct the point cloud data set required for model training;
S2, data processing: delete part of the points from each complete point cloud using the viewpoint-deletion operation to construct incomplete point clouds;
S3, neural network model construction: based on the graph convolution idea, introduce a local feature fusion operation that improves point cloud convolution, and build the G-Net network model;
S4, model training: train the network with an Adam optimizer to reduce the loss function and improve the completion quality (a hedged training-loop sketch follows this list);
S5, model saving: save the model once its loss function stabilizes and no longer declines.
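The following training-loop sketch ties steps S1 to S5 together. The Adam optimizer, the Chamfer loss and the 100 training rounds follow the description; the function name, learning rate, checkpoint path and the supervision of both decoder stages are assumptions. It reuses chamfer_distance from the sketch above and expects a model, such as the G-Net sketch, that returns (coarse, fine) predictions.

```python
import torch

def train_gnet(model, train_loader, epochs: int = 100, lr: float = 1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)   # Adam optimizer, per step S4
    best = float("inf")
    for epoch in range(epochs):
        running = 0.0
        for partial, complete in train_loader:          # (B, 3, N) inputs, (B, M, 3) targets
            coarse, fine = model(partial)               # sparse and dense predictions
            # Assumed supervision of both decoder stages with the Chamfer distance.
            loss = (chamfer_distance(coarse.transpose(1, 2), complete)
                    + chamfer_distance(fine.transpose(1, 2), complete))
            opt.zero_grad()
            loss.backward()
            opt.step()
            running += loss.item()
        running /= len(train_loader)
        if running < best:                              # step S5: save once the loss stops declining
            best = running
            torch.save(model.state_dict(), "g_net_best.pth")
```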
The viewpoint-deletion operation consists of viewpoint initialization, viewpoint selection and adjacent-point deletion.
Viewpoint initialization: the 4 points [1,0,0], [0,0,1], [1,0,1] and [-1,0,0] are taken as fixed viewpoints, where the three numbers in each bracket are the x, y and z coordinates in the three-dimensional coordinate system. Viewpoint selection: to simulate incomplete point clouds with different missing regions, points at different positions are deleted from the complete cloud, with the 4 viewpoints used in turn as center points. Adjacent-point deletion: the distances from all points to the current viewpoint are computed, and the 512 points closest to the viewpoint are deleted.
The local feature fusion operation is formulated as:

$$f_i = \bigoplus_{j \in K_i} A_{ij} \int(p_j), \qquad i = 1, 2, \ldots, n$$

where $f_i$ denotes the extracted feature of point $i$, $p_j$ denotes the coordinate vector of point $j$, $\int(\cdot)$ denotes a convolution operation, $\bigoplus$ denotes the aggregation operation, $A$ is a matrix whose values are learned by the neural network, $n$ is the total number of points, and $K_i$ is the set consisting of point $i$ and its surrounding points. Specifically, the convolution converts a point's 3-dimensional x, y, z coordinates into a higher-dimensional representation, and the aggregation operation splices high-dimensional features together; for example, aggregating a dimension-$a$ feature with a dimension-$b$ feature yields a dimension-$(a+b)$ feature.
The G-Net network model consists of an encoder part and a decoder part. The goal of the encoder is to extract coordinate features from the input incomplete point cloud. Its structure is as follows: the input point cloud coordinates first pass through one convolutional layer, then through a stack of three local feature fusion operations, and finally through one max-pooling layer to obtain the high-dimensional feature. The goal of the decoder is to generate the point cloud coordinates of the missing part from the high-dimensional features extracted by the encoder. Its structure is as follows: the input high-dimensional feature first passes through a linear layer to generate a sparse point cloud prediction; the sparse point cloud's features are then extracted by three convolutional layers; next, the high-dimensional feature, the sparse point cloud coordinates, the sparse point cloud features and the coordinates of a fixed-size 2D grid (a square centered on the origin with length and width 1) are aggregated; finally, the complete point cloud is generated through a convolutional layer and a linear layer.
The loss function is the Chamfer distance:

$$CD(S_1, S_2) = \frac{1}{|S_1|} \sum_{x \in S_1} \min_{y \in S_2} \lVert x - y \rVert_2 + \frac{1}{|S_2|} \sum_{y \in S_2} \min_{x \in S_1} \lVert y - x \rVert_2$$

where $CD(S_1, S_2)$ is the average closest Euclidean distance between the predicted point cloud set $S_1$ and the ground-truth point cloud set $S_2$, and $x$ and $y$ each denote a single point, given by its x, y and z coordinates, in the respective set.
Examples 1-3 below visually demonstrate the completion effect of G-Net.
Example 1:
and after the G-Net model is trained in the training set, selecting an incomplete point cloud of the airplane in the testing set as input, and outputting a complete point cloud of the airplane. An incomplete aircraft point cloud is shown in fig. 3 and a complete aircraft point cloud is shown in fig. 4.
Example 2:
and selecting an incomplete table point cloud in the test set as an input, and outputting the complete table point cloud. An incomplete table point cloud is shown in fig. 5 and a complete table point cloud is shown in fig. 6.
Example 3:
and selecting an incomplete chair point cloud in the test set as an input, and outputting an complete chair point cloud. An incomplete chair point cloud is shown in fig. 7 and a complete chair point cloud is shown in fig. 8.
To verify the effectiveness of G-Net quantitatively, G-Net and PCN are trained on the same training set and their indexes on the test set are compared; a smaller index means a smaller error and a better result. The results are as follows:
[Table: quantitative comparison of G-Net and PCN on the test set; the numerical values appear only as an image in the original document.]
as can be seen from the table, the G-Net provided by the invention has higher completion accuracy and better completion quality in the point cloud completion network of the same type.
The foregoing describes only preferred embodiments of the invention in some detail and should not therefore be construed as limiting its scope. Various changes, modifications and substitutions made by those skilled in the art without departing from the spirit of the invention all fall within its protection scope, which shall be determined by the appended claims.

Claims (7)

1. A point cloud completion method based on a graph convolution neural network model, characterized by comprising the following steps:
S1, data set acquisition: acquire the public ShapeNet data set and construct the point cloud data set required for model training;
S2, data processing: delete part of the points from each complete point cloud using the viewpoint-deletion operation to construct incomplete point clouds;
S3, neural network model construction: based on the graph convolution idea, introduce a local feature fusion operation that improves point cloud convolution, and build the G-Net network model;
S4, model training: train the network with an Adam optimizer to reduce the loss function and improve the completion quality;
S5, model saving: save the model once its loss function stabilizes and no longer declines.
2. The point cloud completion method based on the graph convolution neural network model according to claim 1, wherein: in step S1, the public ShapeNet data set contains 16 categories of point cloud data, every individual point cloud file consists of N points given by their x, y and z coordinates in a three-dimensional coordinate system, and the value of N differs from file to file.
3. The point cloud completion method based on the graph convolution neural network model according to claim 1, wherein: in step S2, the viewpoint-deletion operation consists of viewpoint initialization, viewpoint selection and adjacent-point deletion.
4. The point cloud completion method based on the graph convolution neural network model according to claim 3, wherein: viewpoint initialization takes the 4 points [1,0,0], [0,0,1], [1,0,1] and [-1,0,0] as fixed viewpoints, where the three numbers in each bracket are the x, y and z coordinates in the three-dimensional coordinate system; viewpoint selection deletes points at different positions from the complete cloud to simulate incomplete point clouds with different missing regions, with the 4 viewpoints used in turn as center points; adjacent-point deletion computes the distances from all points to the current viewpoint and deletes the 512 points closest to it.
5. The point cloud completion method based on the graph convolution neural network model according to claim 1, wherein in step S3 the local feature fusion operation is formulated as:

$$f_i = \bigoplus_{j \in K_i} A_{ij} \int(p_j), \qquad i = 1, 2, \ldots, n$$

where $f_i$ denotes the extracted feature of point $i$, $p_j$ denotes the coordinate vector of point $j$, $\int(\cdot)$ denotes a convolution operation, $\bigoplus$ denotes the aggregation operation, $A$ is a matrix whose values are learned by the neural network, $n$ is the total number of points, and $K_i$ is the set consisting of point $i$ and its surrounding points; the convolution converts a point's 3-dimensional x, y, z coordinates into a higher-dimensional representation, and the aggregation operation splices the high-dimensional features together.
6. The point cloud completion method based on the graph convolution neural network model according to claim 1, wherein in step S3 the G-Net network model comprises an encoder part and a decoder part; the goal of the encoder is to extract coordinate features from the input incomplete point cloud, and its structure is as follows: the input point cloud coordinates first pass through one convolutional layer, then through a stack of three local feature fusion operations, and finally through one max-pooling layer to obtain high-dimensional features; the goal of the decoder is to generate the point cloud coordinates of the missing part from the high-dimensional features extracted by the encoder, and its structure is as follows: the input high-dimensional feature first passes through a linear layer to generate a sparse point cloud prediction; the features of the sparse point cloud are then extracted by three convolutional layers; next, the high-dimensional feature, the sparse point cloud coordinates, the sparse point cloud features and fixed-size 2D grid coordinates are aggregated; finally, the complete point cloud is generated through a convolutional layer and a linear layer.
7. The point cloud completion method based on the graph convolution neural network model according to claim 1, wherein in step S4 the loss function is:

$$CD(S_1, S_2) = \frac{1}{|S_1|} \sum_{x \in S_1} \min_{y \in S_2} \lVert x - y \rVert_2 + \frac{1}{|S_2|} \sum_{y \in S_2} \min_{x \in S_1} \lVert y - x \rVert_2$$

where $CD(S_1, S_2)$ is the average closest Euclidean distance between the predicted point cloud set $S_1$ and the ground-truth point cloud set $S_2$, and $x$ and $y$ each denote a single point, given by its x, y and z coordinates, in the respective set.
CN202110410965.8A 2021-04-16 2021-04-16 Point cloud completion method based on graph convolution neural network model Pending CN113256543A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110410965.8A CN113256543A (en) 2021-04-16 2021-04-16 Point cloud completion method based on graph convolution neural network model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110410965.8A CN113256543A (en) 2021-04-16 2021-04-16 Point cloud completion method based on graph convolution neural network model

Publications (1)

Publication Number Publication Date
CN113256543A true CN113256543A (en) 2021-08-13

Family

ID=77220992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110410965.8A Pending CN113256543A (en) 2021-04-16 2021-04-16 Point cloud completion method based on graph convolution neural network model

Country Status (1)

Country Link
CN (1) CN113256543A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113642712A (en) * 2021-08-17 2021-11-12 成都视海芯图微电子有限公司 Point cloud data processor and method based on deep learning
CN114004871A (en) * 2022-01-04 2022-02-01 山东大学 Point cloud registration method and system based on point cloud completion

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111950467A (en) * 2020-08-14 2020-11-17 清华大学 Fusion network lane line detection method based on attention mechanism and terminal equipment
CN112365456A (en) * 2020-10-29 2021-02-12 杭州富阳富创大数据产业创新研究院有限公司 Transformer substation equipment classification method based on three-dimensional point cloud data
CN112435239A (en) * 2020-11-25 2021-03-02 南京农业大学 Scindapsus aureus leaf shape parameter estimation method based on MRE-PointNet and self-encoder model
CN112488210A (en) * 2020-12-02 2021-03-12 北京工业大学 Three-dimensional point cloud automatic classification method based on graph convolution neural network
CN112529015A (en) * 2020-12-17 2021-03-19 深圳先进技术研究院 Three-dimensional point cloud processing method, device and equipment based on geometric unwrapping
CN112614174A (en) * 2020-12-07 2021-04-06 深兰人工智能(深圳)有限公司 Point cloud complementing and point cloud dividing method and device, electronic equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111950467A (en) * 2020-08-14 2020-11-17 清华大学 Fusion network lane line detection method based on attention mechanism and terminal equipment
CN112365456A (en) * 2020-10-29 2021-02-12 杭州富阳富创大数据产业创新研究院有限公司 Transformer substation equipment classification method based on three-dimensional point cloud data
CN112435239A (en) * 2020-11-25 2021-03-02 南京农业大学 Scindapsus aureus leaf shape parameter estimation method based on MRE-PointNet and self-encoder model
CN112488210A (en) * 2020-12-02 2021-03-12 北京工业大学 Three-dimensional point cloud automatic classification method based on graph convolution neural network
CN112614174A (en) * 2020-12-07 2021-04-06 深兰人工智能(深圳)有限公司 Point cloud complementing and point cloud dividing method and device, electronic equipment and storage medium
CN112529015A (en) * 2020-12-17 2021-03-19 深圳先进技术研究院 Three-dimensional point cloud processing method, device and equipment based on geometric unwrapping

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113642712A (en) * 2021-08-17 2021-11-12 成都视海芯图微电子有限公司 Point cloud data processor and method based on deep learning
CN113642712B (en) * 2021-08-17 2023-08-08 成都视海芯图微电子有限公司 Point cloud data processor and method based on deep learning
CN114004871A (en) * 2022-01-04 2022-02-01 山东大学 Point cloud registration method and system based on point cloud completion

Similar Documents

Publication Publication Date Title
CN111489358B (en) Three-dimensional point cloud semantic segmentation method based on deep learning
CN109377448B (en) Face image restoration method based on generation countermeasure network
CN110188228B (en) Cross-modal retrieval method based on sketch retrieval three-dimensional model
CN110390638B (en) High-resolution three-dimensional voxel model reconstruction method
CN114092832B (en) High-resolution remote sensing image classification method based on parallel hybrid convolutional network
CN111127538B (en) Multi-view image three-dimensional reconstruction method based on convolution cyclic coding-decoding structure
CN110212528B (en) Power distribution network measurement data missing reconstruction method
CN112085072B (en) Cross-modal retrieval method of sketch retrieval three-dimensional model based on space-time characteristic information
CN107194378B (en) Face recognition method and device based on mixed dictionary learning
CN110728219A (en) 3D face generation method based on multi-column multi-scale graph convolution neural network
CN109978786A (en) A kind of Kinect depth map restorative procedure based on convolutional neural networks
CN109359534B (en) Method and system for extracting geometric features of three-dimensional object
CN109598676A (en) A kind of single image super-resolution method based on Hadamard transform
CN113256543A (en) Point cloud completion method based on graph convolution neural network model
CN112329780B (en) Depth image semantic segmentation method based on deep learning
CN114332302A (en) Point cloud completion system and method based on multi-scale self-attention network
CN109461177B (en) Monocular image depth prediction method based on neural network
CN108242074A (en) A kind of three-dimensional exaggeration human face generating method based on individual satire portrait painting
CN110516724A (en) Visualize the high-performance multilayer dictionary learning characteristic image processing method of operation scene
CN108428234B (en) Interactive segmentation performance optimization method based on image segmentation result evaluation
CN116416161A (en) Image restoration method for improving generation of countermeasure network
CN114494284B (en) Scene analysis model and method based on explicit supervision area relation
CN112837420B (en) Shape complement method and system for terracotta soldiers and horses point cloud based on multi-scale and folding structure
CN114359510A (en) Point cloud completion method based on anchor point detection
CN114092653A (en) Method, device and equipment for reconstructing 3D image based on 2D image and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210813)