CN112967219B - Two-stage dental point cloud completion method and system based on deep learning network - Google Patents


Publication number
CN112967219B
CN112967219B (Application CN202110287374.6A)
Authority
CN
China
Prior art keywords
point cloud
data
point
stage
network
Prior art date
Legal status
Active
Application number
CN202110287374.6A
Other languages
Chinese (zh)
Other versions
CN112967219A (en)
Inventor
于泽宽
张慧贤
郭向华
耿道颖
韩方凯
刘杰
王俊杰
Current Assignee
Huashan Hospital of Fudan University
Original Assignee
Huashan Hospital of Fudan University
Priority date
Filing date
Publication date
Application filed by Huashan Hospital of Fudan University
Priority to CN202110287374.6A
Publication of CN112967219A
Application granted
Publication of CN112967219B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06T 7/0012 Biomedical image inspection
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/20221 Image fusion; Image merging
    • G06T 2207/30036 Dental; Teeth

Abstract

The invention provides a two-stage dental point cloud completion method and system based on a deep learning network, relating to the technical field of medical image processing. The method comprises two parts. The first part constructs initial point cloud data based on CBCT data and intraoral scan data: the patient's CBCT data is three-dimensionally reconstructed, mainly using third-party reconstruction software, and the resulting CBCT-reconstructed three-dimensional tooth model point cloud data is registered with the laser-scanned point cloud data to obtain three-dimensional tooth model point cloud data serving as the gold standard. The second part trains the deep learning network MSN: the laser-scanned point cloud data constructed in the first part is input into the trained MSN, and the MSN completion network processes the input point cloud in two stages. In the first stage, the MSN predicts a complete but coarse-grained point cloud; in the second stage, the coarse-grained predicted point cloud is fused with the input point cloud through a sampling algorithm and residual connections, yielding a uniformly distributed fine-grained predicted point cloud and thereby completing the dental point cloud.

Description

Two-stage dental point cloud completion method and system based on deep learning network
Technical Field
The invention relates to the technical field of medical image processing, in particular to a two-stage dental point cloud completion method and system based on a deep learning network.
Background
Dental imaging enables dentists to discover and intervene in focus areas more accurately and to find potential problems, so that dental care and repair can be carried out proactively. Currently, three main types of dental imaging techniques are used for diagnosis: traditional computed tomography (CT), cone beam computed tomography (Cone Beam Computed Tomography, CBCT), and intraoral scanners. However, the scan data acquired by intraoral scanning devices are often incomplete due to occlusion and limited sensor resolution.
For such limited raw data, it is necessary to complete the data to compensate for its structural loss and to improve its quality for subsequent clinical application. Since PointNet and PointNet++ achieved point cloud segmentation and classification with deep learning networks, point cloud deep learning has become a popular research field. Meanwhile, as an important form of three-dimensional data, point clouds are increasingly widely applied in the medical field. However, point clouds obtained from lidar and other devices often have missing regions, which complicates subsequent processing. Point cloud completion (Point Cloud Completion) technology was developed to estimate the complete point cloud from the missing point cloud, obtaining a higher-quality point cloud and achieving the purpose of repair.
Chinese patent publication CN111383355A discloses a three-dimensional point cloud completion method, a device and a computer-readable storage medium, where the method comprises: acquiring the three-dimensional point cloud corresponding to a target room, the point cloud comprising a top point cloud, a ground point cloud and a placed-object point cloud; generating a top point cloud projection image based on the top point cloud, and a ground point cloud projection image based on the ground point cloud and the placed-object point cloud; determining at least one point cloud region to be completed from the two projection images; and, for each region to be completed, completing it using the point clouds around it based on its position. That embodiment improves the accuracy of hole completion in three-dimensional point clouds and increases the integrity and aesthetics of the house model.
The traditional point cloud completion method relies on prior information about an object's basic structure (such as symmetry or semantic information) and repairs the missing point cloud through this prior. Such methods can only handle missing point clouds with a very low missing rate and very obvious structural characteristics. With the advancement of deep learning methods for point cloud analysis and generation, more capable 3D point cloud completion works such as LGAN-AE, PCN, and 3D-Capsule have been proposed. Deep-learning-based point cloud completion takes the missing point cloud as input and the complete point cloud as output, effectively avoiding the large memory footprint and the artifacts caused by discretization. However, because of the disorder and irregularity of point clouds, conventional convolution cannot be applied to them directly, and deep learning on irregular point clouds still faces many challenges. For example, a network may focus too heavily on the overall characteristics of an object and ignore the geometric information of the missing region, or may generate a point cloud that favors the common characteristics of a class of objects while losing the individual characteristics of a specific object.
Increasingly capable point cloud acquisition equipment can rapidly scan and acquire large amounts of point cloud data from object surfaces, but because current capabilities for analyzing and generating point clouds are limited, large-scale point cloud data suffers from low storage and computational efficiency; as a result, reconstructed model surfaces are distorted, the overall effect is blurred, and a series of problems such as uneven distribution and holes appear in the reconstructed point cloud. Therefore, in practical application, efficient point cloud completion to improve point cloud quality has important clinical significance and research value in the field of stomatology.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a two-stage dental point cloud completion method and system based on a deep learning network.
According to the two-stage dental point cloud completion method and system based on the deep learning network, the scheme is as follows:
in a first aspect, a two-stage dental point cloud completion method based on a deep learning network is provided, the method comprising:
step S1: constructing initial three-dimensional dental model point cloud data based on CBCT data and mouth scan data;
step S2: constructing a deep learning network MSN, and completing training of the MSN by using the existing training set and test set;
Step S3: inputting the initial three-dimensional dental model point cloud data obtained in step S1 into the trained MSN to obtain the completed complete dental point cloud data.
Preferably, the step S1 includes:
step S1.1: extracting a dental crown three-dimensional model by a laser scanner, and converting the dental crown three-dimensional model into high-resolution laser scanning first three-dimensional dental model point cloud data;
step S1.2: acquiring CBCT data through a cone beam computed tomography scanner, extracting a complete tooth model from the CBCT data by a region growing method, and converting it into CBCT-reconstructed second three-dimensional tooth model point cloud data;
step S1.3: and registering the first three-dimensional tooth model point cloud data and the second three-dimensional tooth model point cloud data by using a CBCT and laser scanning point cloud data tooth registration method based on multi-view fusion to obtain initial three-dimensional tooth model point cloud data.
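The multi-view-fusion registration method itself is not detailed at this point in the patent. Purely as an illustrative sketch of rigid point-cloud registration with known correspondences, the Kabsch algorithm can recover the rotation and translation between two clouds; the helper name `kabsch_align` is hypothetical, and this is a standard substitute for illustration, not the patented method:

```python
import numpy as np

def kabsch_align(source, target):
    """Estimate the rigid transform (R, t) mapping `source` onto `target`.

    Both arrays are N x 3 with corresponding rows. Returns (R, t) such that
    source @ R.T + t ~= target.
    """
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t

# toy check: recover a known rotation about the z-axis plus a translation
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
src = np.random.RandomState(0).rand(50, 3)
tgt = src @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = kabsch_align(src, tgt)
```

The reflection guard keeps `R` a proper rotation; a real pipeline would first establish correspondences (e.g. by nearest-neighbor search) and iterate, as ICP-style methods do.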
Preferably, the step S2 includes:
step S2.1: building a deep learning network MSN: the MSN takes the point cloud as input, and point cloud completion is achieved through two-stage processing:
the first stage: the deformation prediction stage, in which the network presents an encoder-decoder structure; the auto-encoder predicts a coarse but complete point cloud by extracting a global feature vector (Generalized Feature Vector, GFV), and an expansion penalty prevents overlap between surface elements;
and a second stage: the fusion and refinement stage, in which the coarse point cloud is fused with the input point cloud;
step S2.2: constructing a joint loss function and optimizing the MSN through it: the expansion penalty loss function L_expansion and the Earth Mover's Distance loss function L_EMD are combined into a joint loss function, and the MSN is optimized through this joint loss function;
step S2.3: training and evaluating the MSN using the existing training and test sets: dentition CBCT data and the corresponding intraoral scan data are obtained through clinical scanning, and the three-dimensional dental model point cloud data obtained after registration serves as the gold standard.
Preferably, the deformation prediction stage in step S2.1 specifically includes:
extracting point cloud data characteristics by adopting a graph convolution encoder:
inputting the input point cloud X, i.e. the laser-scanned three-dimensional tooth model point cloud data, into the graph convolution encoder, where X lies in Euclidean space and each data point x in X has size 1×3;
for each data point x, the n points around it are taken as its neighborhood Nx, of size n×3, with coordinates denoted x_in;
a point convolution operation with max pooling converts the coordinates of the n neighborhood points in Nx relative to data point x into a feature vector f_in of size 1×c, where c is the number of channels of the convolution kernel; x_in and f_in are input into a graph convolution network, which outputs a global feature vector, denoted f_out;
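As a toy sketch of the step just described (a shared point convolution applied to the relative neighborhood coordinates, followed by max pooling over the neighborhood), with a hypothetical `point_conv_maxpool` helper and a hand-written kernel standing in for learned weights:

```python
def point_conv_maxpool(rel_coords, kernel):
    """Shared point convolution (the same linear map applied to every
    neighborhood point), followed by max pooling across the neighborhood,
    giving a length-c feature vector f_in.

    rel_coords: n points as (dx, dy, dz) relative to the center point x
    kernel:     c columns, each a 3-weight vector (one per output channel)
    """
    feats = [[sum(x * w for x, w in zip(p, col)) for col in kernel]
             for p in rel_coords]                    # n x c feature map
    return [max(f[k] for f in feats) for k in range(len(kernel))]  # max pool

# neighborhood of n = 2 points, c = 2 output channels
rel = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
kernel = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
f_in = point_conv_maxpool(rel, kernel)
```

Max pooling makes the resulting feature vector invariant to the ordering of the neighborhood points, which is why it is the standard symmetric aggregation for unordered point sets.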
the graph convolution network consists of a rectified linear unit (ReLU) activation function and a graph convolution G-conv, where the ReLU activation function maps the input of each neuron in the graph convolution network to its output; the feature value of a data point at layer (τ+1) is computed from the feature values at layer τ, where τ is the index of the graph convolution (G-conv) layer and ranges from 1 to 15, with the calculation formula:
f_x^(τ+1) = ReLU( w_0 · f_x^(τ) + w_1 · Σ_{y ∈ Nx} f_y^(τ) )
where f_x^(τ) is the feature value of data point x at layer τ, f_x^(τ+1) is the feature value of the data point at layer (τ+1), and w_0 and w_1 are weights that determine the contribution of the layer-τ feature value of data point x and of its neighborhood feature values, respectively;
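A minimal numeric sketch of one such G-conv update, using scalar weights w0 and w1 in place of the learned weight matrices (an illustrative simplification, not the network's actual parameterization; the helper name is hypothetical):

```python
def g_conv_layer(features, neighbors, w0, w1):
    """One G-conv step: f_x' = ReLU(w0 * f_x + w1 * sum of neighbor features).

    features : per-point feature vectors at layer tau (list of lists)
    neighbors: neighbors[i] is the list of neighbor indices of point i
    w0, w1   : scalar weights for the self term and the neighbor-sum term
    Returns the feature vectors at layer tau + 1.
    """
    out = []
    for i, nbrs in enumerate(neighbors):
        c = len(features[i])
        agg = [sum(features[j][k] for j in nbrs) for k in range(c)]
        out.append([max(0.0, w0 * features[i][k] + w1 * agg[k])  # ReLU
                    for k in range(c)])
    return out

feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
nbrs = [[1, 2], [0], [0, 1]]          # a tiny 3-point neighborhood graph
new_feats = g_conv_layer(feats, nbrs, w0=1.0, w1=0.5)
```

Stacking several such layers (τ up to 15 in the text) lets each point's feature aggregate information from progressively larger graph neighborhoods.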
predicting a coarse point cloud using a deformation-based decoder: the decoder uses multi-layer perceptrons to learn a mapping from the 2D unit square to the 3D surface, thereby simulating the deformation from 2D square to 3D surface and generating K surface elements S_i, i = 1, 2, …, K, which together form a coarse point cloud carrying complex shape information;
regularization of the coarse point cloud: the 3D points generated by each multi-layer perceptron during each forward pass are taken as a vertex set, and a minimum spanning tree T_i is constructed from the Euclidean distances between vertices;
the middle vertex of the diameter of T_i, i.e. of the simple path containing the most vertices, is selected as the root node, and all edges are directed toward the root, which defines the orientation of the minimum spanning tree T_i;
the longer an edge of the minimum spanning tree, the more isolated the two points it connects and the greater the probability that they mix with points of other surface elements; the expansion penalty shrinks such points along the edges toward the root into a more compact region, finally achieving the purpose of optimizing the generated point cloud, expressed as:
L_expansion = (1 / (K·N)) · Σ_{1≤i≤K} Σ_{(u,v) ∈ T_i} 𝟙{dis(u,v) ≥ λ·l_i} · dis(u,v)
where dis(u, v) is the Euclidean distance between vertices u and v; N is the number of sampling points of the local region; (u, v) ∈ T_i is a directed edge of the minimum spanning tree T_i with vertices u and v; l_i denotes the average edge length in T_i; 𝟙{·} is an indicator function that keeps only the edges longer than λ·l_i; and λ is a hyperparameter that sets the threshold;
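The expansion penalty can be sketched numerically: build the minimum spanning tree of each surface element and sum only the edges at least λ times the tree's mean edge length. Prim's algorithm is one standard MST choice, and the helper names below are hypothetical:

```python
import math

def prim_mst(points):
    """Minimum spanning tree over a list of 3D points (Prim's algorithm).
    Returns edges as (i, j) index pairs."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        u, v = min(((a, b) for a in in_tree for b in range(n) if b not in in_tree),
                   key=lambda e: dist(*e))
        edges.append((u, v))
        in_tree.add(v)
    return edges

def expansion_penalty(surfaces, lam=1.5):
    """L_expansion over K surface point sets: the summed length of MST edges
    at least lam times that surface's mean edge length, divided by the total
    point count (K*N when all surfaces have N points)."""
    total, n_points = 0.0, 0
    for pts in surfaces:
        lengths = [math.dist(pts[u], pts[v]) for u, v in prim_mst(pts)]
        l_mean = sum(lengths) / len(lengths)
        total += sum(l for l in lengths if l >= lam * l_mean)
        n_points += len(pts)
    return total / n_points

# one surface element: three tight points and one straggler on a line
surface = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
penalty = expansion_penalty([surface])
```

Only the long edge to the straggler (length 8, above 1.5 times the mean edge length of 10/3) is penalized, which is exactly the behavior the text describes: short, compact edges are left alone.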
since the movement of the points in one update can be regarded as infinitesimal, i.e. the minimum spanning tree can be treated as unchanged, the expansion penalty function L_expansion is differentiable almost everywhere; for each directed edge (u, v) ∈ T_i of the minimum spanning tree T_i, if its length is greater than λ·l_i, only u is given a backward gradient, shrinking u toward v to form a more compact surface element S_i;
Preferably, in the step S2.1, the fusion and refinement stage further optimizes the coarse point cloud, which specifically includes:
the input point cloud and the coarse point cloud are fused and then subjected to minimum density sampling:
the minimum density sampling algorithm estimates point density as a sum of Gaussian weights and takes the point with minimum density from the fused point cloud, obtaining a uniformly distributed sub-point cloud; the specific formula is:
p_t = argmin_{p ∉ P_{t-1}} Σ_{p_j ∈ P_{t-1}} exp( −‖p − p_j‖² / (2σ²) )
where p_t is the t-th sampling point; P_{t-1} is the set of the first t−1 sampling points, with P_t = {p_j | 1 ≤ j ≤ t}; p ranges over the points other than the first t−1 sampling points; and σ is a positive constant that determines the neighborhood size used by the algorithm;
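A direct, brute-force sketch of this greedy sampling rule (hypothetical helper name; the first point is seeded arbitrarily, since every density is zero at t = 1):

```python
import math

def minimum_density_sampling(points, m, sigma=1.0):
    """Greedy Minimum Density Sampling: repeatedly pick the point whose
    Gaussian-weighted density w.r.t. the already-sampled set is smallest.
    Returns the indices of the m sampled points."""
    sampled = [0]                       # seed: all densities tie at t = 1
    remaining = set(range(1, len(points)))
    sq = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    while len(sampled) < m:
        density = lambda i: sum(
            math.exp(-sq(points[i], points[j]) / (2 * sigma ** 2))
            for j in sampled)
        t = min(remaining, key=density)
        sampled.append(t)
        remaining.discard(t)
    return sampled

# two tight pairs of points far apart on the x-axis
pts = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (5.0, 0.0, 0.0), (5.1, 0.0, 0.0)]
idx = minimum_density_sampling(pts, 2, sigma=0.5)
```

With two tight pairs, the second sample lands in the far pair, illustrating how MDS spreads samples uniformly instead of clustering them.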
refining the sub-point cloud obtained after sampling: the sub-point cloud is input into a residual graph convolution network, and a fine-grained point cloud is generated through point-wise residuals;
the residual graph convolution network is based on the graph convolution structure of the graph convolution encoder, with a residual connection added; the output of the residual graph convolution network is added point by point to the sub-point cloud obtained by MDS sampling, finally generating a smooth point cloud that predicts the complete tooth shape;
since the input point cloud data is more reliable than the predicted coarse point cloud data, a binary channel is appended to the coordinates of the sub-point cloud to distinguish the source of each point, where 0 indicates the point comes from the input point cloud and 1 indicates it comes from the coarse point cloud.
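The fusion step and the point-wise residual (skip) connection can be sketched as plain list operations (hypothetical helper names; in the actual network the residuals are predicted by the residual graph convolution network rather than supplied by hand):

```python
def fuse_with_source_flag(input_cloud, coarse_cloud):
    """Concatenate the two clouds, appending a binary source channel:
    0 = point comes from the (reliable) input scan,
    1 = point comes from the predicted coarse cloud."""
    tagged = [(x, y, z, 0.0) for (x, y, z) in input_cloud]
    tagged += [(x, y, z, 1.0) for (x, y, z) in coarse_cloud]
    return tagged

def residual_refine(sub_cloud, residuals):
    """Point-wise residual (skip) connection: add the predicted per-point
    offset to the sampled sub-point-cloud coordinates."""
    return [(x + dx, y + dy, z + dz)
            for (x, y, z, _flag), (dx, dy, dz) in zip(sub_cloud, residuals)]

fused = fuse_with_source_flag([(0.0, 0.0, 0.0)], [(1.0, 1.0, 1.0)])
refined = residual_refine(fused, [(0.1, 0.0, 0.0), (0.0, 0.0, 0.2)])
```

The binary flag lets the refinement network treat scanned points and hallucinated points differently, which matches the reliability argument in the text.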
Preferably, in the step S2.2, the expansion penalty loss function L_expansion and the Earth Mover's Distance loss function L_EMD are combined into a joint loss function, where L_expansion acts as a regularization term promoting the shrinkage of each surface element, while L_EMD drives the point cloud to describe the complete tooth shape; under their mutual constraint, each surface element concentrates more accurately and compactly in a local region, which specifically includes the following steps:
calculating the loss function L_EMD:
L_EMD(S, S_gt) = min_{φ: S → S_gt} (1/|S|) · Σ_{s ∈ S} ‖s − φ(s)‖_2
where S_coarse is the coarse point cloud output by the first stage and s_c a data point in it; S_final is the output point cloud after second-stage refinement and s_f a data point in it; the loss is evaluated for both S = S_coarse and S = S_final; S_gt is the gold standard; and φ is a bijection;
calculating the joint loss function L:
L = L_EMD(S_coarse, S_gt) + α·L_expansion + β·L_EMD(S_final, S_gt)
where S_coarse is the coarse point cloud output by the first stage; S_final is the output point cloud after second-stage refinement; S_gt is the gold standard; and α and β are weighting parameters with values in the range 0.1–1.0.
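A tiny numeric sketch of the joint loss, with the EMD term computed by brute force over bijections (feasible only for toy clouds; the helper names are hypothetical, and `l_expansion` is passed in as a precomputed value):

```python
import math
from itertools import permutations

def emd(cloud_a, cloud_b):
    """Earth Mover's Distance between two equal-size point sets: the minimum,
    over bijections phi, of the mean distance between matched points. Brute
    force over permutations, so only for tiny illustrative clouds."""
    assert len(cloud_a) == len(cloud_b)
    return min(sum(math.dist(a, b) for a, b in zip(cloud_a, perm)) / len(cloud_a)
               for perm in permutations(cloud_b))

def joint_loss(s_coarse, s_final, s_gt, l_expansion, alpha=0.1, beta=1.0):
    """L = L_EMD(S_coarse, S_gt) + alpha*L_expansion + beta*L_EMD(S_final, S_gt)."""
    return emd(s_coarse, s_gt) + alpha * l_expansion + beta * emd(s_final, s_gt)

s_gt = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
s_coarse = [(1.0, 0.0, 0.0), (0.0, 0.0, 0.0)]   # same set, different order
s_final = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)]    # one point off by distance 1
loss = joint_loss(s_coarse, s_final, s_gt, l_expansion=2.0)
```

Because EMD searches over bijections, the reordered `s_coarse` costs nothing, unlike a naive index-wise distance; practical implementations approximate this matching (e.g. with an auction algorithm) rather than enumerating permutations.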
In a second aspect, a two-stage dental point cloud completion system based on a deep learning network is provided, the system comprising:
module M1: constructing initial three-dimensional dental model point cloud data based on CBCT data and mouth scan data;
Module M2: constructing a deep learning network MSN, and completing training of the MSN by using the existing training set and test set;
module M3: and inputting the initial three-dimensional dental model point cloud data obtained in the module M1 into the trained MSN to obtain the completed complete dental point cloud data.
Preferably, the module M1 comprises:
module M1.1: extracting a dental crown three-dimensional model by a laser scanner, and converting the dental crown three-dimensional model into high-resolution laser scanning first three-dimensional dental model point cloud data;
module M1.2: acquiring CBCT data through a cone beam computed tomography scanner, extracting a complete tooth model from the CBCT data by a region growing method, and converting it into CBCT-reconstructed second three-dimensional tooth model point cloud data;
module M1.3: and registering the first three-dimensional tooth model point cloud data and the second three-dimensional tooth model point cloud data by using a CBCT and laser scanning point cloud data tooth registration method based on multi-view fusion to obtain initial three-dimensional tooth model point cloud data.
Preferably, the module M2 comprises:
module M2.1: building a deep learning network MSN: the MSN takes the point cloud as input, and point cloud completion is achieved through two-stage processing:
the first stage: the deformation prediction stage, in which the network presents an encoder-decoder structure; the auto-encoder predicts a coarse but complete point cloud by extracting a global feature vector (Generalized Feature Vector, GFV), and an expansion penalty prevents overlap between surface elements;
and a second stage: the fusion and refinement stage, in which the coarse point cloud is fused with the input point cloud;
module M2.2: constructing a joint loss function and optimizing the MSN through it: the expansion penalty loss function L_expansion and the Earth Mover's Distance loss function L_EMD are combined into a joint loss function, and the MSN is optimized through this joint loss function;
module M2.3: training and evaluating the MSN using the existing training and test sets: dentition CBCT data and the corresponding intraoral scan data are obtained through clinical scanning, and the three-dimensional dental model point cloud data obtained after registration serves as the gold standard.
Preferably, the deformation prediction stage in the module M2.1 specifically includes:
module M2.1.1: extracting point cloud data characteristics by adopting a graph convolution encoder:
inputting the input point cloud X, i.e. the laser-scanned three-dimensional tooth model point cloud data, into the graph convolution encoder, where X lies in Euclidean space and each data point x in X has size 1×3;
for each data point x, the n points around it are taken as its neighborhood Nx, of size n×3, with coordinates denoted x_in;
a point convolution operation with max pooling converts the coordinates of the n neighborhood points in Nx relative to data point x into a feature vector f_in of size 1×c, where c is the number of channels of the convolution kernel; x_in and f_in are input into a graph convolution network, which outputs a global feature vector, denoted f_out;
the graph convolution network consists of a rectified linear unit (ReLU) activation function and a graph convolution G-conv, where the ReLU activation function maps the input of each neuron in the graph convolution network to its output; the feature value of a data point at layer (τ+1) is computed from the feature values at layer τ, where τ is the index of the graph convolution (G-conv) layer and ranges from 1 to 15, with the calculation formula:
f_x^(τ+1) = ReLU( w_0 · f_x^(τ) + w_1 · Σ_{y ∈ Nx} f_y^(τ) )
where f_x^(τ) is the feature value of data point x at layer τ, f_x^(τ+1) is the feature value of the data point at layer (τ+1), and w_0 and w_1 are weights that determine the contribution of the layer-τ feature value of data point x and of its neighborhood feature values, respectively;
module M2.1.2: predicting a coarse point cloud using a deformation-based decoder: the decoder uses multi-layer perceptrons to learn a mapping from the 2D unit square to the 3D surface, thereby simulating the deformation from 2D square to 3D surface and generating K surface elements S_i, i = 1, 2, …, K, which together form a coarse point cloud carrying complex shape information;
module M2.1.3: regularization of the coarse point cloud: the 3D points generated by each multi-layer perceptron during each forward pass are taken as a vertex set, and a minimum spanning tree T_i is constructed from the Euclidean distances between vertices;
the middle vertex of the diameter of T_i, i.e. of the simple path containing the most vertices, is selected as the root node, and all edges are directed toward the root, which defines the orientation of the minimum spanning tree T_i;
the longer an edge of the minimum spanning tree, the more isolated the two points it connects and the greater the probability that they mix with points of other surface elements; the expansion penalty shrinks such points along the edges toward the root into a more compact region, finally achieving the purpose of optimizing the generated point cloud, expressed as:
L_expansion = (1 / (K·N)) · Σ_{1≤i≤K} Σ_{(u,v) ∈ T_i} 𝟙{dis(u,v) ≥ λ·l_i} · dis(u,v)
where dis(u, v) is the Euclidean distance between vertices u and v; N is the number of sampling points of the local region; (u, v) ∈ T_i is a directed edge of the minimum spanning tree T_i with vertices u and v; l_i denotes the average edge length in T_i; 𝟙{·} is an indicator function that keeps only the edges longer than λ·l_i; and λ is a hyperparameter that sets the threshold;
since the movement of the points in one update can be regarded as infinitesimal, i.e. the minimum spanning tree can be treated as unchanged, the expansion penalty function L_expansion is differentiable almost everywhere; for each directed edge (u, v) ∈ T_i of the minimum spanning tree T_i, if its length is greater than λ·l_i, only u is given a backward gradient, shrinking u toward v to form a more compact surface element S_i.
Compared with the prior art, the invention has the following beneficial effects:
1. the method completes the dental point cloud data in two stages using a deformation-and-sampling network: in the first stage it predicts a complete but coarse-grained point cloud; in the second stage the coarse-grained predicted point cloud is fused with the input point cloud through a sampling algorithm, yielding a uniformly distributed fine-grained predicted point cloud;
2. the joint loss function in the invention ensures that points are concentrated within local regions, the minimum density sampling algorithm MDS preserves the known tooth structure, and the prediction result effectively avoids problems such as uneven distribution, blurred details, and lost structure.
Drawings
Other features, objects and advantages of the present invention will become more apparent upon reading of the detailed description of non-limiting embodiments, given with reference to the accompanying drawings in which:
FIG. 1 is a flow chart of a two-stage dental point cloud completion method based on a deep learning network of the present invention;
FIG. 2 is a schematic diagram of a MSN deep learning network structure of a two-stage dental point cloud completion method based on a deep learning network according to the present invention;
FIG. 3 is a diagram of the graph convolution network structure in the MSN deep learning network of the two-stage dental point cloud completion method based on the deep learning network of the present invention;
FIG. 4 is a block diagram of a deformation-based decoder in a MSN deep learning network of the two-stage dental point cloud completion method based on the deep learning network of the present invention;
fig. 5 is a diagram of the residual graph convolution network in the MSN deep learning network of the two-stage dental point cloud completion method based on the deep learning network of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the present invention, but are not intended to limit the invention in any way. It should be noted that variations and modifications could be made by those skilled in the art without departing from the inventive concept. These are all within the scope of the present invention.
The embodiment of the invention provides a two-stage dental point cloud completion method based on a deep learning network, which is shown by referring to fig. 1, firstly, step S1: constructing initial three-dimensional dental model point cloud data based on CBCT data and mouth scan data, and specifically:
Extracting a dental crown three-dimensional model by a laser scanner (intraoral scanning device), and converting it into high-resolution laser-scanned first three-dimensional dental model point cloud data; acquiring CBCT data through a cone beam computed tomography scanner, extracting a complete tooth model from the CBCT data by a region growing method, and converting it into CBCT-reconstructed second three-dimensional tooth model point cloud data; and registering the first and second three-dimensional tooth model point cloud data using a multi-view-fusion-based tooth registration method for CBCT and laser-scanned point cloud data, obtaining the initial three-dimensional tooth model point cloud data.
Referring to fig. 2 and 3, step S2: constructing a deep learning network MSN, and completing training of the MSN by using the existing training set and test set, wherein the training method comprises the following steps:
building a deep learning network MSN: the MSN takes the point cloud as input, and point cloud completion is achieved through two-stage processing. The first stage is the deformation prediction stage, in which the network presents an encoder-decoder structure; the auto-encoder predicts a coarse but complete point cloud by extracting a global feature vector (Generalized Feature Vector, GFV) and uses the expansion penalty to prevent overlap between surface elements.
The second stage is the fusion and refinement stage: the coarse point cloud is fused with the input point cloud. A sub-point cloud of the fused point cloud is obtained through a minimum density sampling algorithm (Minimum Density Sampling, MDS) and then fed into a residual network for point-wise residual prediction. Meanwhile, the point-wise residual prediction result is connected with the sub-point cloud through a skip connection, and finally a completed point cloud with a uniformly distributed surface is output.
Referring to fig. 3 and 4, in the deformation prediction stage, a graph convolution encoder is used to extract point cloud data features:
The input point cloud X, namely the laser-scanned three-dimensional dental model point cloud data, is input into the graph convolution encoder. The point cloud X lies in Euclidean space, and each data point x in X has size 1×3. For each data point x, the n points around it are taken as the neighborhood N_x of the data point, of size n×3, with coordinates denoted x_in. Through a point convolution operation with max pooling, the coordinates of the n neighborhood points in N_x relative to the data point x are converted into a feature vector f_in of size 1×c, where c is the number of channels of the convolution kernel. x_in and f_in are then input into the graph convolution network, which outputs the global feature vector GFV, denoted f_out.
The graph convolution network consists of a rectified linear unit (Rectified Linear Unit, ReLU) activation function, which maps the input of each neuron in the graph convolution network to its output, and a graph convolution G-conv. The feature value of a data point at layer (τ+1) is calculated from the feature value of the data point x at layer τ, where τ is the number of G-conv layers, ranging from 1 to 15, according to:

f_x^(τ+1) = ReLU( w_0 · f_x^(τ) + w_1 · Σ_{y ∈ N_x} f_y^(τ) )

where f_x^(τ) is the feature value of data point x at layer τ, f_x^(τ+1) is the feature value of the data point at layer (τ+1), and w_0 and w_1 are learnable parameters that respectively determine the weight of the feature value of data point x at layer τ and the weight of the neighborhood feature values.
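As a concrete illustration, the following numpy sketch implements one such layer. The function name `g_conv_layer` and the mean aggregation over the neighborhood are our assumptions (the patent fixes only that w_0 weights the point's own feature and w_1 weights its neighborhood features, followed by ReLU); this is a minimal sketch, not the patented implementation.

```python
import numpy as np

def g_conv_layer(features, neighbors, w0, w1):
    """One graph-convolution (G-conv) layer sketch: for each data point x,
    the next-layer feature combines x's own feature (weighted by w0) with
    the aggregated features of its neighborhood N_x (weighted by w1),
    followed by a ReLU activation."""
    out = np.empty_like(features)
    for i, nbr_idx in enumerate(neighbors):
        own = features[i] @ w0                      # contribution of x itself
        nbh = features[nbr_idx].mean(axis=0) @ w1   # aggregated neighborhood term
        out[i] = np.maximum(own + nbh, 0.0)         # ReLU
    return out
```

Here `features` is a (P, c) array of per-point features at layer τ and `neighbors[i]` lists the indices of the n neighborhood points of point i; stacking 1 to 15 such layers matches the τ range stated above.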
Predicting a coarse point cloud using a deformation-based decoder: the decoder learns the mapping from the 2D unit square to the 3D surface using multi-layer perceptrons (Multi-Layer Perceptron, MLP) to simulate the deformation of the 2D square into the 3D surface, generating K surface elements surface_i, i = 1, 2, …, K, which together form a rough point cloud with complex shape information.
To facilitate the generation of local surfaces, each surface element should be concentrated in a relatively simple local area. In each forward pass, N points are first randomly sampled from the unit square (the local area), with N set to 128-512. The coordinates of the N sampled points are concatenated with the obtained global feature vector GFV and then passed as input to K multi-layer perceptrons, with K set to 4-16. In this way, each sampled 2D point generates K corresponding 3D points on K different surfaces, so each forward pass outputs KN 3D points, finally realizing a continuous mapping from 2D to 3D. The denser the decoder samples the 2D square, the smoother the resulting 3D surface.
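The sampling-and-morphing step above can be sketched as follows. Each trained MLP is stood in for by a plain weight matrix (a deliberate simplification; the names `morph_decode` and `mlps` and the GFV length `c = 16` are our assumptions, not values from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)

def morph_decode(gfv, mlps, n_samples=128):
    """Deformation-based decoding sketch: sample N 2D points from the unit
    square, concatenate each with the global feature vector (GFV), and push
    the result through K surface generators. Each generator stands in for
    one MLP and maps a (2 + c)-dim input to a 3D point, so one forward pass
    yields K*N 3D points."""
    uv = rng.random((n_samples, 2))                      # N points in [0,1)^2
    inp = np.hstack([uv, np.tile(gfv, (n_samples, 1))])  # (N, 2 + c)
    # Each "MLP" here is just a (2 + c) x 3 matrix, a placeholder for a
    # trained multi-layer perceptron.
    surfaces = [inp @ w for w in mlps]                   # K arrays of shape (N, 3)
    return np.vstack(surfaces)                           # (K*N, 3) coarse cloud

c = 16                                    # assumed GFV length for illustration
gfv = rng.random(c)
mlps = [rng.random((2 + c, 3)) for _ in range(4)]   # K = 4 surface elements
coarse = morph_decode(gfv, mlps, n_samples=128)
print(coarse.shape)                       # (512, 3): K*N points
```

Densifying the 2D samples (raising `n_samples`) would smooth the generated surfaces, matching the remark above.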
Regularization of the coarse point cloud: the combined use of MLPs may cause overlapping and mixing between surface elements, which on the one hand leads to a non-uniform density distribution of the predicted point cloud, and on the other hand enlarges the expansion and coverage area of each surface element, deforming the 2D local area and making local detail information harder to capture. The invention introduces an expansion penalty term and regularizes only the sparsely distributed points of the generated surfaces, so as to keep the different surfaces compact and concentrated in their local areas and prevent them from covering and overlapping each other.
Specifically, the invention regards the 3D points generated by each multi-layer perceptron in each forward pass as a vertex set, and constructs a minimum spanning tree T_i according to the Euclidean distance between the vertices.

The middle vertex of the diameter of T_i, i.e. of the simple path containing the most vertices, is selected as the root node, and all edges point in the direction of the root, giving the direction of the minimum spanning tree T_i. Thus, for the K MLPs there are K directed minimum spanning trees describing the distribution of their points.
The longer an edge of the minimum spanning tree, the farther apart its two points and the greater the probability of mixing with the points of other surfaces. The expansion penalty makes the points shrink along the edges toward the root into a more compact area, finally optimizing the generated point cloud. It is expressed as:

L_expansion = 1/(KN) · Σ_{i=1..K} Σ_{(u,v) ∈ T_i} 1[dis(u,v) ≥ λ·l_i] · dis(u,v)

where dis(u,v) is the Euclidean distance between vertices u and v; N is the number of sampling points of the local area; (u,v) ∈ T_i is a directed edge of the minimum spanning tree T_i with u and v as vertices; l_i is the average length of the edges of T_i; 1[·] is an indicator function used to filter out edges shorter than λ·l_i; and λ is a hyperparameter determining the threshold (ranging from 1.0 to 2.0), taken as 1.5 in this example.
Since the construction of the spanning tree can be regarded as unchanged under infinitesimal motion of the points, the expansion penalty function L_expansion is differentiable almost everywhere. For each directed edge (u, v) ∈ T_i of the minimum spanning tree T_i, if its length is greater than λ·l_i, a backward gradient is given only to u, so that u shrinks toward v to form a more compact surface_i.
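The forward computation of the expansion penalty can be sketched for a single surface element (K = 1) as follows. Prim's algorithm for the minimum spanning tree and the function name `expansion_penalty` are our choices for illustration; gradient propagation, which the text describes, is omitted here.

```python
import numpy as np

def expansion_penalty(points, lam=1.5):
    """Expansion-penalty sketch for one surface element (K = 1): build a
    minimum spanning tree over the generated 3D points with Prim's
    algorithm, then sum the lengths of edges at least lam times the mean
    edge length, normalised by the number of points."""
    n = len(points)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:                        # Prim's algorithm
        best = min((dist[u][v], u, v) for u in in_tree
                   for v in range(n) if v not in in_tree)
        edges.append(best[0])
        in_tree.add(best[2])
    mean_len = sum(edges) / len(edges)             # l_i in the text
    long_edges = [e for e in edges if e >= lam * mean_len]
    return sum(long_edges) / n
```

A tight cluster with one stray point yields a positive penalty (the long MST edge to the stray point exceeds λ·l_i), while near-equidistant points yield zero, which is the intended "shrink only sparse outliers" behaviour.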
Secondly, in the fusion and refinement stage, the rough point cloud is further optimized, specifically as follows:
the input point cloud and the rough point cloud are fused and then subjected to minimum density sampling:
the densities of the input point cloud and the rough point cloud may differ, and direct fusion may cause overlapping and merging between the point clouds, leaving the fused point cloud unevenly distributed. The minimum density sampling (MDS) algorithm estimates the density of each point by Gaussian weight summation and acquires the point with the minimum density from the fused point cloud, obtaining a uniformly distributed sub-point cloud:

p_t = argmin_{p ∉ P_{t-1}} Σ_{p_j ∈ P_{t-1}} exp( −‖p − p_j‖² / (2σ²) )

where p_t is the t-th sampling point; P_{t-1} = {p_j | 1 ≤ j ≤ t−1} is the set of the first t−1 sampling points; p is a point other than the first t−1 sampling points; and σ is a positive number determining the neighborhood size applied by the algorithm.
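The greedy selection rule above can be sketched in pure Python. The function name `minimum_density_sampling`, the seed choice (first point), and σ = 0.5 are our assumptions for illustration:

```python
import math

def minimum_density_sampling(points, t, sigma=0.5):
    """Minimum density sampling (MDS) sketch: greedily pick t points, each
    time choosing the candidate whose Gaussian-weighted density with
    respect to the already-selected set is smallest, which spreads samples
    evenly over the fused cloud."""
    def density(p, selected):
        return sum(math.exp(-sum((a - b) ** 2 for a, b in zip(p, q))
                            / (2 * sigma ** 2)) for q in selected)

    selected = [points[0]]            # seed with an arbitrary point
    remaining = list(points[1:])
    while len(selected) < t:
        best = min(remaining, key=lambda p: density(p, selected))
        selected.append(best)
        remaining.remove(best)
    return selected
```

Because a far-away candidate has near-zero Gaussian weight to every selected point, it is picked before points crowded near existing samples, which is how MDS evens out the density of the fused cloud.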
Referring to fig. 5, the sub-point cloud obtained after sampling is refined: the sub-point cloud is input into a residual graph convolution network, and a fine-grained point cloud is generated through point-wise residuals. The residual graph convolution network is based on the graph convolution network structure of the graph convolution encoder, with a residual connection added; its output is added point by point to the sub-point cloud obtained by MDS sampling, finally generating a smooth point cloud that predicts the complete tooth shape.
Since the input point cloud data is more reliable than the predicted coarse point cloud data, a binary channel is added to the coordinates of the sub point cloud to distinguish the source of each point, wherein 0 represents the source of the input point cloud and 1 represents the source of the coarse point cloud.
Constructing a joint loss function and optimizing the MSN through the loss function: the expansion penalty loss function L_expansion and the Earth Mover's Distance (EMD) loss function L_EMD are taken together as a joint loss function, and the MSN is optimized through the joint loss function. Specifically:

The expansion penalty loss function L_expansion serves as a regularization term that promotes the shrinkage of each surface element, while the EMD loss function L_EMD drives the point cloud to describe the complete tooth shape. Their mutual constraint makes each surface element concentrate in its local area as accurately and compactly as possible. The steps are as follows:
calculating the loss function L_EMD:

L_EMD(S, S_gt) = min_{φ: S → S_gt} (1/|S|) · Σ_{s ∈ S} ‖s − φ(s)‖_2

where S_coarse is the rough point cloud output in the first stage; s_c is a data point in the coarse point cloud; S_final is the output point cloud after the second-stage refinement; s_f is a data point in the output fine point cloud; S_gt is the gold standard; and φ is a bijective function;
calculating a joint loss function L:
L=L EMD (S coarse ,S gt )+αL expansion +βL EMD (S final ,S gt )
where S_coarse is the rough point cloud output in the first stage; S_final is the output point cloud after the second-stage refinement; S_gt is the gold standard; α and β are weighting parameters ranging from 0.1 to 1.0; in this embodiment, α = 0.1 and β = 1.0.
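The joint loss can be sketched for tiny point sets as follows. Exact EMD is found here by brute-force search over bijections, which is only tractable for a handful of points; the function names `emd` and `joint_loss` are our own, and real training would use a differentiable EMD approximation instead.

```python
import itertools
import math

def emd(s1, s2):
    """Exact Earth Mover's Distance for small, equally sized point sets:
    search all bijections phi and keep the one with the smallest mean
    point-to-point Euclidean distance (brute force, fine for |S| <= 6)."""
    return min(sum(math.dist(a, b) for a, b in zip(s1, perm)) / len(s1)
               for perm in itertools.permutations(s2))

def joint_loss(s_coarse, s_final, s_gt, l_expansion, alpha=0.1, beta=1.0):
    """Joint loss of the two stages: EMD of the coarse cloud to the gold
    standard, the expansion penalty as a regulariser, and EMD of the
    refined cloud to the gold standard."""
    return (emd(s_coarse, s_gt) + alpha * l_expansion
            + beta * emd(s_final, s_gt))
```

With α = 0.1 and β = 1.0 as in this embodiment, the refined cloud's fit to the gold standard dominates the loss, while the coarse-stage term and the expansion penalty act as auxiliary constraints.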
The standard Chamfer Distance (CD) and the Earth Mover's Distance (EMD) are used as metrics to evaluate the similarity between the completed point cloud data and the gold standard; the smaller the metric, the better the performance.
The uniformity of the distribution of the completed point cloud is measured by the Normalized Uniformity Coefficient (NUC); again, the smaller the metric, the better the performance.
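For reference, one common variant of the Chamfer Distance metric can be sketched as below (the patent does not specify whether plain or squared distances are used, so this non-squared form is an assumption):

```python
import math

def chamfer_distance(s1, s2):
    """Chamfer Distance (CD) evaluation-metric sketch: for each direction,
    average every point's distance to its nearest neighbour in the other
    cloud, then sum the two directions. Smaller means the completed cloud
    lies closer to the gold standard."""
    def one_way(a, b):
        return sum(min(math.dist(p, q) for q in b) for p in a) / len(a)
    return one_way(s1, s2) + one_way(s2, s1)
```

Unlike EMD, CD needs no bijection between the clouds, so it also applies when the completed cloud and the gold standard have different point counts.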
Training and evaluating the MSN network using the existing training and test sets: dentition CBCT data and corresponding mouth scan data are obtained through clinical scanning, and the three-dimensional dental model point cloud data obtained after registration is taken as the gold standard. Each group of data in the data set consists of a patient's laser-scanned three-dimensional dental model point cloud data and the corresponding complete three-dimensional dental model point cloud data (gold standard). The larger part of the data, 50%-90% of the total, is selected as the training set, and the remaining 10%-50% as the test set. The training set is used to train the MSN network; after training is completed, the test set is used to evaluate the generalization capability of the MSN network.
Finally, step S3: and (3) inputting the initial three-dimensional dental model point cloud data obtained in the step (S1) into a trained MSN to obtain the completed complete dental point cloud data.
The embodiment of the invention provides a two-stage dental point cloud completion method based on a deep learning network, which completes dental point cloud data in two stages using a Morphing and Sampling Network (MSN). In the first stage, the method predicts a complete but coarse-grained point cloud; in the second stage, the coarse-grained predicted point cloud is fused with the input point cloud through a sampling algorithm to obtain a uniformly distributed fine-grained predicted point cloud. The joint loss function ensures the concentrated distribution of points in local areas, the minimum density sampling algorithm MDS preserves the known tooth structure, and the prediction result effectively avoids problems such as uneven distribution, blurred details, and lost structure.
Those skilled in the art will appreciate that the application provides a system and its individual devices, modules, units, etc. that can be implemented entirely by logic programming of method steps, in addition to being implemented as pure computer readable program code, in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, etc. Therefore, the system and various devices, modules and units thereof provided by the application can be regarded as a hardware component, and the devices, modules and units for realizing various functions included in the system can also be regarded as structures in the hardware component; means, modules, and units for implementing the various functions may also be considered as either software modules for implementing the methods or structures within hardware components.
The foregoing describes specific embodiments of the present application. It is to be understood that the application is not limited to the particular embodiments described above, and that various changes or modifications may be made by those skilled in the art within the scope of the appended claims without affecting the spirit of the application. The embodiments of the application and the features of the embodiments may be combined with each other arbitrarily without conflict.

Claims (7)

1. A two-stage dental point cloud completion method based on a deep learning network is characterized by comprising the following steps of:
step S1: constructing initial three-dimensional dental model point cloud data based on CBCT data and mouth scan data;
step S2: constructing a deep learning network MSN, and completing training of the MSN by using the existing training set and test set;
step S3: inputting the initial three-dimensional dental model point cloud data obtained in the step S1 into a trained MSN to obtain full dental point cloud data after filling;
the step S2 includes:
step S2.1: building a deep learning network MSN: the MSN takes the point cloud as input, and the point cloud complement is realized through two-stage processing:
the first stage: the deformation prediction stage, in which the network presents an encoder-decoder structure, the automatic encoder predicts a coarser complete point cloud by extracting global feature vectors (Generalized Feature Vector, GFV) and prevents overlap between surface elements with an expansion penalty;
and a second stage: fusing and refining, namely fusing a rough point cloud and an input point cloud;
step S2.2: constructing a joint loss function and optimizing the MSN through the loss function: the expansion penalty loss function L_expansion and the Earth Mover's Distance loss function L_EMD are taken together as a joint loss function, and the MSN is optimized through the joint loss function;
step S2.3: training and evaluating an MSN network using existing training and test sets: obtaining dentition CBCT data and corresponding mouth scanning data through clinical scanning, and taking the three-dimensional dental model point cloud data obtained after registration as a gold standard;
the fusing and fine phase in step S2.1 further optimizes the coarse point cloud, which specifically includes:
1) The input point cloud and the rough point cloud are fused and then subjected to minimum density sampling:
the minimum density sampling algorithm adopts Gaussian weight summation to estimate the density of points, acquires the point with the minimum density from the fused point cloud, and obtains a uniformly distributed sub-point cloud, expressed as:

p_t = argmin_{p ∉ P_{t-1}} Σ_{p_j ∈ P_{t-1}} exp( −‖p − p_j‖² / (2σ²) )

where p_t is the t-th sampling point; P_{t-1} = {p_j | 1 ≤ j ≤ t−1} is the set of the first t−1 sampling points; p is a point other than the first t−1 sampling points; and σ is a positive number determining the neighborhood size applied by the algorithm;
2) Refining the sub-point cloud obtained after sampling: inputting the sub-point cloud into a residual map convolution network, and generating a fine-granularity point cloud through point-by-point residual;
the residual graph convolution network is based on the graph convolution network structure in the graph convolution encoder, with a residual connection added; the output of the residual graph convolution network is added point by point to the sub-point cloud obtained by MDS sampling, finally generating a smooth point cloud that predicts the complete tooth shape;
Since the input point cloud data is more reliable than the predicted coarse point cloud data, a binary channel is added to the coordinates of the sub point cloud to distinguish the source of each point, wherein 0 represents the source of the input point cloud and 1 represents the source of the coarse point cloud.
2. The two-stage dental point cloud completion method based on the deep learning network according to claim 1, wherein the step S1 comprises:
step S1.1: extracting a dental crown three-dimensional model by a laser scanner, and converting the dental crown three-dimensional model into high-resolution laser scanning first three-dimensional dental model point cloud data;
step S1.2: acquiring CBCT data through a cone beam computer tomography scanner, extracting a tooth complete model from the CBCT data according to a region growing method, and converting the tooth complete model into CBCT reconstructed second three-dimensional tooth model point cloud data;
step S1.3: and registering the first three-dimensional tooth model point cloud data and the second three-dimensional tooth model point cloud data by using a CBCT and laser scanning point cloud data tooth registration method based on multi-view fusion to obtain initial three-dimensional tooth model point cloud data.
3. The two-stage dental point cloud completion method based on the deep learning network according to claim 1, wherein the deformation prediction stage in step S2.1 specifically comprises:
Step S2.1.1: extracting point cloud data characteristics by adopting a graph convolution encoder:
the input point cloud X, namely the laser-scanned three-dimensional dental model point cloud data, is input into the graph convolution encoder, wherein the point cloud X lies in Euclidean space and each data point x in X has size 1×3;

for each data point x, the n points around it are taken as the neighborhood N_x of the data point, of size n×3, with coordinates denoted x_in;

through a point convolution operation with max pooling, the coordinates of the n neighborhood points in N_x relative to the data point x, i.e. x_in − x, are converted into a feature vector f_in of size 1×c, where c is the number of channels of the convolution kernel; x_in and f_in are input into the graph convolution network, which outputs the global feature vector, denoted f_out;

the graph convolution network consists of a rectified linear unit (ReLU) activation function, which maps the input of each neuron in the graph convolution network to its output, and a graph convolution G-conv; the graph convolution G-conv calculates the feature value of a data point at layer (τ+1) from its feature value at layer τ, where τ is the number of G-conv layers, ranging from 1 to 15, according to:

f_x^(τ+1) = ReLU( w_0 · f_x^(τ) + w_1 · Σ_{y ∈ N_x} f_y^(τ) )

where f_x^(τ) is the feature value of data point x at layer τ, f_x^(τ+1) is the feature value of the data point at layer (τ+1), and w_0 and w_1 are learnable parameters that respectively determine the weight of the feature value of data point x at layer τ and the weight of the neighborhood feature values;
step S2.1.2: predicting a coarse point cloud using a deformation-based decoder: the decoder learns the mapping from the 2D unit square to the 3D surface using multi-layer perceptrons to simulate the deformation of the 2D square into the 3D surface, generating K surface elements surface_i, i = 1, 2, …, K, which form a rough point cloud with complex shape information;
step S2.1.3: regularization of the coarse point cloud: the 3D points generated by each multi-layer perceptron in each forward pass are regarded as a vertex set, and a minimum spanning tree T_i is constructed according to the Euclidean distance between the vertices;

the middle vertex of the diameter of T_i, i.e. of the simple path containing the most vertices, is selected as the root node, and all edges point in the direction of the root, giving the direction of the minimum spanning tree T_i;

the longer an edge of the minimum spanning tree, the farther apart its two points and the greater the probability of mixing with the points of other surfaces; the expansion penalty makes the points shrink along the edges toward the root into a more compact area, finally optimizing the generated point cloud, expressed as:

L_expansion = 1/(KN) · Σ_{i=1..K} Σ_{(u,v) ∈ T_i} 1[dis(u,v) ≥ λ·l_i] · dis(u,v)

where dis(u,v) is the Euclidean distance between vertices u and v; N is the number of sampling points of the local area; (u,v) ∈ T_i is a directed edge of the minimum spanning tree T_i with u and v as vertices; l_i is the average length of the edges of T_i; 1[·] is an indicator function used to filter out edges shorter than λ·l_i; λ is a hyperparameter determining the threshold;

since the construction of the spanning tree can be regarded as unchanged under infinitesimal motion of the points, the expansion penalty function L_expansion is differentiable almost everywhere; for each directed edge (u, v) ∈ T_i, if its length is greater than λ·l_i, a backward gradient is given only to u, so that u shrinks toward v to form a more compact surface_i.
4. The two-stage dental point cloud completion method based on the deep learning network according to claim 1, wherein in step S2.2 the expansion penalty loss function L_expansion and the Earth Mover's Distance loss function L_EMD are taken together as a joint loss function, wherein the expansion penalty loss function L_expansion serves as a regularization term that causes each surface element to shrink, while the Earth Mover's Distance loss function L_EMD drives the point cloud to describe the complete tooth shape; their mutual constraint concentrates each surface element more accurately and compactly in a local area, specifically comprising:

step S2.2.1: calculating the loss function L_EMD:

L_EMD(S, S_gt) = min_{φ: S → S_gt} (1/|S|) · Σ_{s ∈ S} ‖s − φ(s)‖_2

where S_coarse is the rough point cloud output in the first stage; s_c is a data point in the coarse point cloud; S_final is the output point cloud after the second-stage refinement; s_f is a data point in the output fine point cloud; S_gt is the gold standard; φ is a bijective function;

step S2.2.2: calculating the joint loss function L:

L = L_EMD(S_coarse, S_gt) + α·L_expansion + β·L_EMD(S_final, S_gt)

where S_coarse is the rough point cloud output in the first stage; S_final is the output point cloud after the second-stage refinement; S_gt is the gold standard; α and β are weighting parameters ranging from 0.1 to 1.0.
5. Two-stage dental point cloud completion system based on deep learning network, which is characterized by comprising:
module M1: constructing initial three-dimensional dental model point cloud data based on CBCT data and mouth scan data;
module M2: constructing a deep learning network MSN, and completing training of the MSN by using the existing training set and test set;
module M3: inputting the initial three-dimensional dental model point cloud data obtained in the module M1 into a trained MSN to obtain full dental point cloud data after filling;
the module M2 includes:
module M2.1: building a deep learning network MSN: the MSN takes the point cloud as input, and the point cloud complement is realized through two-stage processing:
The first stage: the deformation prediction stage, in which the network presents an encoder-decoder structure, the automatic encoder predicts a coarser complete point cloud by extracting global feature vectors (Generalized Feature Vector, GFV) and prevents overlap between surface elements with an expansion penalty;
and a second stage: fusing and refining, namely fusing a rough point cloud and an input point cloud;
module M2.2: constructing a joint loss function and optimizing the MSN through the loss function: the expansion penalty loss function L_expansion and the Earth Mover's Distance loss function L_EMD are taken together as a joint loss function, and the MSN is optimized through the joint loss function;
module M2.3: training and evaluating an MSN network using existing training and test sets: obtaining dentition CBCT data and corresponding mouth scanning data through clinical scanning, and taking the three-dimensional dental model point cloud data obtained after registration as a gold standard;
the fusing and fine phase in the module M2.1 further optimizes the coarse point cloud, which specifically includes:
1) The input point cloud and the rough point cloud are fused and then subjected to minimum density sampling:
the minimum density sampling algorithm adopts Gaussian weight summation to estimate the density of points, acquires the point with the minimum density from the fused point cloud, and obtains a uniformly distributed sub-point cloud, expressed as:

p_t = argmin_{p ∉ P_{t-1}} Σ_{p_j ∈ P_{t-1}} exp( −‖p − p_j‖² / (2σ²) )

where p_t is the t-th sampling point; P_{t-1} = {p_j | 1 ≤ j ≤ t−1} is the set of the first t−1 sampling points; p is a point other than the first t−1 sampling points; and σ is a positive number determining the neighborhood size applied by the algorithm;
2) Refining the sub-point cloud obtained after sampling: inputting the sub-point cloud into a residual map convolution network, and generating a fine-granularity point cloud through point-by-point residual;
the residual graph convolution network is based on the graph convolution network structure in the graph convolution encoder, with a residual connection added; the output of the residual graph convolution network is added point by point to the sub-point cloud obtained by MDS sampling, finally generating a smooth point cloud that predicts the complete tooth shape;
since the input point cloud data is more reliable than the predicted coarse point cloud data, a binary channel is added to the coordinates of the sub point cloud to distinguish the source of each point, wherein 0 represents the source of the input point cloud and 1 represents the source of the coarse point cloud.
6. The deep learning network based two-stage dental point cloud completion system of claim 5, wherein the module M1 comprises:
module M1.1: extracting a dental crown three-dimensional model by a laser scanner, and converting the dental crown three-dimensional model into high-resolution laser scanning first three-dimensional dental model point cloud data;
Module M1.2: acquiring CBCT data through a cone beam computer tomography scanner, extracting a tooth complete model from the CBCT data according to a region growing method, and converting the tooth complete model into CBCT reconstructed second three-dimensional tooth model point cloud data;
module M1.3: and registering the first three-dimensional tooth model point cloud data and the second three-dimensional tooth model point cloud data by using a CBCT and laser scanning point cloud data tooth registration method based on multi-view fusion to obtain initial three-dimensional tooth model point cloud data.
7. The two-stage dental point cloud completion system based on a deep learning network of claim 5, wherein the deformation prediction stage in the module M2.1 specifically comprises:
module M2.1.1: extracting point cloud data characteristics by adopting a graph convolution encoder:
the input point cloud X, namely the laser-scanned three-dimensional dental model point cloud data, is input into the graph convolution encoder, wherein the point cloud X lies in Euclidean space and each data point x in X has size 1×3;

for each data point x, the n points around it are taken as the neighborhood N_x of the data point, of size n×3, with coordinates denoted x_in;

through a point convolution operation with max pooling, the coordinates of the n neighborhood points in N_x relative to the data point x, i.e. x_in − x, are converted into a feature vector f_in of size 1×c, where c is the number of channels of the convolution kernel; x_in and f_in are input into the graph convolution network, which outputs the global feature vector, denoted f_out;

the graph convolution network consists of a rectified linear unit (ReLU) activation function, which maps the input of each neuron in the graph convolution network to its output, and a graph convolution G-conv; the graph convolution G-conv calculates the feature value of a data point at layer (τ+1) from its feature value at layer τ, where τ is the number of G-conv layers, ranging from 1 to 15, according to:

f_x^(τ+1) = ReLU( w_0 · f_x^(τ) + w_1 · Σ_{y ∈ N_x} f_y^(τ) )

where f_x^(τ) is the feature value of data point x at layer τ, f_x^(τ+1) is the feature value of the data point at layer (τ+1), and w_0 and w_1 are learnable parameters that respectively determine the weight of the feature value of data point x at layer τ and the weight of the neighborhood feature values;
module M2.1.2: predicting a coarse point cloud using a deformation-based decoder: the decoder learns the mapping from the 2D unit square to the 3D surface using multi-layer perceptrons to simulate the deformation of the 2D square into the 3D surface, generating K surface elements surface_i, i = 1, 2, …, K, which form a rough point cloud with complex shape information;
module M2.1.3: regularization of the coarse point cloud: the 3D points generated by each multi-layer perceptron in each forward pass are regarded as a vertex set, and a minimum spanning tree T_i is constructed according to the Euclidean distance between the vertices;

the middle vertex of the diameter of T_i, i.e. of the simple path containing the most vertices, is selected as the root node, and all edges point in the direction of the root, giving the direction of the minimum spanning tree T_i;

the longer an edge of the minimum spanning tree, the farther apart its two points and the greater the probability of mixing with the points of other surfaces; the expansion penalty makes the points shrink along the edges toward the root into a more compact area, finally optimizing the generated point cloud, expressed as:

L_expansion = 1/(KN) · Σ_{i=1..K} Σ_{(u,v) ∈ T_i} 1[dis(u,v) ≥ λ·l_i] · dis(u,v)

where dis(u,v) is the Euclidean distance between vertices u and v; N is the number of sampling points of the local area; (u,v) ∈ T_i is a directed edge of the minimum spanning tree T_i with u and v as vertices; l_i is the average length of the edges of T_i; 1[·] is an indicator function used to filter out edges shorter than λ·l_i; λ is a hyperparameter determining the threshold;

since the construction of the spanning tree can be regarded as unchanged under infinitesimal motion of the points, the expansion penalty function L_expansion is differentiable almost everywhere; for each directed edge (u, v) ∈ T_i, if its length is greater than λ·l_i, a backward gradient is given only to u, so that u shrinks toward v to form a more compact surface_i.
CN202110287374.6A 2021-03-17 2021-03-17 Two-stage dental point cloud completion method and system based on deep learning network Active CN112967219B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110287374.6A CN112967219B (en) 2021-03-17 2021-03-17 Two-stage dental point cloud completion method and system based on deep learning network


Publications (2)

Publication Number Publication Date
CN112967219A CN112967219A (en) 2021-06-15
CN112967219B true CN112967219B (en) 2023-12-05

Family

ID=76279024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110287374.6A Active CN112967219B (en) 2021-03-17 2021-03-17 Two-stage dental point cloud completion method and system based on deep learning network

Country Status (1)

Country Link
CN (1) CN112967219B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113610956A (en) * 2021-06-17 2021-11-05 深圳市菲森科技有限公司 Method and device for characteristic matching of implant in intraoral scanning and related equipment
CN113538261A (en) * 2021-06-21 2021-10-22 昆明理工大学 Shape repairing method for incomplete stalactite point cloud based on deep learning
CN113269152B (en) * 2021-06-25 2022-07-01 北京邮电大学 Non-equidistant discrete depth completion method
CN113397585B (en) * 2021-07-27 2022-08-05 朱涛 Tooth body model generation method and system based on oral CBCT and oral scan data
CN113705631B (en) * 2021-08-10 2024-01-23 大庆瑞昂环保科技有限公司 3D point cloud target detection method based on graph convolution
CN113808097B (en) * 2021-09-14 2024-04-12 北京主导时代科技有限公司 Method and system for detecting loss of key parts of train
CN113888610B (en) * 2021-10-14 2023-11-07 雅客智慧(北京)科技有限公司 Dental preparation effect evaluation method, detection apparatus, and storage medium
CN114092469B (en) * 2021-12-02 2022-08-26 四川大学 Method and device for determining repair area of blade and readable storage medium
TWI799181B (en) * 2022-03-10 2023-04-11 國立臺中科技大學 Method of establishing integrate network model to generate complete 3d point clouds from sparse 3d point clouds and segment parts
CN116258835B (en) * 2023-05-04 2023-07-28 武汉大学 Point cloud data three-dimensional reconstruction method and system based on deep learning
CN116863432B (en) * 2023-09-04 2023-12-22 之江实验室 Weak supervision laser travelable region prediction method and system based on deep learning
CN116883246B (en) * 2023-09-06 2023-11-14 感跃医疗科技(成都)有限公司 Super-resolution method for CBCT image

Citations (11)

Publication number Priority date Publication date Assignee Title
CN106875386A (en) * 2017-02-13 2017-06-20 苏州江奥光电科技有限公司 A kind of method for carrying out dental health detection automatically using deep learning
KR20190082066A (en) * 2017-12-29 2019-07-09 바이두 온라인 네트웍 테크놀러지 (베이징) 캄파니 리미티드 Method and apparatus for restoring point cloud data
CN110222580A (en) * 2019-05-09 2019-09-10 中国科学院软件研究所 A kind of manpower 3 d pose estimation method and device based on three-dimensional point cloud
CN110443842A (en) * 2019-07-24 2019-11-12 大连理工大学 Depth map prediction technique based on visual angle fusion
CN110998602A (en) * 2017-06-30 2020-04-10 普罗马顿控股有限责任公司 Classification and 3D modeling of 3D dento-maxillofacial structures using deep learning methods
CN111862171A (en) * 2020-08-04 2020-10-30 万申(北京)科技有限公司 CBCT and laser scanning point cloud data tooth registration method based on multi-view fusion
CN112085821A (en) * 2020-08-17 2020-12-15 万申(北京)科技有限公司 Semi-supervised-based CBCT (cone beam computed tomography) and laser scanning point cloud data registration method
CN112087985A (en) * 2018-05-10 2020-12-15 3M创新有限公司 Simulated orthodontic treatment via real-time enhanced visualization
CN112120810A (en) * 2020-09-29 2020-12-25 深圳市深图医学影像设备有限公司 Three-dimensional data generation method of tooth orthodontic concealed appliance
CN112184556A (en) * 2020-10-28 2021-01-05 万申(北京)科技有限公司 Super-resolution imaging method based on oral CBCT (cone beam computed tomography) reconstruction point cloud
CN112200843A (en) * 2020-10-09 2021-01-08 福州大学 CBCT and laser scanning point cloud data tooth registration method based on hyper-voxels

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US10521927B2 (en) * 2017-08-15 2019-12-31 Siemens Healthcare Gmbh Internal body marker prediction from surface data in medical imaging
US10832084B2 (en) * 2018-08-17 2020-11-10 Nec Corporation Dense three-dimensional correspondence estimation with multi-level metric learning and hierarchical matching

Non-Patent Citations (3)

Title
SRF-Net: Spatial Relationship Feature Network for Tooth Point Cloud Classification; Qian Ma; Computer Graphics Forum; Vol. 39, No. 7; 267-277 *
GCNN-based tooth segmentation algorithm for CBCT-simulated intraoral-scan point cloud data; Zhang Yaling, Yu Zekuan; Journal of Computer-Aided Design & Computer Graphics; full text *
Morphology design of missing teeth using a high-resolution deep generative network; Guo Chuang; Dai Ning; Tian Sukun; Sun Yuchun; Yu Qing; Liu Hao; Cheng Xiaosheng; Journal of Image and Graphics (No. 10); full text *

Similar Documents

Publication Publication Date Title
CN112967219B (en) Two-stage dental point cloud completion method and system based on deep learning network
JP7150166B2 (en) CT image generation method and apparatus, computer equipment and computer program
CN110930421B (en) Segmentation method for CBCT (Cone Beam computed tomography) tooth image
US20230186476A1 (en) Object detection and instance segmentation of 3d point clouds based on deep learning
US20210174543A1 (en) Automated determination of a canonical pose of a 3d objects and superimposition of 3d objects using deep learning
Azad et al. Transnorm: Transformer provides a strong spatial normalization mechanism for a deep segmentation model
Wang et al. RAR-U-Net: a residual encoder to attention decoder by residual connections framework for spine segmentation under noisy labels
CN110648331B (en) Detection method for medical image segmentation, medical image segmentation method and device
CN113272869A (en) Three-dimensional shape reconstruction from topograms in medical imaging
WO2023142781A1 (en) Image three-dimensional reconstruction method and apparatus, electronic device, and storage medium
CN110599444B (en) Device, system and non-transitory readable storage medium for predicting fractional flow reserve of a vessel tree
WO2022232559A1 (en) Neural network margin proposal
CN116993926B (en) Single-view human body three-dimensional reconstruction method
Yin et al. CoT-UNet++: A medical image segmentation method based on contextual Transformer and dense connection
CN113936138A (en) Target detection method, system, equipment and medium based on multi-source image fusion
CN116485809B (en) Tooth example segmentation method and system based on self-attention and receptive field adjustment
WO2023108526A1 (en) Medical image segmentation method and system, and terminal and storage medium
CN115100306A (en) Four-dimensional cone-beam CT imaging method and device for pancreatic region
CN113158970B (en) Action identification method and system based on fast and slow dual-flow graph convolutional neural network
CN115359005A (en) Image prediction model generation method, device, computer equipment and storage medium
Japes et al. Multi-view semantic labeling of 3D point clouds for automated plant phenotyping
Hosseinimanesh et al. Improving the quality of dental crown using a transformer-based method
CN113205521A (en) Image segmentation method of medical image data
CN112488178A (en) Network model training method and device, image processing method and device, and equipment
CN115151951A (en) Image similarity determination by analysis of registration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant