CN112967219A - Two-stage dental point cloud completion method and system based on deep learning network - Google Patents

Two-stage dental point cloud completion method and system based on deep learning network

Info

Publication number
CN112967219A
CN112967219A
Authority
CN
China
Prior art keywords
point cloud
data
point
stage
points
Prior art date
Legal status
Granted
Application number
CN202110287374.6A
Other languages
Chinese (zh)
Other versions
CN112967219B (en)
Inventor
于泽宽
张慧贤
郭向华
耿道颖
韩方凯
刘杰
王俊杰
Current Assignee
Huashan Hospital of Fudan University
Original Assignee
Huashan Hospital of Fudan University
Priority date
Filing date
Publication date
Application filed by Huashan Hospital of Fudan University
Priority to CN202110287374.6A
Publication of CN112967219A
Application granted
Publication of CN112967219B
Status: Active

Classifications

    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06N 3/045: Neural networks; combinations of networks
    • G06N 3/08: Neural networks; learning methods
    • G06T 7/0012: Image analysis; biomedical image inspection
    • G06T 7/33: Image registration using feature-based methods
    • G06T 2207/10028: Range image; Depth image; 3D point clouds
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G06T 2207/20221: Image fusion; Image merging
    • G06T 2207/30036: Dental; Teeth

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a two-stage dental point cloud completion method and system based on a deep learning network, in the technical field of medical image processing, comprising the following parts. In the first part, initial point cloud data are constructed from CBCT data and oral scanning data: the patient's CBCT data are reconstructed in three dimensions, mainly with the third-party software Mimics, and the generated CBCT-reconstructed three-dimensional dental model point cloud data are registered with the laser-scanned point cloud data to obtain the three-dimensional dental model point cloud data serving as the gold standard. In the second part, the deep learning network MSN is trained; the laser-scanned point cloud data constructed in the first part are input into the trained MSN, and the completion network processes the input point cloud in two stages. In the first stage, the MSN predicts a complete but coarse-grained point cloud; in the second stage, the coarse-grained predicted point cloud and the input point cloud are fused through a sampling algorithm and a residual connection, yielding a uniformly distributed fine-grained predicted point cloud and completing the dental point cloud.

Description

Two-stage dental point cloud completion method and system based on deep learning network
Technical Field
The invention relates to the technical field of medical image processing, in particular to a two-stage dental point cloud completion method and system based on a deep learning network.
Background
Dental imaging enables dentists to locate and intervene in lesion areas more accurately and to find potential problems, and thus to carry out dental health care and restoration proactively. Three main types of dental imaging are currently used for diagnosis: conventional computed tomography (CT), cone beam computed tomography (CBCT), and intraoral scanners. However, the scan data acquired by intraoral devices are often incomplete, limited by occlusion (masking) and sensor resolution.
Such limited raw data need to be completed to compensate for their structural loss and to improve data quality for subsequent clinical application. After PointNet and PointNet++ achieved point cloud segmentation and classification with deep learning networks, point cloud deep learning gradually became a popular research field. Meanwhile, the point cloud, as an important form of three-dimensional data, is used ever more widely in the medical field. However, point clouds obtained from lidar and similar devices often have missing regions, which complicates subsequent processing. Point cloud completion techniques were developed in response: they estimate a complete point cloud from a missing one, yielding a higher-quality point cloud and achieving the goal of repair.
Chinese patent publication No. CN111383355A discloses a three-dimensional point cloud completion method, device, and computer-readable storage medium. The method comprises: acquiring a three-dimensional point cloud corresponding to a target room, the point cloud comprising a top-surface point cloud, a ground point cloud, and a placed-article point cloud; generating a top-surface projection image from the top-surface point cloud and a ground projection image from the ground and placed-article point clouds; determining at least one region to be completed from the two projection images; and, for each region to be completed, completing it from the surrounding point cloud based on its position. That disclosure can improve the accuracy of hole completion in a three-dimensional point cloud and increase the integrity and visual quality of the house model.
Traditional point cloud completion methods rely on prior information about an object's basic structure (such as symmetry or semantic information) and repair the missing point cloud through such priors. They can only handle missing point clouds with a low missing rate and obvious structural features. With progress in deep learning methods for point cloud analysis and generation, more capable 3D point cloud completion networks such as LGAN-AE, PCN, and 3D-Capsule have been proposed. Deep-learning-based completion takes the missing point cloud as input and the complete point cloud as output, which effectively avoids the heavy memory footprint and the artifacts caused by discretization. However, because point clouds are unordered and irregular, conventional convolutions cannot be applied to them directly, and deep learning on irregular point clouds still faces many challenges: the network may focus too much on the overall characteristics of the object and ignore the geometric information of the missing region, or generate a point cloud biased toward the common features of a class of objects while losing the individual features of the specific object.
Increasingly capable point cloud acquisition equipment can quickly scan large amounts of point cloud data from an object's surface, but because current capabilities for analyzing and generating point clouds are limited, large-scale point cloud data lead to low storage and processing efficiency, distorted reconstructed surfaces, blurred overall results, and further problems such as uneven distribution and holes in the reconstructed point cloud. Efficient point cloud completion that improves point cloud quality in practical applications therefore has important clinical significance and research value in the field of oral medicine.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a two-stage dental point cloud completion method and system based on a deep learning network.
The scheme of the two-stage dental point cloud completion method and system based on the deep learning network is as follows:
In a first aspect, a two-stage dental point cloud completion method based on a deep learning network is provided, the method comprising:
step S1: constructing initial three-dimensional tooth model point cloud data based on CBCT data and mouth scanning data;
step S2: constructing a deep learning network (MSN), and completing MSN training by using an existing training set and a test set;
step S3: inputting the initial three-dimensional tooth model point cloud data obtained in the step S1 into the trained MSN to obtain complete tooth point cloud data after completion.
Preferably, the step S1 includes:
step S1.1: extracting a dental crown three-dimensional model with a laser scanner, and converting it into high-resolution, laser-scanned first three-dimensional dental model point cloud data;
step S1.2: acquiring CBCT data with a cone beam computed tomography scanner, extracting a complete tooth model from the CBCT data by a region growing method, and converting it into CBCT-reconstructed second three-dimensional dental model point cloud data;
step S1.3: registering the first and second three-dimensional dental model point cloud data with a multi-view-fusion tooth registration method for CBCT and laser-scanned point cloud data, to obtain the initial three-dimensional dental model point cloud data.
Preferably, the step S2 includes:
step S2.1: building a deep learning network MSN: the MSN takes point cloud as input, and the point cloud completion is realized through two stages of processing:
the first stage: the deformation prediction stage, in which the network has an encoder-decoder structure; the auto-encoder predicts a coarse but complete point cloud by extracting a global feature vector (GFV), and an expansion penalty prevents overlap between surface elements;
and a second stage: in the fusion and refinement stage, the rough point cloud and the input point cloud are fused;
step S2.2: constructing a joint loss function and optimizing the MSN through it: the expansion penalty loss function L_expansion and the Earth Mover's Distance loss function L_EMD are combined into a joint loss function, through which the MSN is optimized;
step S2.3: training and evaluating the MSN network with the existing training set and test set: dentition CBCT data and corresponding mouth scanning data are obtained through clinical scanning, and the three-dimensional dental model point cloud data obtained after registration serve as the gold standard.
Preferably, the deformation prediction stage in step S2.1 specifically includes:
extracting point cloud data characteristics by adopting a graph convolution encoder:
inputting an input point cloud X, namely the laser-scanned three-dimensional dental model point cloud data, into the graph convolution encoder, wherein the point cloud X lies in Euclidean space and each data point x ∈ X has size 1 × 3;
for each data point x, taking the n points around it as its neighborhood N_x, of size n × 3, with coordinates denoted x_in;
converting, by a point convolution operation with max pooling, the coordinates of the n neighborhood points in N_x relative to the data point x, $\{x_j - x \mid x_j \in N_x\}$, into a feature vector f_in of size 1 × c, where c is the number of channels of the convolution kernel; x_in and f_in are then input into the graph convolution network, whose output is the global feature vector, denoted f_out;
The graph convolution network consists of a modified linear unit ReLu activating function and a graph convolution G-conv, wherein the ReLu activating function maps the input of each neuron in the graph convolution network to the output end; the graph convolution G-conv utilizes the characteristic value of the data point x on the layer τ +1 to calculate the characteristic value of the data point x on the data point x:
Figure BDA0002981045300000041
in the formula
Figure BDA0002981045300000042
For the feature value of the data point x at the τ th layer,
Figure BDA0002981045300000043
is a characteristic value, w, at a layer τ +1 data point0And w1All parameters are learnable parameters, and the characteristic value of the data point x at the Tth layer and the weight of the characteristic value of the neighborhood of the data point are respectively determined;
predicting a coarse point cloud with a deformation-based decoder: the decoder learns a mapping from the 2D unit square to the 3D surface with multi-layer perceptrons, thereby simulating the deformation of the 2D square into the 3D surface and generating K surface elements surface_i, i = 1, 2, …, K, which form a coarse point cloud carrying complex shape information;
regularization of the coarse point cloud: the 3D points generated by each multi-layer perceptron in each forward pass are treated as a vertex set, and a minimum spanning tree T_i is constructed according to the Euclidean distances between the vertices;
the middle vertex of the diameter of T_i, i.e. of the simple path containing the most vertices, is selected as the root node; all edges then point toward the root, which defines the direction of the minimum spanning tree T_i;
the longer an edge of the minimum spanning tree, the farther apart its two endpoints and the higher the probability of mixing with points on other surfaces; the expansion penalty causes the nodes to shrink along their edges toward a more compact area, finally optimizing the generated point cloud. The formula is:

$$L_{expansion} = \frac{1}{KN} \sum_{1 \le i \le K} \; \sum_{(u,v) \in T_i} \mathbb{1}\{dis(u,v) \ge \lambda l_i\} \, dis(u,v)$$

$$l_i = \frac{1}{|T_i|} \sum_{(u,v) \in T_i} dis(u,v)$$

where dis(u, v) is the Euclidean distance between vertices u and v; N is the number of sampling points in the local area; (u, v) ∈ T_i is a directed edge of the minimum spanning tree T_i with vertices u and v; l_i is the average edge length of the minimum spanning tree T_i; $\mathbb{1}\{\cdot\}$ is the indicator function, which screens out edges shorter than λ l_i; and λ is a hyper-parameter that sets the threshold;
because the motion of the points during tree construction can be regarded as infinitesimal, i.e. it leaves the minimum spanning tree unchanged, the expansion penalty function L_expansion is differentiable almost everywhere; for each directed edge (u, v) ∈ T_i of the minimum spanning tree T_i that is longer than λ l_i, a backward gradient is applied only to u, contracting u toward v and forming a more compact surface_i.
Preferably, the fusion and refinement stage in step S2.1 further optimizes the rough point cloud, which specifically includes:
fusing the input point cloud with the rough point cloud, and then performing minimum density sampling:
the minimum density sampling algorithm estimates point density as a sum of Gaussian weights and collects the point of minimum density from the fused point cloud, yielding a uniformly distributed sub-point cloud, expressed as:

$$p_t = \arg\min_{p \in P \setminus P_{t-1}} \sum_{p_j \in P_{t-1}} \exp\!\left(-\frac{\|p - p_j\|^2}{2\sigma^2}\right)$$

where p_t is the t-th sampling point; P_{t-1} is the set of the first t-1 sampling points, with P_t = {p_j | 1 ≤ j ≤ t}; P \ P_{t-1} denotes the points other than the first t-1 sampling points; and σ is a positive number that determines the neighborhood size over which the algorithm operates;
refining the sub-point cloud obtained after sampling: inputting the sub-point cloud into a residual graph convolution network and generating a fine-grained point cloud through point-wise residuals;
the residual graph convolution network extends the graph convolution structure of the graph convolution encoder with a residual connection; its output is added point by point to the sub-point cloud sampled by the MDS algorithm, finally generating a smooth point cloud that predicts the complete shape of the tooth;
because the input point cloud data are more reliable than the predicted rough point cloud data, a binary channel is appended to the coordinates of the sub-point cloud to distinguish the source of each point, where 0 denotes a point from the input point cloud and 1 a point from the rough point cloud.
Preferably, in step S2.2, the expansion penalty loss function L_expansion and the Earth Mover's Distance loss function L_EMD are combined into a joint loss function, wherein L_expansion acts as a regularization term forcing each surface element to shrink, and L_EMD drives the point cloud to describe the complete shape of the tooth; constraining each other, they make each surface element concentrate on a local area as accurately and compactly as possible:
calculating the loss function L_EMD:

$$L_{EMD}(S_{coarse}, S_{gt}) = \min_{\phi} \frac{1}{|S_{coarse}|} \sum_{s_c \in S_{coarse}} \|s_c - \phi(s_c)\|_2$$

$$L_{EMD}(S_{final}, S_{gt}) = \min_{\phi} \frac{1}{|S_{final}|} \sum_{s_f \in S_{final}} \|s_f - \phi(s_f)\|_2$$

wherein S_coarse is the coarse point cloud output by the first stage; s_c is a data point in the coarse point cloud; S_final is the output point cloud after the second-stage refinement; s_f is a data point in the output fine point cloud; S_gt is the gold standard; and φ is a bijective function;
calculating the joint loss function L:

$$L = L_{EMD}(S_{coarse}, S_{gt}) + \alpha L_{expansion} + \beta L_{EMD}(S_{final}, S_{gt})$$

wherein S_coarse is the coarse point cloud output by the first stage; S_final is the output point cloud after the second-stage refinement; S_gt is the gold standard; and α and β are weight parameters with values in the range 0.1-1.0.
In a second aspect, a two-stage dental point cloud completion system based on a deep learning network is provided, the system comprising:
module M1: constructing initial three-dimensional tooth model point cloud data based on CBCT data and mouth scanning data;
module M2: constructing a deep learning network (MSN), and completing MSN training by using an existing training set and a test set;
module M3: inputting the initial three-dimensional tooth model point cloud data obtained in the module M1 into the trained MSN to obtain the complete tooth point cloud data after completion.
Preferably, the module M1 includes:
module M1.1: extracting a dental crown three-dimensional model with a laser scanner, and converting it into high-resolution, laser-scanned first three-dimensional dental model point cloud data;
module M1.2: acquiring CBCT data with a cone beam computed tomography scanner, extracting a complete tooth model from the CBCT data by a region growing method, and converting it into CBCT-reconstructed second three-dimensional dental model point cloud data;
module M1.3: registering the first and second three-dimensional dental model point cloud data with a multi-view-fusion tooth registration method for CBCT and laser-scanned point cloud data, to obtain the initial three-dimensional dental model point cloud data.
Preferably, the module M2 includes:
module M2.1: building a deep learning network MSN: the MSN takes point cloud as input, and the point cloud completion is realized through two stages of processing:
the first stage: the deformation prediction stage, in which the network has an encoder-decoder structure; the auto-encoder predicts a coarse but complete point cloud by extracting a global feature vector (GFV), and an expansion penalty prevents overlap between surface elements;
and a second stage: in the fusion and refinement stage, the rough point cloud and the input point cloud are fused;
module M2.2: constructing a joint loss function and optimizing the MSN through it: the expansion penalty loss function L_expansion and the Earth Mover's Distance loss function L_EMD are combined into a joint loss function, through which the MSN is optimized;
module M2.3: training and evaluating the MSN network with the existing training set and test set: dentition CBCT data and corresponding mouth scanning data are obtained through clinical scanning, and the three-dimensional dental model point cloud data obtained after registration serve as the gold standard.
Preferably, the deformation prediction stage in the module M2.1 specifically includes:
module M2.1.1: extracting point cloud data characteristics by adopting a graph convolution encoder:
inputting an input point cloud X, namely the laser-scanned three-dimensional dental model point cloud data, into the graph convolution encoder, wherein the point cloud X lies in Euclidean space and each data point x ∈ X has size 1 × 3;
for each data point x, taking the n points around it as its neighborhood N_x, of size n × 3, with coordinates denoted x_in;
converting, by a point convolution operation with max pooling, the coordinates of the n neighborhood points in N_x relative to the data point x, $\{x_j - x \mid x_j \in N_x\}$, into a feature vector f_in of size 1 × c, where c is the number of channels of the convolution kernel; x_in and f_in are then input into the graph convolution network, whose output is the global feature vector, denoted f_out;
The graph convolution network consists of a modified linear unit ReLu activating function and a graph convolution G-conv, wherein the ReLu activating function maps the input of each neuron in the graph convolution network to the output end; the graph convolution G-conv utilizes the characteristic value of the data point x on the layer τ +1 to calculate the characteristic value of the data point x on the data point x:
Figure BDA0002981045300000073
in the formula
Figure BDA0002981045300000074
For the feature value of the data point x at the τ th layer,
Figure BDA0002981045300000075
is a characteristic value, w, at a layer τ +1 data point0And w1All parameters are learnable parameters, and the characteristic value of the data point x at the Tth layer and the weight of the characteristic value of the neighborhood of the data point are respectively determined;
module M2.1.2: predicting a coarse point cloud with a deformation-based decoder: the decoder learns a mapping from the 2D unit square to the 3D surface with multi-layer perceptrons, thereby simulating the deformation of the 2D square into the 3D surface and generating K surface elements surface_i, i = 1, 2, …, K, which form a coarse point cloud carrying complex shape information;
module M2.1.3: regularization of the coarse point cloud: the 3D points generated by each multi-layer perceptron in each forward pass are treated as a vertex set, and a minimum spanning tree T_i is constructed according to the Euclidean distances between the vertices;
the middle vertex of the diameter of T_i, i.e. of the simple path containing the most vertices, is selected as the root node; all edges then point toward the root, which defines the direction of the minimum spanning tree T_i;
the longer an edge of the minimum spanning tree, the farther apart its two endpoints and the higher the probability of mixing with points on other surfaces; the expansion penalty causes the nodes to shrink along their edges toward a more compact area, finally optimizing the generated point cloud. The formula is:

$$L_{expansion} = \frac{1}{KN} \sum_{1 \le i \le K} \; \sum_{(u,v) \in T_i} \mathbb{1}\{dis(u,v) \ge \lambda l_i\} \, dis(u,v)$$

$$l_i = \frac{1}{|T_i|} \sum_{(u,v) \in T_i} dis(u,v)$$

where dis(u, v) is the Euclidean distance between vertices u and v; N is the number of sampling points in the local area; (u, v) ∈ T_i is a directed edge of the minimum spanning tree T_i with vertices u and v; l_i is the average edge length of the minimum spanning tree T_i; $\mathbb{1}\{\cdot\}$ is the indicator function, which screens out edges shorter than λ l_i; and λ is a hyper-parameter that sets the threshold;
because the motion of the points during tree construction can be regarded as infinitesimal, i.e. it leaves the minimum spanning tree unchanged, the expansion penalty function L_expansion is differentiable almost everywhere; for each directed edge (u, v) ∈ T_i of the minimum spanning tree T_i that is longer than λ l_i, a backward gradient is applied only to u, contracting u toward v and forming a more compact surface_i.
Compared with the prior art, the invention has the following beneficial effects:
1. the method adopts a deformation-and-sampling-network-based approach and completes the dental point cloud data in two stages: in the first stage it predicts a complete but coarse-grained point cloud, and in the second stage the coarse-grained predicted point cloud and the input point cloud are fused through a sampling algorithm, yielding a uniformly distributed fine-grained predicted point cloud;
2. the joint loss function ensures that points are concentrated within local areas, the minimum density sampling algorithm MDS preserves the known tooth structure, and the prediction result effectively avoids problems such as uneven distribution, blurred details, and structural loss.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a flow chart of a two-stage dental point cloud completion method based on a deep learning network according to the present invention;
FIG. 2 is a MSN deep learning network structure diagram of the two-stage dental point cloud completion method based on the deep learning network of the present invention;
FIG. 3 is a graph convolution network structure diagram in the MSN deep learning network of the two-stage dental point cloud completion method based on the deep learning network of the present invention;
FIG. 4 is a structural diagram of a decoder based on deformation in the MSN deep learning network of the two-stage dental point cloud completion method based on the deep learning network of the present invention;
FIG. 5 is a structural diagram of the residual graph convolution network in the MSN deep learning network of the two-stage dental point cloud completion method based on the deep learning network of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit it in any way. It should be noted that various changes and modifications, obvious to those skilled in the art, can be made without departing from the spirit of the invention; all of these fall within the scope of the present invention.
The embodiment of the invention provides a two-stage dental point cloud completion method based on a deep learning network; as shown in FIG. 1, the method comprises the following steps. Step S1: constructing initial three-dimensional dental model point cloud data based on CBCT data and mouth scanning data, specifically:
A dental crown three-dimensional model is extracted with a laser scanner (intraoral scanning device) and converted into high-resolution, laser-scanned first three-dimensional dental model point cloud data. CBCT data are acquired with a cone beam computed tomography scanner; a complete tooth model is extracted from the CBCT data by a region growing method and converted into CBCT-reconstructed second three-dimensional dental model point cloud data. The first and second three-dimensional dental model point cloud data are then registered with a multi-view-fusion tooth registration method for CBCT and laser-scanned point cloud data, giving the initial three-dimensional dental model point cloud data.
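For orientation, here is a minimal sketch of this construction step using the Open3D library; the file names are placeholders, and plain point-to-point ICP stands in for the multi-view-fusion registration method, which the patent does not detail:

```python
import numpy as np
import open3d as o3d

# Hypothetical inputs: a CBCT-reconstructed tooth model (e.g., exported from Mimics)
# and an intraoral laser scan, both already converted to point clouds.
cbct_pcd = o3d.io.read_point_cloud("cbct_teeth.ply")   # second point cloud (full teeth)
scan_pcd = o3d.io.read_point_cloud("laser_scan.ply")   # first point cloud (crowns)

# Downsample both clouds for a stable alignment.
src = scan_pcd.voxel_down_sample(voxel_size=0.5)
dst = cbct_pcd.voxel_down_sample(voxel_size=0.5)

# Point-to-point ICP as a stand-in for the patent's multi-view-fusion registration.
icp = o3d.pipelines.registration.registration_icp(
    src, dst, max_correspondence_distance=2.0, init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

scan_pcd.transform(icp.transformation)   # bring the scan into CBCT space
merged = cbct_pcd + scan_pcd             # fused gold-standard dental point cloud
```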
Referring to fig. 2 and 3, step S2: constructing a deep learning network MSN, and completing the MSN training by using the existing training set and test set, wherein the method specifically comprises the following steps:
Building the deep learning network MSN: the MSN takes a point cloud as input and achieves point cloud completion through two stages of processing. The first stage: the deformation prediction stage, in which the network has an encoder-decoder structure; the auto-encoder predicts a coarse but complete point cloud by extracting a global feature vector (GFV), and an expansion penalty prevents overlap between surface elements.
The second stage: the fusion and refinement stage, in which we fuse the coarse point cloud with the input point cloud. A sub-point cloud of the fused cloud is obtained through a minimum density sampling (MDS) algorithm and fed into a residual network for point-wise residual prediction. A skip connection links the point-wise residual prediction with the sub-point cloud, and the network finally outputs a complete point cloud with a uniformly distributed surface.
Referring to fig. 3 and 4, in the deformation prediction stage, a graph convolution encoder is used to extract the point cloud data features:
The input point cloud X, namely the laser-scanned three-dimensional dental model point cloud data, is input into the graph convolution encoder; the point cloud X lies in Euclidean space, and each data point x ∈ X has size 1 × 3. For each data point x, the n points around it are taken as its neighborhood N_x, of size n × 3, with coordinates denoted x_in. A point convolution operation with max pooling converts the coordinates of the n neighborhood points in N_x relative to the data point x, $\{x_j - x \mid x_j \in N_x\}$, into a feature vector f_in of size 1 × c, where c is the number of channels of the convolution kernel; x_in and f_in are then input into the graph convolution network, whose output is the global feature vector GFV, denoted f_out.
The graph convolution network is composed of a corrected Linear Unit (ReLU) activation function and a graph convolution G-conv, wherein the ReLu activation function maps the input of each neuron in the graph convolution network to the output end; the graph convolution G-conv utilizes the characteristic value of the data point x on the layer τ +1 to calculate the characteristic value of the data point x on the data point x:
Figure BDA0002981045300000101
in the formula
Figure BDA0002981045300000102
For the feature value of the data point x at the τ th layer,
Figure BDA0002981045300000103
is a characteristic value, w, at a layer τ +1 data point0And w1All the parameters are learnable parameters, and the eigenvalue of the data point x at the τ th layer and the weight of the eigenvalue of the neighborhood of the data point are respectively determined.
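To make the layer concrete, here is a minimal PyTorch sketch of one G-conv step; the fixed-size neighborhood index tensor and the summed aggregation over N_x are assumptions consistent with the formula above, not code from the patent:

```python
import torch
import torch.nn as nn

class GConv(nn.Module):
    """One G-conv layer: f_x(tau+1) = w0*f_x(tau) + w1*sum(f_y(tau) for y in N_x)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.w0 = nn.Linear(in_ch, out_ch, bias=False)  # weights the point's own feature
        self.w1 = nn.Linear(in_ch, out_ch, bias=False)  # weights the aggregated neighborhood

    def forward(self, feats, neighbor_idx):
        # feats: (P, C) per-point features; neighbor_idx: (P, n) indices of each N_x.
        agg = feats[neighbor_idx].sum(dim=1)              # sum the n neighbor features
        return torch.relu(self.w0(feats) + self.w1(agg))  # ReLU activation
```

Stacking a few such layers and max-pooling over all points would then yield the global feature vector f_out.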
Predicting a coarse point cloud with a deformation-based decoder: the decoder learns the mapping from the 2D unit squares to the 3D surface using a Multi-Layer Perceptron (MLP), thereby simulating the deformation of the 2D squares to the 3D surface, generating K surface surfacesiI ═ 1, 2, …, K; a coarse point cloud with complex shape information is formed.
To facilitate the generation of a local surface, each surface element should be concentrated in a relatively simple local area. In each forward pass, N points are first randomly sampled from the unit square (local area), where N is set to 128-512. The coordinates of the N sampled points are concatenated with the obtained feature vector GFV and passed as input to K multi-layer perceptrons, where K is set to 4-16. Each sampled 2D point thus generates K corresponding 3D points on K different surfaces, so KN 3D points are output in each forward pass, finally realizing a continuous mapping from 2D to 3D. The denser the decoder samples the 2D square, the smoother the resulting 3D surface.
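A minimal PyTorch sketch of such a morphing decoder follows; the GFV dimension, hidden width, and default values below are illustrative assumptions, with N and K inside the ranges just stated:

```python
import torch
import torch.nn as nn

class MorphingDecoder(nn.Module):
    """K MLPs, each deforming the 2D unit square into one 3D surface element."""
    def __init__(self, gfv_dim=1024, k=8, hidden=256):
        super().__init__()
        self.mlps = nn.ModuleList(
            nn.Sequential(nn.Linear(2 + gfv_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, hidden), nn.ReLU(),
                          nn.Linear(hidden, 3))
            for _ in range(k))

    def forward(self, gfv, n=256):
        # gfv: (B, gfv_dim) global feature vector; sample N 2D points per forward pass.
        b = gfv.size(0)
        uv = torch.rand(b, n, 2, device=gfv.device)   # N random points in the unit square
        feat = gfv.unsqueeze(1).expand(b, n, -1)      # tile the GFV onto every 2D point
        pts = [mlp(torch.cat([uv, feat], dim=-1)) for mlp in self.mlps]
        return torch.cat(pts, dim=1)                  # (B, K*N, 3) coarse point cloud
```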
Regularization of the coarse point cloud: the joint use of MLPs may cause overlap and mixing between surface elements, which on the one hand may lead to an uneven distribution of the predicted point cloud density; on the other hand, the expansion area and the coverage area of each surface element are increased, so that the 2D local area is deformed, and the capturing difficulty of the local detail information is increased. The invention introduces an expansion penalty term, only carries out regularization treatment on sparsely distributed points in the generated surface, ensures the compactness and the concentration of different surfaces in a local area, and prevents the different surfaces from being covered and overlapped mutually.
Specifically, the invention treats the 3D points generated by each multi-layer perceptron in each forward pass as a vertex set and constructs a minimum spanning tree T_i according to the Euclidean distances between the vertices.
The middle vertex of the diameter of T_i, i.e. of the simple path containing the most vertices, is selected as the root node, and all edges then point toward the root, defining the direction of the minimum spanning tree T_i; for the K MLPs there are thus K directed minimum spanning trees describing the distribution of their points.
The longer the edge of the minimum spanning tree is, the longer the distance between two points is, the higher the probability of mixing with the points on other surfaces is, the expansion punishment causes the root nodes to shrink to a more compact area along the edge, and finally, the purpose of optimally generating the point cloud is achieved, and the formula is expressed as follows:
Figure BDA0002981045300000104
Figure BDA0002981045300000105
where dis (u, v) is the Euclidean distance between vertices u and v; n is the number of sampling points in the local area; (u, v) ∈ TiIs a minimum spanning tree TiA directed edge with u and v as vertexes; liRepresenting a minimum spanning tree TiThe average length of the middle edge;
Figure BDA0002981045300000113
for indicating functions, for screening out side lengths smaller than λ liThe edge of (1); λ is a hyper-parameter, which determines the size of the threshold (the value range is 1.0-2.0), and in this embodiment, the value is 1.5.
Because the motion of the points during tree construction can be regarded as infinitesimal, i.e. it leaves the minimum spanning tree unchanged, the expansion penalty function L_expansion is differentiable almost everywhere; for each directed edge (u, v) ∈ T_i of the minimum spanning tree T_i that is longer than λ l_i, a backward gradient is applied only to u, contracting u toward v and forming a more compact surface_i.
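The penalty can be illustrated with a plain NumPy/SciPy sketch that evaluates L_expansion for the K vertex sets; this version is not differentiable, whereas in training the same quantity is implemented as a custom autograd operation whose backward gradient acts only on u, as described above:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

def expansion_penalty(points_per_mlp, lam=1.5):
    """L_expansion over K vertex sets (one (N, 3) array per MLP), per the formula above."""
    k, n = len(points_per_mlp), points_per_mlp[0].shape[0]
    total = 0.0
    for pts in points_per_mlp:
        d = cdist(pts, pts)                       # pairwise Euclidean distances
        edges = minimum_spanning_tree(d).data     # lengths of the N-1 edges of T_i
        l_i = edges.mean()                        # average edge length of T_i
        total += edges[edges >= lam * l_i].sum()  # indicator keeps only long edges
    return total / (k * n)
```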
Secondly, further optimizing the rough point cloud in the fusion and fine stage, which specifically comprises the following steps:
fusing the input point cloud and the rough point cloud, and then sampling at the minimum density:
the densities of the input point cloud and the rough point cloud may be different, and direct fusion may cause problems of overlapping and merging between the point clouds, so that the fused point clouds are not uniformly distributed. The minimum density sampling MDS algorithm adopts a Gaussian weight summation method to estimate the density of points, collects the point with the minimum density from the fused point cloud, and obtains evenly distributed sub-point clouds, wherein the specific formula is expressed as follows:
Figure BDA0002981045300000111
wherein p istIs the t-th sampling point; pt-1Is the first t-1 sample point set, Pt={pj|1≤j≤t};
Figure BDA0002981045300000112
Points other than the first t-1 sampling points; σ is a positive number used to determine the neighborhood size for which the algorithm applies.
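A greedy NumPy sketch of this sampling rule, maintaining the Gaussian density sum incrementally, might look as follows (the random seed point is an assumption; the patent does not state how the first sample is chosen):

```python
import numpy as np

def minimum_density_sampling(points, num_samples, sigma=1.0):
    """Pick points one by one, each time taking the point with the smallest
    sum of Gaussian weights to the already-sampled set (the formula above).
    points: (M, 3) fused point cloud; returns indices of the sub-point cloud."""
    density = np.zeros(points.shape[0])
    chosen = [np.random.randint(points.shape[0])]   # assumed arbitrary first point
    for _ in range(num_samples - 1):
        p_last = points[chosen[-1]]
        # Add only the newest sample's contribution to every point's density.
        density += np.exp(-np.sum((points - p_last) ** 2, axis=1) / (2 * sigma ** 2))
        density[chosen] = np.inf                    # never re-select a sampled point
        chosen.append(int(np.argmin(density)))
    return np.array(chosen)
```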
Referring to FIG. 5, the sub-point cloud obtained after sampling is refined: it is fed into a residual graph convolution network, which generates a fine-grained point cloud through point-wise residuals. The residual graph convolution network extends the graph convolution structure of the graph convolution encoder with a residual connection; its output is added point by point to the sub-point cloud sampled by the MDS algorithm, finally generating a smooth point cloud that predicts the complete shape of the tooth.
Because the input point cloud data are more reliable than the predicted coarse point cloud data, a binary channel is appended to the coordinates of the sub-point cloud to distinguish the source of each point, where 0 denotes a point from the input point cloud and 1 a point from the coarse point cloud.
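Putting fusion, sampling, and refinement together, the bookkeeping can be sketched in PyTorch as follows; `sample_fn` and `refiner` are hypothetical stand-ins for the MDS algorithm and the residual graph convolution network:

```python
import torch

def fuse_and_refine(input_pc, coarse_pc, sample_fn, refiner):
    # input_pc: (N_in, 3) scan points; coarse_pc: (N_c, 3) first-stage prediction.
    zeros = torch.zeros(input_pc.size(0), 1, device=input_pc.device)  # source flag 0: input
    ones = torch.ones(coarse_pc.size(0), 1, device=coarse_pc.device)  # source flag 1: coarse
    tagged = torch.cat([torch.cat([input_pc, zeros], dim=1),
                        torch.cat([coarse_pc, ones], dim=1)], dim=0)  # (N_in + N_c, 4)
    sub = sample_fn(tagged)           # MDS: uniformly distributed sub-point cloud
    return sub[:, :3] + refiner(sub)  # point-wise residual (skip) connection, (n, 3)
```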
Constructing a joint loss function and optimizing the MSN through the loss function: penalty for inflation function LexpansionDistance to bulldozer (EMD) loss function LEMDThe MSNs are optimized by the joint loss function in combination as the joint loss function. In particular asThe following:
penalty for inflation function LexpansionDistance loss function L from bulldozerEMDCombined as a joint loss function, wherein the dilation penalty loss function LexpansionAs a regularization term to force each surface element to shrink, and a bulldozer distance loss function LEMDThe point cloud is prompted to describe the complete shape of the tooth body, and the point cloud and the tooth body are mutually constrained to enable each surface element to be concentrated in a local area as accurately and compactly as possible, and the method comprises the following steps:
calculating a loss function LEMD
Figure BDA0002981045300000121
Figure BDA0002981045300000122
Wherein S iscoarseA coarse point cloud output for the first stage; scData points in the coarse point cloud; sfinalThe output point cloud after the second stage of thinning processing is obtained; sfOutputting data points in the fine point cloud; sgtIs a gold standard; phi is a bijective function;
calculating a joint loss function L:
L=LEMD(Scoarse,Sgt)+αLexpansion+βLEMD(Sfinal,Sgt)
wherein S iscoarseA coarse point cloud output for the first stage; sfinalThe output point cloud after the second stage of thinning processing is obtained; sgtIs a gold standard; alpha and beta are equal weight parameters, and the value range is 0.1-1.0, in the embodiment, alpha is 0.1, and beta is 1.0.
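For small point sets, the joint loss above can be sketched with an exact assignment-based EMD in NumPy/SciPy; large-scale training would normally substitute a fast approximate EMD solver for the Hungarian assignment used here:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def emd_loss(s1, s2):
    """L_EMD via an optimal bijection phi between two equal-size (N, 3) point sets."""
    cost = np.linalg.norm(s1[:, None, :] - s2[None, :, :], axis=-1)  # (N, N) distances
    row, col = linear_sum_assignment(cost)                           # optimal assignment phi
    return cost[row, col].mean()

def joint_loss(s_coarse, s_final, s_gt, l_expansion, alpha=0.1, beta=1.0):
    # L = L_EMD(S_coarse, S_gt) + alpha * L_expansion + beta * L_EMD(S_final, S_gt)
    return emd_loss(s_coarse, s_gt) + alpha * l_expansion + beta * emd_loss(s_final, s_gt)
```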
The Chamfer Distance (CD) and the Earth Mover's Distance (EMD) serve as indices for evaluating the similarity between the completed point cloud data and the gold standard; the smaller the index, the better the performance.
The uniformity of the completed point cloud's distribution is measured with the Normalized Uniformity Coefficient (NUC); again, the smaller the index, the better the performance.
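As a reference for the evaluation step, a common symmetric Chamfer Distance variant is sketched below (the patent does not spell out its exact CD formula, so the averaging convention is an assumption):

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(s1, s2):
    """Symmetric Chamfer Distance between two (N, 3) point clouds; lower is better."""
    d12, _ = cKDTree(s2).query(s1)  # each point of s1 to its nearest neighbour in s2
    d21, _ = cKDTree(s1).query(s2)  # each point of s2 to its nearest neighbour in s1
    return d12.mean() + d21.mean()
```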
The MSN network is trained and evaluated with the existing training and test sets: dentition CBCT data and corresponding mouth scanning data are obtained through clinical scanning, and the three-dimensional dental model point cloud data obtained after registration serve as the gold standard. Each group in the dataset consists of one patient's laser-scanned three-dimensional dental model point cloud data and the corresponding three-dimensional dental model point cloud data (gold standard); the larger part is selected as the training set, accounting for 50%-90% of the data, and the smaller part as the test set, accounting for 10%-50%. The training set is used to train the MSN, and after training is finished the test set evaluates the generalization ability of the MSN.
Finally, step S3: inputting the initial three-dimensional tooth model point cloud data obtained in the step S1 into the trained MSN to obtain complete tooth point cloud data after completion.
The embodiment of the invention provides a two-stage dental point cloud completion method based on a deep learning network. In the first stage, the method predicts a complete but coarse-grained point cloud; in the second stage, the coarse-grained predicted point cloud and the input point cloud are fused through a sampling algorithm to obtain a uniformly distributed fine-grained predicted point cloud. The joint loss function ensures the concentrated distribution of points in local areas, the minimum density sampling algorithm MDS preserves the known tooth structure, and the prediction result effectively avoids problems such as uneven distribution, blurred details, and structural loss.
Those skilled in the art will appreciate that, in addition to implementing the system and its various devices, modules, units provided by the present invention as pure computer readable program code, the system and its various devices, modules, units provided by the present invention can be fully implemented by logically programming method steps in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system and various devices, modules and units thereof provided by the invention can be regarded as a hardware component, and the devices, modules and units included in the system for realizing various functions can also be regarded as structures in the hardware component; means, modules, units for performing the various functions may also be regarded as structures within both software modules and hardware components for performing the method.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (10)

1. A two-stage dental point cloud completion method based on a deep learning network is characterized by comprising the following steps:
step S1: constructing initial three-dimensional tooth model point cloud data based on CBCT data and mouth scanning data;
step S2: constructing a deep learning network (MSN), and completing MSN training by using an existing training set and a test set;
step S3: inputting the initial three-dimensional tooth model point cloud data obtained in the step S1 into the trained MSN to obtain complete tooth point cloud data after completion.
2. The two-stage dental point cloud completion method based on the deep learning network of claim 1, wherein the step S1 comprises:
step S1.1: extracting a dental crown three-dimensional model with a laser scanner, and converting it into high-resolution, laser-scanned first three-dimensional dental model point cloud data;
step S1.2: acquiring CBCT data with a cone beam computed tomography scanner, extracting a complete tooth model from the CBCT data by a region growing method, and converting it into CBCT-reconstructed second three-dimensional dental model point cloud data;
step S1.3: registering the first and second three-dimensional dental model point cloud data with a multi-view-fusion tooth registration method for CBCT and laser-scanned point cloud data, to obtain the initial three-dimensional dental model point cloud data.
3. The two-stage dental point cloud completion method based on the deep learning network of claim 1, wherein the step S2 comprises:
step S2.1: building a deep learning network MSN: the MSN takes point cloud as input, and the point cloud completion is realized through two stages of processing:
the first stage: the deformation prediction stage, in which the network has an encoder-decoder structure; the auto-encoder predicts a coarse but complete point cloud by extracting a global feature vector (GFV), and an expansion penalty prevents overlap between surface elements;
and a second stage: in the fusion and refinement stage, the rough point cloud and the input point cloud are fused;
step S2.2: constructing a joint loss function and optimizing the MSN through it: the expansion penalty loss function L_expansion and the Earth Mover's Distance loss function L_EMD are combined into a joint loss function, through which the MSN is optimized;
step S2.3: training and evaluating the MSN network with the existing training set and test set: dentition CBCT data and corresponding mouth scanning data are obtained through clinical scanning, and the three-dimensional dental model point cloud data obtained after registration serve as the gold standard.
4. The two-stage dental point cloud completion method based on the deep learning network as claimed in claim 3, wherein the deformation prediction stage in the step S2.1 specifically comprises:
step S2.1.1: extracting point cloud data characteristics by adopting a graph convolution encoder:
inputting an input point cloud X, namely the laser-scanned three-dimensional dental model point cloud data, into the graph convolution encoder, wherein the point cloud X lies in Euclidean space and each data point x ∈ X has size 1 × 3;
for each data point x, taking the n points around it as its neighborhood N_x, of size n × 3, with coordinates denoted x_in;
converting, by a point convolution operation with max pooling, the coordinates of the n neighborhood points in N_x relative to the data point x, $\{x_j - x \mid x_j \in N_x\}$, into a feature vector f_in of size 1 × c, where c is the number of channels of the convolution kernel; x_in and f_in are then input into the graph convolution network, whose output is the global feature vector, denoted f_out;
The graph convolution network consists of a modified linear unit ReLu activating function and a graph convolution G-conv, wherein the ReLu activating function maps the input of each neuron in the graph convolution network to the output end; the graph convolution G-conv utilizes the characteristic value of the data point x on the layer τ +1 to calculate the characteristic value of the data point x on the data point x:
Figure FDA0002981045290000023
in the formula
Figure FDA0002981045290000024
For the feature value of the data point x at the τ th layer,
Figure FDA0002981045290000025
is a characteristic value, w, at a layer τ +1 data point0And w1All parameters are learnable parameters, and the characteristic value of the data point x at the Tth layer and the weight of the characteristic value of the neighborhood of the data point are respectively determined;
step S2.1.2: predicting a coarse point cloud with a deformation-based decoder: the decoder learns a mapping from the 2D unit square to the 3D surface with multi-layer perceptrons, thereby simulating the deformation of the 2D square into the 3D surface and generating K surface elements surface_i, i = 1, 2, …, K, which form a coarse point cloud carrying complex shape information;
step S2.1.3: regularization of the coarse point cloud: the 3D points generated by each multi-layer perceptron in each forward pass are treated as a vertex set, and a minimum spanning tree T_i is constructed according to the Euclidean distances between the vertices;
the middle vertex of the diameter of T_i, i.e. of the simple path containing the most vertices, is selected as the root node; all edges then point toward the root, which defines the direction of the minimum spanning tree T_i;
the longer an edge of the minimum spanning tree, the farther apart its two endpoints and the higher the probability of mixing with points on other surfaces; the expansion penalty causes the nodes to shrink along their edges toward a more compact area, finally optimizing the generated point cloud. The formula is:

$$L_{expansion} = \frac{1}{KN} \sum_{1 \le i \le K} \; \sum_{(u,v) \in T_i} \mathbb{1}\{dis(u,v) \ge \lambda l_i\} \, dis(u,v)$$

$$l_i = \frac{1}{|T_i|} \sum_{(u,v) \in T_i} dis(u,v)$$

where dis(u, v) is the Euclidean distance between vertices u and v; N is the number of sampling points in the local area; (u, v) ∈ T_i is a directed edge of the minimum spanning tree T_i with vertices u and v; l_i is the average edge length of the minimum spanning tree T_i; $\mathbb{1}\{\cdot\}$ is the indicator function, which screens out edges shorter than λ l_i; and λ is a hyper-parameter that sets the threshold;
since the motion of the points during tree construction can be regarded as infinitesimal, i.e. it leaves the minimum spanning tree unchanged, the expansion penalty function L_expansion is differentiable almost everywhere; for each directed edge (u, v) ∈ T_i of the minimum spanning tree T_i that is longer than λ l_i, a backward gradient is applied only to u, contracting u toward v and forming a more compact surface_i.
5. The two-stage dental point cloud completion method based on the deep learning network as claimed in claim 3, wherein the fusion and refinement stage in step S2.1 further optimizes the rough point cloud, specifically comprising:
step S2.1.4: fusing the input point cloud with the coarse point cloud, and then performing minimum density sampling:
the minimum density sampling algorithm estimates point density as a sum of Gaussian weights and collects the point of minimum density from the fused point cloud, yielding a uniformly distributed sub-point cloud:

$$p_t = \arg\min_{p \in P \setminus P_{t-1}} \sum_{p_j \in P_{t-1}} \exp\!\left(-\frac{\|p - p_j\|^2}{2\sigma^2}\right)$$

wherein p_t is the t-th sampling point; P_{t-1} is the set of the first t-1 sampling points, with P_t = {p_j | 1 ≤ j ≤ t}; P \ P_{t-1} denotes the points other than the first t-1 sampling points; and σ is a positive number determining the neighborhood size over which the algorithm operates;
step S2.1.5: refining the sub-point cloud obtained after sampling: inputting the sub-point cloud into a residual graph convolution network and generating a fine-grained point cloud through point-wise residuals;
the residual graph convolution network extends the graph convolution structure of the graph convolution encoder with a residual connection; its output is added point by point to the sub-point cloud sampled by the MDS algorithm, finally generating a smooth point cloud that predicts the complete shape of the tooth;
because the input point cloud data are more reliable than the predicted coarse point cloud data, a binary channel is appended to the coordinates of the sub-point cloud to distinguish the source of each point, where 0 denotes a point from the input point cloud and 1 a point from the coarse point cloud.
6. The two-stage dental point cloud completion method based on deep learning network as claimed in claim 3, wherein in step S2.2, a swelling penalty loss function L is appliedexpansionDistance loss function L from bulldozerEMDCombined as a joint loss function, wherein the dilation penalty loss function LexpansionAs a regularization term to force each surface element to shrink, and a bulldozer distance loss function LEMDThe method is characterized by comprising the following steps of enabling point cloud to describe the complete shape of a tooth body, enabling each surface element to be more accurately and compactly concentrated in a local area due to the mutual constraint of the point cloud and the tooth body:
step S2.2.1: calculating the loss function L_EMD:
$$L_{EMD}(S_{coarse}, S_{gt}) = \min_{\phi}\frac{1}{\lvert S_{coarse}\rvert}\sum_{s_c \in S_{coarse}} \lVert s_c - \phi(s_c)\rVert_2$$
$$L_{EMD}(S_{final}, S_{gt}) = \min_{\phi}\frac{1}{\lvert S_{final}\rvert}\sum_{s_f \in S_{final}} \lVert s_f - \phi(s_f)\rVert_2$$
where S_coarse is the coarse point cloud output by the first stage; s_c is a data point in the coarse point cloud; S_final is the point cloud output after the second-stage refinement; s_f is a data point in the output fine point cloud; S_gt is the gold standard; φ is a bijection;
step S2.2.2: calculating the joint loss function L:
L = L_EMD(S_coarse, S_gt) + αL_expansion + βL_EMD(S_final, S_gt)
where S_coarse is the coarse point cloud output by the first stage; S_final is the point cloud output after the second-stage refinement; S_gt is the gold standard; α and β are weighting parameters with values in the range 0.1 to 1.0.
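A hedged sketch of the joint loss follows; the exact EMD solver used in training is not specified in the claim, so this sketch substitutes a Hungarian assignment (exact but slow, equal-sized clouds only), with α and β defaults taken from the stated 0.1–1.0 range:

```python
# Joint loss sketch: two EMD terms plus the expansion penalty regularizer.
import numpy as np
from scipy.optimize import linear_sum_assignment

def emd(a, b):
    """Earth mover's distance between equal-sized (N, 3) point sets."""
    cost = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)  # optimal bijection phi
    return cost[rows, cols].mean()

def joint_loss(s_coarse, s_final, s_gt, l_expansion, alpha=0.1, beta=1.0):
    return emd(s_coarse, s_gt) + alpha * l_expansion + beta * emd(s_final, s_gt)
```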
7. A two-stage dental point cloud completion system based on a deep learning network is characterized by comprising:
module M1: constructing initial three-dimensional tooth model point cloud data based on CBCT data and mouth scanning data;
module M2: constructing a deep learning network (MSN), and completing MSN training by using an existing training set and a test set;
module M3: inputting the initial three-dimensional dental model point cloud data obtained by the module M1 into the trained MSN to obtain the completed full-tooth point cloud data.
8. The deep learning network-based two-stage dental point cloud completion system according to claim 7, wherein the module M1 comprises:
module M1.1: extracting a three-dimensional crown model with a laser scanner, and converting it into high-resolution, laser-scanned first three-dimensional dental model point cloud data;
module M1.2: acquiring CBCT data with a cone beam computed tomography scanner, extracting a complete tooth model from the CBCT data by a region growing method, and converting it into CBCT-reconstructed second three-dimensional dental model point cloud data;
module M1.3: and registering the first three-dimensional dental model point cloud data and the second three-dimensional dental model point cloud data by using a CBCT and laser scanning point cloud data tooth registration method based on multi-view fusion to obtain initial three-dimensional dental model point cloud data.
9. The deep learning network-based two-stage dental point cloud completion system according to claim 7, wherein the module M2 comprises:
module M2.1: building the deep learning network MSN: the MSN takes a point cloud as input and realizes point cloud completion through two stages of processing:
the first stage: the deformation prediction stage, in which the network adopts an encoder-decoder structure; the auto-encoder predicts a coarse complete point cloud by extracting a global feature vector (GFV), and prevents overlap between surface elements by using an expansion penalty;
the second stage: the fusion and refinement stage, in which the coarse point cloud and the input point cloud are fused;
module M2.2: constructing a joint loss function and optimizing the MSN through it: the expansion penalty function L_expansion and the earth mover's distance loss function L_EMD are combined into a joint loss function, and the MSN is optimized through this joint loss function;
module M2.3: training and evaluating the MSN network with the existing training set and test set: dentition CBCT data and corresponding mouth scanning data are obtained through clinical scanning, and the three-dimensional dental model point cloud data obtained after registration are taken as the gold standard.
10. The deep learning network-based two-stage dental point cloud completion system according to claim 9, wherein the deformation prediction stage in the module M2.1 specifically comprises:
module M2.1.1: extracting point cloud data features with a graph convolution encoder:
an input point cloud X, namely the point cloud data of the laser-scanned three-dimensional dental model, is fed into the graph convolution encoder; the point cloud X lies in Euclidean space, and each data point x ∈ X has size 1 × 3;
for each data point x, the n points around it are taken as its neighborhood N_x, whose size is n × 3 and whose coordinates are denoted x_in;
through a point-wise convolution operation with max pooling, the coordinates of the n neighborhood points in N_x relative to the data point x are converted into a feature vector f_in of size 1 × c, where c is the number of channels of the convolution kernel; x_in and f_in are input into the graph convolution network, and the output global feature vector is denoted f_out;
the graph convolution network consists of a rectified linear unit (ReLU) activation function and a graph convolution G-conv, wherein the ReLU activation function maps the input of each neuron in the graph convolution network to the output; the graph convolution G-conv uses the feature value of a data point x at layer τ to compute its feature value at layer τ + 1:
$$f_x^{\tau+1} = w_0 f_x^{\tau} + w_1 \sum_{y \in N_x} f_y^{\tau}$$
where f_x^τ is the feature value of the data point x at layer τ, f_x^{τ+1} is the feature value of the data point x at layer τ + 1, and w_0 and w_1 are both learnable parameters, weighting respectively the feature value of the data point x at layer τ and the feature values of its neighborhood;
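A PyTorch sketch of one such G-conv layer follows; the sum aggregation over N_x mirrors the reconstructed formula above but is an assumption of this sketch, since the claim fixes only the two learnable weights and the ReLU:

```python
# One graph convolution (G-conv) layer sketch: w0 weights the point's own
# feature, w1 weights the aggregated features of its neighborhood N_x.
import torch
import torch.nn as nn

class GConv(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.w0 = nn.Linear(c_in, c_out, bias=False)  # self weight w0
        self.w1 = nn.Linear(c_in, c_out, bias=False)  # neighborhood weight w1

    def forward(self, f, neighbors):
        """f: (N, c_in) point features; neighbors: (N, n) index tensor."""
        f_neigh = f[neighbors].sum(dim=1)             # aggregate over N_x
        return torch.relu(self.w0(f) + self.w1(f_neigh))
```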
module M2.1.2: predicting the coarse point cloud with a deformation-based decoder: the decoder uses multilayer perceptrons to learn the mapping from the 2D unit square to 3D surfaces, thereby simulating the deformation of the 2D square into a 3D surface and generating K surface elements S_i, i = 1, 2, …, K, which together form a coarse point cloud carrying complex shape information;
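One surface element of such a decoder might look as follows; the layer widths and the uniform sampling of the unit square are illustrative assumptions of this sketch:

```python
# Morphing-based decoder sketch: an MLP maps 2D unit-square samples plus
# the global feature vector f_out to 3D points of one surface element S_i.
import torch
import torch.nn as nn

class SurfaceElement(nn.Module):
    def __init__(self, gfv_dim=1024):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 + gfv_dim, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 3))                       # 3D coordinates

    def forward(self, gfv, n_points):
        uv = torch.rand(n_points, 2)                 # 2D unit-square samples
        g = gfv.unsqueeze(0).expand(n_points, -1)    # replicate f_out per point
        return self.mlp(torch.cat([uv, g], dim=1))   # (n_points, 3) points
```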
module M2.1.3: regularizing the coarse point cloud: the 3D points generated by each multilayer perceptron during each forward pass are regarded as a vertex set, and a minimum spanning tree T_i is constructed according to the Euclidean distances between the vertices;
the middle vertex of the diameter of T_i, i.e. of the simple path containing the most vertices, is selected as the root node, and all edges point toward the root, which defines the direction of the minimum spanning tree T_i;
the longer an edge of the minimum spanning tree, the farther apart its two endpoints and the higher the probability that they mix with points on other surface elements; the expansion penalty makes vertices contract along the edges toward the root into a more compact region, finally optimizing the generated point cloud; the formula is:
$$L_{expansion} = \frac{1}{KN} \sum_{1 \le i \le K}\; \sum_{(u,v) \in T_i} \mathbb{1}\{\mathrm{dis}(u,v) \ge \lambda l_i\}\, \mathrm{dis}(u,v)$$
$$l_i = \frac{1}{\lvert T_i \rvert} \sum_{(u,v) \in T_i} \mathrm{dis}(u,v)$$
where dis(u, v) is the Euclidean distance between vertices u and v; N is the number of sampling points in the local region; (u, v) ∈ T_i is a directed edge of the minimum spanning tree T_i with vertices u and v; l_i denotes the average edge length of the minimum spanning tree T_i; $\mathbb{1}\{\mathrm{dis}(u,v) \ge \lambda l_i\}$ is an indicator function used to filter out edges whose length is smaller than λl_i; λ is a hyperparameter that determines the size of the threshold;
since the motion of the points during one forward pass can be regarded as infinitesimal, i.e. the structure of the minimum spanning tree is assumed unchanged, the expansion penalty function L_expansion is differentiable almost everywhere; for each directed edge (u, v) ∈ T_i of the minimum spanning tree T_i, if it is longer than λl_i, a backward gradient is applied only to u, so that u contracts toward v, forming a more compact surface element S_i.
CN202110287374.6A 2021-03-17 2021-03-17 Two-stage dental point cloud completion method and system based on deep learning network Active CN112967219B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110287374.6A CN112967219B (en) 2021-03-17 2021-03-17 Two-stage dental point cloud completion method and system based on deep learning network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110287374.6A CN112967219B (en) 2021-03-17 2021-03-17 Two-stage dental point cloud completion method and system based on deep learning network

Publications (2)

Publication Number Publication Date
CN112967219A true CN112967219A (en) 2021-06-15
CN112967219B CN112967219B (en) 2023-12-05

Family

ID=76279024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110287374.6A Active CN112967219B (en) 2021-03-17 2021-03-17 Two-stage dental point cloud completion method and system based on deep learning network

Country Status (1)

Country Link
CN (1) CN112967219B (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106875386A (en) * 2017-02-13 2017-06-20 苏州江奥光电科技有限公司 A kind of method for carrying out dental health detection automatically using deep learning
CN110998602A (en) * 2017-06-30 2020-04-10 普罗马顿控股有限责任公司 Classification and 3D modeling of 3D dento-maxillofacial structures using deep learning methods
US20190057515A1 (en) * 2017-08-15 2019-02-21 Siemens Healthcare Gmbh Internal Body Marker Prediction From Surface Data In Medical Imaging
KR20190082066A (en) * 2017-12-29 2019-07-09 바이두 온라인 네트웍 테크놀러지 (베이징) 캄파니 리미티드 Method and apparatus for restoring point cloud data
CN112087985A (en) * 2018-05-10 2020-12-15 3M创新有限公司 Simulated orthodontic treatment via real-time enhanced visualization
US20200058156A1 (en) * 2018-08-17 2020-02-20 Nec Laboratories America, Inc. Dense three-dimensional correspondence estimation with multi-level metric learning and hierarchical matching
CN110222580A (en) * 2019-05-09 2019-09-10 中国科学院软件研究所 A kind of manpower 3 d pose estimation method and device based on three-dimensional point cloud
CN110443842A (en) * 2019-07-24 2019-11-12 大连理工大学 Depth map prediction technique based on visual angle fusion
CN111862171A (en) * 2020-08-04 2020-10-30 万申(北京)科技有限公司 CBCT and laser scanning point cloud data tooth registration method based on multi-view fusion
CN112085821A (en) * 2020-08-17 2020-12-15 万申(北京)科技有限公司 Semi-supervised-based CBCT (cone beam computed tomography) and laser scanning point cloud data registration method
CN112120810A (en) * 2020-09-29 2020-12-25 深圳市深图医学影像设备有限公司 Three-dimensional data generation method of tooth orthodontic concealed appliance
CN112200843A (en) * 2020-10-09 2021-01-08 福州大学 CBCT and laser scanning point cloud data tooth registration method based on hyper-voxels
CN112184556A (en) * 2020-10-28 2021-01-05 万申(北京)科技有限公司 Super-resolution imaging method based on oral CBCT (cone beam computed tomography) reconstruction point cloud

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
QIAN MA: "SRF-Net: Spatial Relationship Feature Network for Tooth Point Cloud Classification", COMPUTER GRAPHICS FORUM, vol. 39, no. 7, pages 267-277 *
ZHANG Yaling, YU Zekuan: "GCNN-based tooth segmentation algorithm for CBCT-simulated intraoral-scan point cloud data", Journal of Computer-Aided Design & Computer Graphics
GUO Chuang; DAI Ning; TIAN Sukun; SUN Yuchun; YU Qing; LIU Hao; CHENG Xiaosheng: "Missing tooth morphology design using a high-resolution deep generative network", Journal of Image and Graphics, no. 10

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113610956B (en) * 2021-06-17 2024-05-28 深圳市菲森科技有限公司 Method, device and related equipment for feature matching implant in intraoral scanning
CN113610956A (en) * 2021-06-17 2021-11-05 深圳市菲森科技有限公司 Method and device for characteristic matching of implant in intraoral scanning and related equipment
CN113538261A (en) * 2021-06-21 2021-10-22 昆明理工大学 Shape repairing method for incomplete stalactite point cloud based on deep learning
CN113269152B (en) * 2021-06-25 2022-07-01 北京邮电大学 Non-equidistant discrete depth completion method
CN113269152A (en) * 2021-06-25 2021-08-17 北京邮电大学 Non-equidistant discrete depth completion method
CN113397585B (en) * 2021-07-27 2022-08-05 朱涛 Tooth body model generation method and system based on oral CBCT and oral scan data
CN113397585A (en) * 2021-07-27 2021-09-17 朱涛 Tooth body model generation method and system based on oral CBCT and oral scan data
CN113705631A (en) * 2021-08-10 2021-11-26 重庆邮电大学 3D point cloud target detection method based on graph convolution
CN113705631B (en) * 2021-08-10 2024-01-23 大庆瑞昂环保科技有限公司 3D point cloud target detection method based on graph convolution
CN113808097A (en) * 2021-09-14 2021-12-17 北京主导时代科技有限公司 Method and system for detecting loss of key components of train
CN113808097B (en) * 2021-09-14 2024-04-12 北京主导时代科技有限公司 Method and system for detecting loss of key parts of train
CN113888610A (en) * 2021-10-14 2022-01-04 雅客智慧(北京)科技有限公司 Dental preparation effect evaluation method, detection device and storage medium
CN113888610B (en) * 2021-10-14 2023-11-07 雅客智慧(北京)科技有限公司 Dental preparation effect evaluation method, detection apparatus, and storage medium
CN114092469A (en) * 2021-12-02 2022-02-25 四川大学 Method and device for determining repair area of blade and readable storage medium
TWI799181B (en) * 2022-03-10 2023-04-11 國立臺中科技大學 Method of establishing integrate network model to generate complete 3d point clouds from sparse 3d point clouds and segment parts
CN114897692A (en) * 2022-05-06 2022-08-12 广州紫为云科技有限公司 Handheld device carrying integral point cloud up-sampling algorithm based on zero sample learning
CN114897692B (en) * 2022-05-06 2024-04-26 广州紫为云科技有限公司 Handheld device carrying integral point cloud up-sampling algorithm based on zero sample learning
CN115186005A (en) * 2022-06-16 2022-10-14 上海船舶运输科学研究所有限公司 Working condition division method and system for ship main engine
CN116258835A (en) * 2023-05-04 2023-06-13 武汉大学 Point cloud data three-dimensional reconstruction method and system based on deep learning
CN116863432B (en) * 2023-09-04 2023-12-22 之江实验室 Weak supervision laser travelable region prediction method and system based on deep learning
CN116863432A (en) * 2023-09-04 2023-10-10 之江实验室 Weak supervision laser travelable region prediction method and system based on deep learning
CN116883246A (en) * 2023-09-06 2023-10-13 感跃医疗科技(成都)有限公司 Super-resolution method for CBCT image
CN116883246B (en) * 2023-09-06 2023-11-14 感跃医疗科技(成都)有限公司 Super-resolution method for CBCT image

Also Published As

Publication number Publication date
CN112967219B (en) 2023-12-05

Similar Documents

Publication Publication Date Title
CN112967219A (en) Two-stage dental point cloud completion method and system based on deep learning network
CN109410273B (en) Topogram prediction from surface data in medical imaging
CN111627019B (en) Liver tumor segmentation method and system based on convolutional neural network
US20210322136A1 (en) Automated orthodontic treatment planning using deep learning
US10849585B1 (en) Anomaly detection using parametrized X-ray images
JP2022505587A (en) CT image generation method and its equipment, computer equipment and computer programs
Zanjani et al. Mask-MCNet: tooth instance segmentation in 3D point clouds of intra-oral scans
Azad et al. Transnorm: Transformer provides a strong spatial normalization mechanism for a deep segmentation model
EP3818500A1 (en) Automated determination of a canonical pose of a 3d objects and superimposition of 3d objects using deep learning
EP3874457B1 (en) Three-dimensional shape reconstruction from a topogram in medical imaging
Wang et al. RAR-U-Net: a residual encoder to attention decoder by residual connections framework for spine segmentation under noisy labels
CN112641457A (en) Synthetic parametric computed tomography from surface data in medical imaging
CN112598649B (en) 2D/3D spine CT non-rigid registration method based on generation of countermeasure network
WO2023044605A1 (en) Three-dimensional reconstruction method and apparatus for brain structure in extreme environments, and readable storage medium
CN114792326A (en) Surgical navigation point cloud segmentation and registration method based on structured light
Yin et al. CoT-UNet++: A medical image segmentation method based on contextual Transformer and dense connection
WO2022232559A1 (en) Neural network margin proposal
Tian et al. RGB oralscan video-based orthodontic treatment monitoring
CN114066772A (en) Tooth body point cloud completion method and system based on transform encoder
CN113283373A (en) Method for enhancing detection of limb motion parameters by depth camera
CN116485809B (en) Tooth example segmentation method and system based on self-attention and receptive field adjustment
CN113706684A (en) Three-dimensional blood vessel image reconstruction method, system, medical device and storage medium
CN115100306A (en) Four-dimensional cone-beam CT imaging method and device for pancreatic region
Hu et al. Mpcnet: Improved meshsegnet based on position encoding and channel attention
Hosseinimanesh et al. Improving the quality of dental crown using a transformer-based method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant