CN112085821A - Semi-supervised-based CBCT (cone beam computed tomography) and laser scanning point cloud data registration method - Google Patents

Semi-supervised-based CBCT (cone beam computed tomography) and laser scanning point cloud data registration method

Info

Publication number
CN112085821A
CN112085821A
Authority
CN
China
Prior art keywords
point cloud
cloud data
model
dental crown
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010825031.6A
Other languages
Chinese (zh)
Inventor
于泽宽
陈斌赫
张洁
王俊杰
金冠男
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wanshen Beijing Technology Co Ltd
Original Assignee
Wanshen Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wanshen Beijing Technology Co Ltd filed Critical Wanshen Beijing Technology Co Ltd
Priority to CN202010825031.6A priority Critical patent/CN112085821A/en
Publication of CN112085821A publication Critical patent/CN112085821A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T15/00 3D [Three Dimensional] image rendering
                    • G06T15/005 General purpose rendering architectures
                • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
                • G06T5/00 Image enhancement or restoration
                    • G06T5/70 Denoising; Smoothing
                • G06T7/00 Image analysis
                    • G06T7/10 Segmentation; Edge detection
                        • G06T7/11 Region-based segmentation
                    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
                        • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
                • G06T2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T2207/10 Image acquisition modality
                        • G06T2207/10028 Range image; Depth image; 3D point clouds
                    • G06T2207/20 Special algorithmic details
                        • G06T2207/20081 Training; Learning
                        • G06T2207/20212 Image combination
                            • G06T2207/20221 Image fusion; Image merging
                    • G06T2207/30 Subject of image; Context of image processing
                        • G06T2207/30004 Biomedical image processing
                            • G06T2207/30036 Dental; Teeth

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)

Abstract

The invention discloses a semi-supervised CBCT and laser scanning point cloud data registration method. The method acquires point cloud data from different sources to obtain complete model point cloud data and dental crown model point cloud data respectively, denoises them, down-samples the denoised point cloud data, registers the point cloud data through a semi-supervised network to obtain complete point cloud data, and finally applies a semi-supervised correction to the complete point cloud data using a constructed loss function. The invention trains in a semi-supervised or unsupervised manner and solves the registration problem by minimizing the projection error in the feature space, without searching for correspondences. The method achieves better accuracy than traditional registration methods, advanced feature-learning methods and deep-learning registration methods, and can handle obvious noise, density differences, partially overlapping point clouds and the like.

Description

Semi-supervised-based CBCT (cone beam computed tomography) and laser scanning point cloud data registration method
Technical Field
The invention relates to the technical field of medical image processing, in particular to a CBCT and laser scanning point cloud data registration method based on semi-supervision.
Background
In the field of oral medicine, orthodontic treatment requires a complete three-dimensional dental model for diagnosis, treatment planning and monitoring of tooth movement; a complete and accurate three-dimensional dental model is likewise a necessary prerequisite for dental implant or orthognathic surgical planning, and the three-dimensional model lets a doctor understand the three-dimensional anatomy of a patient's lesion site more deeply. There are two ways to obtain a three-dimensional tooth model of the oral cavity: reconstruction from Cone Beam Computed Tomography (CBCT) images, and direct intraoral scanning of the patient or optical scanning of a plaster model. A CBCT image quickly provides the anatomical structure of the patient's oral cavity, including the three visual planes (sagittal, coronal and transverse), and constructing a complete three-dimensional tooth model from CBCT images is a common visualization means in existing digital orthodontic software. With the development of optical equipment, many researchers directly acquire a triangular mesh model of the three-dimensional teeth using point laser beams, line laser beams or image-sensor CCD (charge coupled device) technology. The digital virtual three-dimensional tooth model acquired in this way has higher precision, but the scan captures only surface information, namely the surfaces of the dental crowns and gums, and cannot provide complete tooth data including the roots. Fusion registration of CBCT and intraoral laser-scanned point cloud data is therefore necessary to construct a complete three-dimensional tooth model that can assist tooth implantation and orthodontic treatment.
Point cloud registration is the process of transforming different scans of the same three-dimensional scene or object into a common coordinate system, and it is essential to many tasks such as robot vision and augmented reality. Existing point cloud registration methods mainly fall into the following types: registration based on deep learning, registration based on improved iterative closest point (ICP), registration based on super-voxels, and registration based on multi-view fusion. Among these, the Iterative Closest Point (ICP) algorithm introduced in the early 1990s is the best-known algorithm for efficiently registering two- or three-dimensional point sets under a Euclidean (rigid) transformation, and its concept is simple and intuitive. Although ICP minimizes an objective function that measures alignment, the non-convexity of the problem means it requires a good initialization and otherwise easily falls into sub-optimal local minima. To address the local-optimum problem of ICP and other difficulties, many improved algorithms based on ICP have been derived; they optimize the removal of mismatched points, the construction of the error-measurement function, the solution of that function, and so on. The super-voxel-based registration method for colored point clouds uses the color information of the points to assist the initial geometric alignment, so that the transformation estimate makes the registration more accurate and robust.
The registration method based on multi-view fusion performs multi-view rendering inside the network with optimizable viewpoints, can be trained jointly with later stages, and integrates the convolutional features across views through soft view pooling; however, it is not suitable for outdoor scenes and cannot perform real-time registration.
Although existing registration methods are continually being optimized, most of them estimate only a single characteristic of the point cloud data and cannot summarize its behaviour and state globally. When the amount of point cloud data is large, estimating the characteristics of all points is time-consuming and imprecise; moreover, these methods are affected by factors such as noise and partial point cloud overlap, leading to poor robustness and low accuracy.
Disclosure of Invention
To provide the complete tooth or dentition model required clinically, the invention proposes a semi-supervised CBCT and laser scanning point cloud data registration method that overcomes the shortcomings of existing point cloud registration techniques.
The invention discloses a CBCT and laser scanning point cloud data registration method based on semi-supervision, which comprises the following steps:
1) acquiring point cloud data of different sources:
extracting a tooth complete model from CBCT data according to a region growing method, converting the tooth complete model into complete model point cloud data, extracting a dental crown three-dimensional model by adopting a laser scanner, and converting the dental crown three-dimensional model into dental crown model point cloud data;
2) denoising the complete model point cloud data and the dental crown model point cloud data respectively by using a statistical outlier elimination filter to obtain denoised complete model point cloud data and dental crown model point cloud data;
3) respectively carrying out down-sampling on the denoised complete model point cloud data and the dental crown model point cloud data to reduce the point cloud data volume;
4) point cloud data registration is carried out through a semi-supervised network:
a) performing initial feature extraction on the point cloud data:
respectively embedding the down-sampled complete model point cloud data and dental crown model point cloud data point by point into a high-dimensional space through the deep learning network DGCNN, searching for matching point pairs between the complete model point cloud data and the dental crown model point cloud data, and extracting the features of each matching point pair so as to generate a mapping relation m(·), perform a rigid transformation and generate a global feature vector, wherein the global feature vector contains the local neighborhood information of the feature points;
b) encoding the point cloud data:
converting the complete model point cloud data and dental crown model point cloud data embedded in the high-dimensional space into secondary images through an encoder, and extracting features containing rotation information from the secondary images so that the rotation difference in the transformation-estimation process can be reflected; compared with the PointNet network, which extracts features with two MLP layers and one max-pooling layer, the method abandons the input-transform and feature-transform layers so that the features retain the rotation difference of the point clouds;
c) decoding the features:
training a decoder without supervision, and restoring the global feature vectors generated from the complete model point cloud data and the dental crown model point cloud data, together with the features containing rotation information, to the corresponding point cloud data; this helps the encoder module learn to generate unique features that are aware of the rotation difference;
d) and (3) registering by using the features to realize fusion:
by minimizing the feature-metric projection error, the complete model point cloud data and the dental crown model point cloud data are registered using the global feature vector and the features containing rotation information, so that the two are fused into complete point cloud data and the registration is optimized;
5) constructing a loss function:
respectively constructing a chamfering loss function and a geometric loss function, and overlapping the chamfering loss function and the geometric loss function to obtain a loss function for semi-supervised correction;
6) and carrying out semi-supervised correction on the complete point cloud data in the step 4) by using the constructed loss function.
In step 1), the tooth complete model extracted from the CBCT data includes a crown and a root part, and is a complete tooth model, but the resolution is low. The laser scanning provides a crown three-dimensional model with high resolution, but only provides crown three-dimensional information and mucosal surface information.
In the step 2), denoising the point cloud data by using a statistical outlier elimination filter, comprising the following steps:
a) calculating the average distance d from each data point in the point cloud data to the nearest k neighborhood points by adopting a k nearest neighbor algorithm, wherein k is the number of the selected neighborhood points and is set to be 1-10;
b) calculating an expected value dm and a standard deviation s of the average distance d;
c) computing the distance threshold dt from the expected value dm and the standard deviation s:
dt=dm+λ×s
wherein lambda is a standard deviation parameter, and the value of lambda is 0.1-0.3;
d) and comparing the average distance d of each characteristic point with a distance threshold dt, if d > dt, filtering the point, and otherwise, keeping the point.
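As a concrete sketch of steps a) to d), the statistical outlier elimination filter can be written in a few lines of NumPy; the neighborhood size k and the parameter λ below are illustrative values taken from the stated ranges, and the brute-force distance matrix is only suitable for small clouds:

```python
import numpy as np

def statistical_outlier_removal(points, k=5, lam=0.2):
    """Drop a point when its mean distance d to its k nearest neighbours
    exceeds the threshold dt = dm + lambda * s (steps a)-d))."""
    diff = points[:, None, :] - points[None, :, :]   # pairwise differences
    dist = np.sqrt((diff ** 2).sum(-1))              # (N, N) distance matrix
    # sort each row; column 0 is the zero distance to the point itself, skip it
    knn = np.sort(dist, axis=1)[:, 1:k + 1]
    d = knn.mean(axis=1)                             # mean k-NN distance per point
    dm, s = d.mean(), d.std()                        # expected value and std dev
    dt = dm + lam * s                                # distance threshold
    return points[d <= dt]

# a tight cluster plus one far-away outlier
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0, 0.01, (50, 3)), [[5.0, 5.0, 5.0]]])
filtered = statistical_outlier_removal(cloud, k=5, lam=0.2)
print(len(cloud), "->", len(filtered))   # 51 -> 50: the outlier is removed
```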
In the step 3), the denoised complete model point cloud data and crown model point cloud data are respectively subjected to down-sampling, and the method comprises the following steps:
(1) search space partitioning of point cloud data
Determine the spatial extent of the complete model point cloud data and the dental crown model point cloud data: obtain the minimum and maximum values along the X, Y and Z coordinate axes, namely Min_x, Max_x, Min_y, Max_y, Min_z and Max_z, and construct the maximum spatial range of the tooth point cloud [Min_x, Max_x] × [Min_y, Max_y] × [Min_z, Max_z]; partition this maximum range of the tooth point cloud to obtain the maximum bounding box L of the tooth point cloud:
Figure BDA0002635836650000031
wherein β is a size factor for adjusting the maximum tooth bounding box, with a value of 0.5-0.8, and k is the number of selected neighborhood points, set to 1-10;
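For illustration, the axis-aligned extent computed in step (1) can drive a simple cubic-cell space partition. The patent's exact bounding-box formula for L survives only as an image in the source, so the sketch below uses a plain voxel grid as a stand-in for the search-space partition:

```python
import numpy as np

def voxel_indices(points, cell):
    """Assign each point to a cubic cell of side `cell` inside the cloud's
    axis-aligned bounding box [Min_x, Max_x] x [Min_y, Max_y] x [Min_z, Max_z].
    A stand-in for the space partition of step (1); the patent's formula
    for L is not reproduced here."""
    mins = points.min(axis=0)                       # Min_x, Min_y, Min_z
    return np.floor((points - mins) / cell).astype(int)

pts = np.array([[0.0, 0.0, 0.0], [0.9, 0.0, 0.0], [2.1, 0.0, 0.0]])
print(voxel_indices(pts, cell=1.0))   # [[0 0 0] [0 0 0] [2 0 0]]
```

Points sharing a cell index can then be searched together, which is what makes neighbor queries over large dental clouds tractable.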
(2) normal vector estimation of point cloud data
taking the normal vector of each data point of the complete model point cloud data and the dental crown model point cloud data within the maximum bounding box L as the normal vector of its fitted tangent plane, and computing the fitted tangent plane of each data point, thereby completing the plane fitting of the local area around each data point;
(3) curvature estimation of point cloud data
a) Curvature estimation is performed on the complete model point cloud data and the dental crown model point cloud data within the maximum bounding box L using a paraboloid-fitting algorithm; the principal curvatures of the fitted surface of each point cloud are taken to be those of the paraboloid, whose equation is:
Z = bx² + cxy + dy²
b) let vertex x of parabolic equationlEstablishing a coordinate system of the local coordinate system and k adjacent points thereof for enabling x to be xlThe normal vector and the Z axis are merged, and the rotation is transformed into a local coordinate system of k nearest points to form a linear equation system:
AX=Z
wherein

A = [x_1² x_1y_1 y_1²; … ; x_k² x_ky_k y_k²], X = [b c d]^T, Z = [z_1 … z_k]^T
c) solve AX = Z using singular value decomposition (SVD) to obtain the coefficients b, c and d, and hence the paraboloid equation;
(4) calculating the average curvature H of the complete model point cloud data and the dental crown model point cloud data by using the b, c and d coefficients:
H=b+d
Compare the average curvature of the complete model point cloud data and the dental crown model point cloud data with set thresholds, dividing the crown and root regions of the point cloud into peaks, valleys and ridges. Peaks correspond to the cusp features of the crown surface, with average curvature H > 0 (the local area is convex); valleys correspond to the grooves of the crown surface, with average curvature H < 0 (the local area is concave); ridges correspond to the various ridge lines on the tooth body, whose concave-convex shape is determined from the curvature of adjacent points. Set the peak threshold to 30-50 and filter out points whose peak average curvature lies below this threshold range; set the valley threshold interval to -50 to -30 and filter out points whose valley average curvature lies above this threshold range.
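Steps a) to c) and the mean-curvature computation of step (4) amount to a linear least-squares fit of the paraboloid coefficients, which an SVD-based solver handles directly. A minimal sketch, assuming the neighbor points are already expressed in the local frame of step b):

```python
import numpy as np

def fit_paraboloid(local_xyz):
    """Least-squares fit of z = b*x^2 + c*x*y + d*y^2 in the local frame,
    solving A X = Z via SVD-based least squares (step c));
    the mean curvature is H = b + d (step (4))."""
    x, y, z = local_xyz[:, 0], local_xyz[:, 1], local_xyz[:, 2]
    A = np.column_stack([x * x, x * y, y * y])   # rows [x_i^2, x_i*y_i, y_i^2]
    X, *_ = np.linalg.lstsq(A, z, rcond=None)    # uses SVD internally
    b, c, d = X
    return b, c, d, b + d                        # coefficients and mean curvature H

# synthetic neighbourhood sampled (noise-free) from z = 2x^2 + 0.5xy + 3y^2
rng = np.random.default_rng(1)
xy = rng.normal(0, 1, (30, 2))
z = 2 * xy[:, 0] ** 2 + 0.5 * xy[:, 0] * xy[:, 1] + 3 * xy[:, 1] ** 2
b, c, d, H = fit_paraboloid(np.column_stack([xy, z]))
print(round(b, 6), round(c, 6), round(d, 6), round(H, 6))  # 2.0 0.5 3.0 5.0
```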
In step (2), denote the k neighbor points of a data point x by NB(x) and its normal vector by n. Take n as the normal vector of the fitted tangent plane of x and obtain the fitted tangent plane TP(x) by the least-squares method: compute the covariance matrix of the k neighbor points NB(x), and from it calculate the normal vectors corresponding to the k neighbors of x. Then unify the directions of the resulting tangent-plane normal vectors so that the normals of adjacent planes remain consistent, completing the plane fitting of the local area of every data point of the complete model point cloud data and the dental crown model point cloud data.
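The covariance-based tangent-plane estimate described above is equivalent to taking the eigenvector of the neighborhood covariance matrix with the smallest eigenvalue as the normal. A minimal sketch (sign disambiguation between adjacent normals is omitted):

```python
import numpy as np

def estimate_normal(neighbours):
    """Normal of the least-squares fitted tangent plane of a neighbourhood
    NB(x): the eigenvector of the 3x3 covariance matrix with the smallest
    eigenvalue, as described for step (2)."""
    centred = neighbours - neighbours.mean(axis=0)
    cov = centred.T @ centred / len(neighbours)   # covariance of NB(x)
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues ascending
    n = eigvecs[:, 0]                             # smallest-eigenvalue direction
    return n / np.linalg.norm(n)

# points on the plane z = 0, so the normal should be +-(0, 0, 1)
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [0.5, 0.5, 0]], float)
n = estimate_normal(pts)
print(np.round(np.abs(n), 6))   # [0. 0. 1.]
```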
In step 5), the chamfer loss function loss_cf is:
Figure BDA0002635836650000051
wherein A is the unit cube [0,1]³, p ∈ A is a point in the set of points sampled from the unit cube, x is the feature containing rotation information, φ_θi is the ith element of the multilayer perceptron model MLP, S is the originally input dental crown model point cloud data and complete model point cloud data, and j is a point sampled from the originally input point cloud;
the geometric loss function loss_pe is:

loss_pe = (1/M) Σ_{i=1..M} ‖ f(g_est, P_i) − f(g_gt, P_i) ‖²

wherein g_est is the transformation matrix estimated by the minimization, g_gt is the ground-truth transformation matrix, P is random point cloud data, and M is the total number of points; f(g_est, P) is the point cloud obtained by transforming the input dental crown model point cloud data and complete model point cloud data with the estimated transformation matrix, and f(g_gt, P) is the point cloud obtained by transforming them with the ground-truth transformation matrix;
the loss function for semi-supervised training is the superposition of the chamfer loss function and the geometric loss function:
loss = loss_cf + loss_pe
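The two loss terms can be sketched numerically; the snippet below uses a standard symmetric chamfer distance and 4×4 homogeneous matrices for g_est and g_gt, since the patent's exact formulas appear only as images in the source:

```python
import numpy as np

def chamfer(S1, S2):
    """Symmetric chamfer distance between two point sets -- a sketch of
    loss_cf between the decoder output and the input cloud."""
    d = np.linalg.norm(S1[:, None, :] - S2[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def geometric_loss(g_est, g_gt, P):
    """Mean squared distance between P transformed by the estimated and by
    the ground-truth rigid transforms (4x4 homogeneous) -- a sketch of loss_pe."""
    Ph = np.hstack([P, np.ones((len(P), 1))])    # homogeneous coordinates
    diff = Ph @ g_est.T - Ph @ g_gt.T
    return (diff[:, :3] ** 2).sum(axis=1).mean()

P = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
I = np.eye(4)
print(chamfer(P, P))             # 0.0 for identical clouds
print(geometric_loss(I, I, P))   # 0.0 when both transforms agree
```

The total training loss is then simply the sum of the two terms, matching loss = loss_cf + loss_pe.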
the invention has the advantages that:
the invention uses semi-supervised or unsupervised training to solve the registration problem by minimizing the projection error in the feature space without searching for the corresponding terms. The method has better precision than the traditional registration method, the advanced feature learning method and the deep learning registration method. Obvious noise, density difference, partial overlapping of point clouds and the like can be processed.
Drawings
FIG. 1 is a flow chart of a semi-supervised based CBCT and laser scanning point cloud data registration method of the present invention;
fig. 2 is a tooth complete model and a crown three-dimensional model reconstructed by a CBCT and laser scanning method.
Detailed Description
The invention will be further elucidated by means of specific embodiments in the following with reference to the drawing.
As shown in fig. 1, the semi-supervised CBCT and laser scanning point cloud data registration method of this embodiment includes the following steps:
1) acquiring point cloud data of different sources:
a tooth complete model is extracted from the CBCT data according to a region growing method, the tooth complete model is converted into complete model point cloud data, a crown three-dimensional model is extracted by a laser scanner, and the crown three-dimensional model is converted into crown model point cloud data, as shown in fig. 2, the tooth complete model and the crown three-dimensional model are reconstructed based on the CBCT and the laser scanning method.
2) Denoise the complete model point cloud data and the dental crown model point cloud data with a statistical outlier elimination filter to obtain denoised complete model point cloud data and dental crown model point cloud data.
3) Down-sample the denoised complete model point cloud data and dental crown model point cloud data respectively to reduce the amount of point cloud data.
4) Point cloud data registration is carried out through a semi-supervised network:
a) performing initial feature extraction on the point cloud data:
respectively embedding the down-sampled complete model point cloud data and dental crown model point cloud data point by point into a high-dimensional space through the deep learning network DGCNN, searching for matching point pairs between the complete model point cloud data and the dental crown model point cloud data, and extracting the features of each matching point pair so as to generate a mapping relation m(·), perform a rigid transformation and generate a global feature vector, wherein the global feature vector contains the local neighborhood information of the feature points;
Figure BDA0002635836650000061
wherein L represents the number of network layers, Φ^l(P) and Φ^l(Q) denote the embeddings at layer l of the M points of the complete model point cloud data and the N points of the dental crown model point cloud data, and M and N are respectively the total numbers of points in the down-sampled complete model point cloud data and dental crown model point cloud data.
b) Encoding the point cloud data:
the purpose of the encoder module is to learn a feature extraction function F, which can generate a unique feature for the input point cloud. The main principle of encoder network design is that the generated features must take into account the point cloud rotation factor so as to reflect the rotation difference in the estimation process of point cloud transform. The method comprises the steps of converting complete model point cloud data and crown model point cloud data embedded into a high-dimensional space into secondary images through an encoder, extracting characteristics containing rotation information from the secondary images, and reflecting rotation difference in the conversion estimation process.
c) Decoding the features:
and training by adopting a decoder under the unsupervised condition, and respectively restoring the global feature vector generated by the complete model point cloud data and the dental crown model point cloud data and the features containing the rotation information to the corresponding point cloud data. After the encoding module generates the unique features, the features are restored into the point cloud data using the decoder module. Such a decoder branch can be trained unsupervised, which helps the encoder module learn to generate unique features knowing the rotation difference. For two rotated copies of a point cloud, PC1 and PC2, the principle of this branch is that the encoder generates different features for P1 and P2, and the decoder can restore the different features back to the corresponding rotated point cloud copies. The decoder block consists of four fully-connected layers, activated by the LeakyReLU function. The output of the decoder module is the same dimension as the input point cloud.
d) Constructing a feature metric for registration:
By minimizing the feature-metric projection error, the complete model point cloud data and the dental crown model point cloud data are registered using the global feature vector and the features containing rotation information, so that the two point clouds are fused into complete point cloud data and the registration is optimized. The transformation parameters are estimated with an inverse compositional algorithm (non-linear optimization) that minimizes the feature-metric projection error, defined as:
r = ‖ F(P) − F(g·Q) ‖²

wherein P and Q are respectively the complete model point cloud data and the dental crown model point cloud data, F is the feature extraction function learned by the encoder module, F(P) and F(g·Q) are the K-dimensional global features of the point cloud data (P or g·Q), F is a non-linear function, K is the feature dimension, and g is the transformation matrix.
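The feature-metric projection error can be evaluated with any global feature extractor; the sketch below uses a toy permutation-invariant function F as a stand-in for the learned encoder, which is an assumption for illustration only:

```python
import numpy as np

def feature_metric_error(F, P, Q, g):
    """Feature-metric projection error r = ||F(P) - F(g * Q)||^2 for a
    rigid transform g given as a 4x4 homogeneous matrix."""
    Qh = np.hstack([Q, np.ones((len(Q), 1))])
    Qg = (Qh @ g.T)[:, :3]          # Q transformed by g
    diff = F(P) - F(Qg)
    return float((diff ** 2).sum())

# toy feature: mean and max over points (permutation invariant)
toy_F = lambda pts: np.concatenate([pts.mean(axis=0), pts.max(axis=0)])

P = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
print(feature_metric_error(toy_F, P, P, np.eye(4)))   # 0.0 at the true alignment
```

Minimizing r over g (as the inverse compositional optimizer does) drives the transformed crown cloud toward the complete-model cloud in feature space, without point correspondences.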
5) Constructing a loss function:
the chamfer loss function is:
Figure BDA0002635836650000074
wherein A is the unit cube [0,1]³, p ∈ A is a point in the set of points sampled from the unit cube, x is the feature containing rotation information, φ_θi is the ith element of the multilayer perceptron model MLP, S is the originally input point cloud data, and j is a point sampled from the originally input point cloud;
the geometric loss function is:

loss_pe = (1/M) Σ_{i=1..M} ‖ f(g_est, P_i) − f(g_gt, P_i) ‖²

wherein g_est is the transformation matrix estimated by the minimization, g_gt is the ground-truth transformation matrix, P is random point cloud data, and M is the total number of points; f(g_est, P) is the point cloud obtained by transforming the input dental crown model point cloud data and complete model point cloud data with the estimated transformation matrix, and f(g_gt, P) is the point cloud obtained by transforming them with the ground-truth transformation matrix;
the loss function for semi-supervised training is the superposition of the chamfer loss function and the geometric loss function:
loss = loss_cf + loss_pe
6) and carrying out semi-supervised correction on the complete point cloud data in the step 4) by using the constructed loss function.
Finally, it should be noted that the disclosed embodiments are intended to aid further understanding of the invention, but those skilled in the art will appreciate that various substitutions and modifications are possible without departing from the spirit and scope of the invention and the appended claims. Therefore, the invention should not be limited to the disclosed embodiments; the scope of the invention is defined by the appended claims.

Claims (6)

1. A semi-supervised CBCT and laser scanning point cloud data registration method is characterized by comprising the following steps:
1) acquiring point cloud data of different sources:
extracting a tooth complete model from CBCT data according to a region growing method, converting the tooth complete model into complete model point cloud data, extracting a dental crown three-dimensional model by adopting a laser scanner, and converting the dental crown three-dimensional model into dental crown model point cloud data;
2) denoising the complete model point cloud data and the dental crown model point cloud data respectively by using a statistical outlier elimination filter to obtain denoised complete model point cloud data and dental crown model point cloud data;
3) respectively carrying out down-sampling on the denoised complete model point cloud data and the dental crown model point cloud data to reduce the point cloud data volume;
4) point cloud data registration is carried out through a semi-supervised network:
a) performing initial feature extraction on the point cloud data:
respectively embedding the point cloud data of the complete model and the point cloud data of the dental crown model after down sampling into a high-dimensional space point by point through a deep learning network DGCNN, searching matching point pairs of the complete model point cloud data and the dental crown model point cloud data, and searching the characteristics of each matching point pair on the point cloud data so as to generate a mapping relation, carrying out rigid conversion and generating a global feature vector, wherein the global feature vector contains local neighborhood information of the feature points;
b) encoding the point cloud data:
converting the complete model point cloud data and the dental crown model point cloud data which are embedded into a high-dimensional space into a secondary image through an encoder, and extracting the characteristics containing rotation information from the secondary image;
c) decoding the features:
training without supervision by adopting a decoder, and respectively restoring the global feature vector generated by the complete model point cloud data and the dental crown model point cloud data and the features containing the rotation information to the corresponding point cloud data;
d) and (3) registering by using the features to realize fusion:
minimizing the feature projection error, and registering the complete model point cloud data and the dental crown model point cloud data by using the global feature vector and the features containing rotation information, so that the complete model point cloud data and the dental crown model point cloud data are fused into complete point cloud data;
5) constructing a loss function:
constructing a chamfer loss function and a geometric loss function respectively, and summing the two to obtain the loss function for semi-supervised correction;
6) and carrying out semi-supervised correction on the complete point cloud data obtained in the step 4) by using the constructed loss function.
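As an illustrative aside (not part of the claims), the DGCNN-style point-by-point embedding referred to in step 4a) starts from edge features built over each point's k nearest neighbors. A minimal numpy sketch of that edge-feature construction, with a hypothetical function name and raw 3-D coordinates standing in for learned per-layer features:

```python
import numpy as np

def edge_features(points, k=4):
    """DGCNN-style EdgeConv input: for each point x_i, concatenate
    [x_i, x_j - x_i] over its k nearest neighbors. Here applied to raw
    3-D coordinates; in the network the same construction is applied
    per layer to learned feature vectors."""
    # Brute-force pairwise distances (a KD-tree would be used at scale).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)                        # exclude self
    idx = np.argsort(d, axis=1)[:, :k]                 # k-NN indices, (N, k)
    neigh = points[idx]                                # (N, k, 3)
    center = np.repeat(points[:, None, :], k, axis=1)  # (N, k, 3)
    return np.concatenate([center, neigh - center], axis=2)  # (N, k, 6)
```

Each of the N points yields k edge vectors of dimension 6; a shared MLP over these, followed by neighborhood max-pooling, gives the per-point embedding.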
2. The semi-supervised based CBCT and laser scanning point cloud data registration method of claim 1, wherein in the step 2), the point cloud data is denoised by using a statistical outlier elimination filter, comprising the following steps:
a) calculating the average distance d from each data point in the point cloud data to the nearest k neighborhood points by adopting a k nearest neighbor algorithm, wherein k is the number of the selected neighborhood points;
b) calculating an expected value dm and a standard deviation s of the average distance d;
c) calculating the distance threshold dt from the expected value dm and the standard deviation s:
dt = dm + λ × s
wherein λ is a standard deviation parameter, taking a value of 0.1-0.3;
d) comparing the average distance d of each data point with the distance threshold dt; if d > dt, filtering out the point, otherwise keeping it.
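The outlier-elimination steps a)-d) of this claim can be sketched as follows (illustrative Python with a hypothetical function name; a brute-force distance matrix stands in for the k-nearest-neighbor search):

```python
import numpy as np

def statistical_outlier_removal(points, k=8, lam=0.2):
    """Keep points whose mean k-NN distance d satisfies d <= dt,
    where dt = dm + lam * s (claim 2 steps a)-d)).

    points: (N, 3) array; k: neighborhood size; lam: standard deviation
    parameter (the claim suggests 0.1-0.3)."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    np.fill_diagonal(dist, np.inf)         # exclude each point itself
    knn = np.sort(dist, axis=1)[:, :k]     # k nearest-neighbor distances
    d = knn.mean(axis=1)                   # step a): mean distance per point
    dm, s = d.mean(), d.std()              # step b): expectation and std dev
    dt = dm + lam * s                      # step c): distance threshold
    return points[d <= dt]                 # step d): filter d > dt
```

A point far from the cluster inflates its own mean neighbor distance well past dt and is dropped, while cluster points survive.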
3. The semi-supervised CBCT and laser scanning point cloud data registration method as claimed in claim 1, wherein in step 3), the denoised full model point cloud data and crown model point cloud data are respectively downsampled, comprising the steps of:
(1) search space partitioning of point cloud data
Determining the spatial extent of the complete model point cloud data and the dental crown model point cloud data: the minimum and maximum values along the X, Y and Z coordinate axes are Min_x, Max_x, Min_y, Max_y, Min_z and Max_z respectively; constructing the maximum spatial range of the tooth point cloud [Min_x, Max_x] × [Min_y, Max_y] × [Min_z, Max_z], and spatially partitioning this range to obtain the maximum bounding box L of the tooth point cloud:
[formula for L reproduced only as an image (FDA0002635836640000021) in the original filing; it expresses L in terms of the extents above, the size factor β and the neighborhood count k]
wherein β is a size factor for adjusting the maximum tooth bounding box, taking a value of 0.5-0.8, and k is the number of selected neighborhood points, set to 1-10;
(2) normal vector estimation of point cloud data
taking the normal vector of each data point of the complete model point cloud data and the dental crown model point cloud data within the maximum bounding box L as the normal vector of its fitting tangent plane, calculating the fitting tangent plane of each data point, and thereby completing the plane fitting of the local area around each data point of the complete model point cloud data and the dental crown model point cloud data;
(3) curvature estimation of point cloud data
a) performing curvature estimation on the complete model point cloud data and the dental crown model point cloud data within the maximum bounding box L using a paraboloid fitting algorithm, wherein the principal curvatures of the fitted surface of the complete model point cloud data and of the dental crown model point cloud data are taken to be the same as the principal curvatures of the paraboloid, whose equation is:
z = bx² + cxy + dy²
b) taking the vertex x_l of the paraboloid, establishing a local coordinate system from x_l and its k nearest neighbor points such that the normal vector of x_l coincides with the Z axis, and rotating the k nearest neighbor points into this local coordinate system, forming the linear equation system:
AX=Z
wherein A is the k × 3 matrix whose i-th row contains the terms x_i², x_iy_i and y_i² of the i-th neighbor point expressed in the local coordinate system:
A = [x₁² x₁y₁ y₁²; x₂² x₂y₂ y₂²; … ; x_k² x_ky_k y_k²]
X = [b c d]^T, Z = [z₁ … z_k]^T;
c) solving AX = Z by singular value decomposition to obtain the coefficients b, c and d, thereby obtaining the paraboloid equation;
(4) calculating the average curvature H of the complete model point cloud data and the dental crown model point cloud data from the fitted coefficients:
H=b+d
comparing the average curvature of the complete model point cloud data and the dental crown model point cloud data with set thresholds, and classifying the crown and root regions of the point cloud data into peaks, valleys and ridges: peaks correspond to the cusp features of the crown surface of the tooth, with average curvature H > 0 (i.e. the local area is convex); valleys correspond to the grooves of the crown surface, with average curvature H < 0 (i.e. the local area is concave); ridges correspond to the various ridge lines on the tooth, their concavity or convexity being determined from the curvature of adjacent points. Setting the peak threshold to 30-50 and filtering out point cloud data whose peak average curvature falls below the peak threshold range; setting the valley threshold interval to -50 to -30 and filtering out point cloud data whose valley average curvature is above the valley threshold range.
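Steps (3)-(4) — fitting z = bx² + cxy + dy² to the neighbors in the local frame by least squares and reading off H = b + d — can be sketched as follows (illustrative Python, hypothetical function name; np.linalg.lstsq solves AX = Z via SVD, matching step c)):

```python
import numpy as np

def fit_paraboloid_curvature(local_pts):
    """Given k neighbor points (x_i, y_i, z_i) already expressed in the
    local frame of a data point (normal aligned with the Z axis), solve
    A X = Z for X = [b, c, d]^T of z = b*x^2 + c*x*y + d*y^2 and return
    the mean curvature H = b + d."""
    x, y, z = local_pts[:, 0], local_pts[:, 1], local_pts[:, 2]
    A = np.column_stack([x * x, x * y, y * y])   # k x 3 design matrix
    X, *_ = np.linalg.lstsq(A, z, rcond=None)    # SVD-based least squares
    b, c, d = X
    return b + d                                 # mean curvature H
```

For a paraboloid z = bx² + cxy + dy², the mean curvature at the vertex is (z_xx + z_yy)/2 = b + d, which is what the claim's formula H = b + d states.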
4. The semi-supervised based CBCT and laser scanning point cloud data registration method of claim 3, wherein in the step (2), the k neighboring points of a data point x are denoted Nb(x) and n is the normal vector of the data point x; the normal vector n is taken as the normal vector of the fitting tangent plane of the data point, the fitting tangent plane Tp(x) is obtained by the least squares method, the covariance matrix of the k neighboring points Nb(x) is calculated, and the normal vectors corresponding to the k neighboring points of the data point x are computed from the covariance matrix of Nb(x); the directions of the obtained tangent plane normal vectors are then unified so that the normal vectors of adjacent planes are consistent, completing the plane fitting of the local area around each data point of the complete model point cloud data and the dental crown model point cloud data.
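The covariance-based normal estimation of this claim can be sketched as follows (illustrative Python, hypothetical function name; the eigenvector of the neighborhood covariance matrix with the smallest eigenvalue is the least-squares tangent-plane normal):

```python
import numpy as np

def estimate_normal(neighbors):
    """Normal of the least-squares fitting tangent plane of a data point:
    the eigenvector of the covariance matrix of its k neighboring points
    Nb(x) associated with the smallest eigenvalue.

    neighbors: (k, 3) array of the k neighboring points."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / len(neighbors)   # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues ascending
    n = eigvecs[:, 0]                              # least-variance direction
    return n / np.linalg.norm(n)
```

The sign of n is ambiguous, which is exactly why the claim then unifies the directions of adjacent normals.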
5. The semi-supervised based CBCT and laser scanning point cloud data registration method of claim 1, wherein in the step 5), the chamfer loss function loss_cf is:
loss_cf = max{ (1/|A|) Σ_{p∈A} min_{j∈S} ‖φ_θi(p, x) − j‖₂², (1/|S|) Σ_{j∈S} min_{p∈A} ‖φ_θi(p, x) − j‖₂² }
wherein A is the unit cube [0,1]³, p ∈ A is a point sampled from the unit cube, x is the feature containing rotation information, φ_θi is the i-th element of the multilayer perceptron model MLP, S is the originally input dental crown model point cloud data and complete model point cloud data, and j is a point sampled from the originally input point cloud.
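A plain-numpy sketch of a chamfer distance of the kind this claim names (illustrative, hypothetical function name; the symmetric max-of-two-directions form is an assumption in the style of FoldingNet-type decoders, since the claimed formula is reproduced only as an image in the filing):

```python
import numpy as np

def chamfer_distance(S1, S2):
    """Chamfer distance between point sets S1 (N, 3) and S2 (M, 3):
    mean squared distance from each point to its nearest neighbor in
    the other set, taking the larger of the two directions."""
    d = np.linalg.norm(S1[:, None, :] - S2[None, :, :], axis=2)  # N x M
    a_to_b = (d.min(axis=1) ** 2).mean()   # e.g. decoded -> original
    b_to_a = (d.min(axis=0) ** 2).mean()   # e.g. original -> decoded
    return max(a_to_b, b_to_a)
```

In the training loop sketched by the claim, S1 would be the decoder output φ_θi applied to points sampled from the unit cube and S2 the originally input point cloud S.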
6. The semi-supervised based CBCT and laser scanning point cloud data registration method of claim 1, wherein in the step 5), the geometric loss function loss_pe is:
loss_pe = (1/M) Σ_P ‖f(g_est, P) − f(g_gt, P)‖₂²
wherein g_est is the transformation matrix obtained by minimization estimation, g_gt is the standard transformation matrix, P is the random point cloud data, and M is the total number of point cloud data; f(g_est, P) is the point cloud data obtained by transforming the dental crown model point cloud data and the complete model point cloud data with the minimization-estimated transformation matrix, and f(g_gt, P) is the point cloud data obtained by transforming the dental crown model point cloud data and the complete model point cloud data with the standard transformation matrix.
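The geometric loss of this claim — comparing the point cloud under the estimated and standard transformations — can be sketched as follows (illustrative Python, hypothetical function name; representing g_est and g_gt as 4×4 homogeneous matrices is an assumption):

```python
import numpy as np

def geometric_loss(g_est, g_gt, P):
    """loss_pe: mean squared distance between P transformed by the
    estimated rigid transform g_est and by the standard (ground-truth)
    transform g_gt. Both are 4x4 homogeneous matrices; P is (M, 3)."""
    Ph = np.hstack([P, np.ones((len(P), 1))])            # homogeneous coords
    diff = (Ph @ g_est.T)[:, :3] - (Ph @ g_gt.T)[:, :3]  # per-point residual
    return (np.linalg.norm(diff, axis=1) ** 2).mean()
```

The loss is zero exactly when the estimated transform moves every point of P to the same place as the standard transform, which is the condition the semi-supervised correction drives toward.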
CN202010825031.6A 2020-08-17 2020-08-17 Semi-supervised-based CBCT (cone beam computed tomography) and laser scanning point cloud data registration method Pending CN112085821A (en)


Publications (1)

Publication Number Publication Date
CN112085821A true CN112085821A (en) 2020-12-15

Family

ID=73729022


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112967219A (en) * 2021-03-17 2021-06-15 复旦大学附属华山医院 Two-stage dental point cloud completion method and system based on deep learning network
CN113191973A (en) * 2021-04-29 2021-07-30 西北大学 Cultural relic point cloud data denoising method based on unsupervised network framework
CN113397585A (en) * 2021-07-27 2021-09-17 朱涛 Tooth body model generation method and system based on oral CBCT and oral scan data
CN113610956A (en) * 2021-06-17 2021-11-05 深圳市菲森科技有限公司 Method and device for characteristic matching of implant in intraoral scanning and related equipment
CN113657387A (en) * 2021-07-07 2021-11-16 复旦大学 Semi-supervised three-dimensional point cloud semantic segmentation method based on neural network
CN115100258A (en) * 2022-08-29 2022-09-23 杭州三坛医疗科技有限公司 Hip joint image registration method, device, equipment and storage medium
CN115578429A (en) * 2022-11-21 2023-01-06 西北工业大学 Point cloud data-based mold online precision detection method
CN118078471A (en) * 2024-04-29 2024-05-28 南京笑领科技有限公司 Three-dimensional dental crown modeling method, system and application based on artificial intelligence

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105447908A (en) * 2015-12-04 2016-03-30 山东山大华天软件有限公司 Dentition model generation method based on oral cavity scanning data and CBCT (Cone Beam Computed Tomography) data
CN105662608A (en) * 2015-12-18 2016-06-15 北京大学口腔医学院 Three-dimensional data tooth crown and root integrating method
CN107220928A (en) * 2017-05-31 2017-09-29 中国工程物理研究院应用电子学研究所 A kind of tooth CT image pixel datas are converted to the method for 3D printing data


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
NING DAI et al.: "Research and primary evaluation of an automatic fusion method for multisource tooth crown data", International Journal for Numerical Methods in Biomedical Engineering *
XIAOLEI DU et al.: "A Point Cloud Data Reduction Method Based on Curvature", 2009 IEEE 10th International Conference on Computer-Aided Industrial Design & Conceptual Design *
XIAOLING REN et al.: "An innovative segmentation method with multi-feature fusion for 3D point cloud", Journal of Intelligent & Fuzzy Systems *
XIAOSHUI HUANG et al.: "Feature-Metric Registration: A Fast Semi-Supervised Approach for Robust Point Cloud Registration Without Correspondences", Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) *
YUE WANG et al.: "Deep Closest Point: Learning Representations for Point Cloud Registration", 2019 IEEE/CVF International Conference on Computer Vision (ICCV) *
LIU HAO et al.: "CAD Technology and Its Applications, MATLAB Edition", 28 February 2019 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20201215)