CN112200843A - CBCT and laser scanning point cloud data tooth registration method based on hyper-voxels - Google Patents


Info

Publication number
CN112200843A
CN112200843A (application CN202011072708.XA)
Authority
CN
China
Prior art keywords
point cloud
tooth
cloud data
voxel
cbct
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011072708.XA
Other languages
Chinese (zh)
Other versions
CN112200843B (en)
Inventor
何炳蔚
陈斌赫
于泽宽
Current Assignee
Fuzhou University
Original Assignee
Fuzhou University
Priority date
Filing date
Publication date
Application filed by Fuzhou University
Priority to CN202011072708.XA
Publication of CN112200843A
Application granted
Publication of CN112200843B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 7/33 — Image registration using feature-based methods
    • G06T 5/40 — Image enhancement or restoration using histogram techniques
    • G06T 7/90 — Determination of colour characteristics
    • G16H 50/50 — ICT for simulation or modelling of medical disorders
    • G06T 2207/10012 — Stereo images
    • G06T 2207/10024 — Color image
    • G06T 2207/30036 — Dental; teeth
    • Y02P 90/30 — Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Pathology (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)

Abstract

The invention provides a hyper-voxel-based method for registering tooth point cloud data from CBCT and laser scanning, comprising the following steps. First, a tooth model is extracted from the oral CBCT scan data by a region growing method. Second, the tooth model is processed with a laser scanner, and the dental crown in the tooth model is separated from the abutment according to the characteristics of the teeth. The tooth models from the different sources are then converted into tooth point cloud data; color information is added to the point clouds to assist their initial geometric alignment, and the point clouds are down-sampled by a hyper-voxel method. Finally, an alignment metric is constructed and a mutual-correspondence matching condition based on mixed features is proposed, realizing the registration of two pieces of color tooth point cloud data taken from different angles. The invention solves the problem of tooth registration at different resolutions, manually adds tooth color information, improves registration accuracy, saves registration time, and constructs a complete tooth model for implant and orthognathic surgery.

Description

CBCT and laser scanning point cloud data tooth registration method based on hyper-voxels
Technical Field
The invention relates to the technical field of medical image processing, and in particular to a hyper-voxel-based tooth registration method for CBCT and laser scanning point cloud data.
Background
A complete, accurate three-dimensional dental model is an essential prerequisite for dental implant or orthognathic surgical planning. Through a three-dimensional model, the doctor gains a deeper understanding of the spatial position of the patient's lesion. Oral CBCT imaging of a patient captures the anatomy in three viewing planes (sagittal, coronal and transverse) and is the tooth-model visualization means commonly used in existing digital orthodontic software; however, owing to limited precision, slice thickness, metal artifacts and the occlusal state of the patient's teeth during imaging, the upper and lower dental crowns are difficult to separate, so the reconstructed crown surface has low accuracy. Surface optical scanners have been introduced into dental and orthodontic treatment because they directly acquire three-dimensional dental models with higher accuracy, but the scan results contain only surface information of the crown and mucosa; complete dental volume data including the tooth root cannot be obtained. No existing imaging technology can simultaneously acquire and integrate all the anatomical tissue involved in clinical orthodontic practice, so a three-dimensional digital model that is both complete and highly accurate can be obtained only by fusing the two data models, and the registration of the two three-dimensional data models is the key step in generating the complete model.
Registration adjusts the coordinate systems of three-dimensional models of the same object measured by different sources, so that parts belonging to the same structure occupy consistent positions in a common coordinate system. Tooth models based on CBCT and optical scanning are two data models with different resolutions, different data volumes, and interference from many extrinsic factors (jaw, metal artifacts, gums, etc.). In the dental model reconstruction process, registering the two dental images is necessary to achieve a complete and accurate fusion of the teeth. The higher the registration accuracy, the more accurate the positions of the crown and root, the better the registration can guide accurate joint segmentation of the two data sets, and the better the three-dimensional crown-root fusion; hence the coincidence of the optically scanned dentition model and the CT dentition model at the crown determines, to a certain extent, the final quality of crown-root fusion. Current point cloud registration methods fall mainly into the following types: registration based on deep learning, registration based on improved ICP, semi-supervised registration, and registration based on multi-view fusion. Among the many registration methods, the Iterative Closest Point (ICP) algorithm, introduced in the early 1990s, is the best-known algorithm for efficiently registering two- or three-dimensional point sets under a Euclidean (rigid) transformation, and its concept is simple and intuitive. Although ICP minimizes an objective function measuring alignment, it often falls into sub-optimal local extrema because of the non-convexity of the problem; it needs good initialization, otherwise it is easily trapped in a local minimum.
To address ICP's local-optimum problem and other difficulties, many improved algorithms derived from ICP have been proposed. They optimize aspects such as removing mismatched points, constructing the error metric function, and solving the error function.
Most current improved ICP algorithms share some common defects: (1) they emphasize purely geometric features and ignore important color features; (2) manually extracted feature points introduce large human interference, while automatic feature selection makes the algorithm complex and time-inefficient; (3) they address only rigid transformation, without introducing scale parameters, so they cannot register two similar objects of different size and resolution. In recent years, color-based methods have received much attention: by introducing color information, feature matching improves the accuracy of establishing correspondences, especially where the geometric information is insufficient.
Disclosure of Invention
The invention provides a CBCT and laser scanning point cloud data tooth registration method based on hyper-voxels, which can solve the problem of tooth registration with different resolutions, manually add tooth color information, improve the registration accuracy, save the registration time and construct a complete tooth model for implantation and orthognathic surgery.
The invention adopts the following technical scheme.
A hyper-voxel-based CBCT and laser scanning point cloud data tooth registration method comprises the following steps:
firstly, extracting a tooth model from the scanning data of the oral CBCT according to a region growing method;
secondly, processing the tooth model by using a laser scanner, and separating a dental crown in the tooth model from an abutment according to the characteristics of the tooth;
then converting tooth models from different sources into tooth point cloud data, adding color information to the tooth point cloud data to assist initial geometric alignment of the tooth point cloud, and realizing down-sampling of the point cloud based on a hyper-voxel method;
finally, combining the mixed features of the tooth point cloud data and defining similarity measurement according to the mixed features, dynamically adjusting the weight between the color and the space information of the tooth point cloud data, and finally measuring the mixed features of the tooth point cloud data to construct alignment measurement; and proposing a mutual corresponding matching condition based on the mixed characteristics;
the registration method is an improved ICP registration algorithm carried out under an ICP framework of an improved model, and the improved model comprises a point cloud embedding network, an attention-based module and a pointer generation layer for approximate combination matching and a differentiable singular value decomposition layer for extracting final rigid transformation; the registration of two pieces of tooth point cloud data with color information from different angles can be realized.
The tooth models from the different sources are converted into tooth point cloud data as follows.
Step one: the teeth are scanned by oral CBCT, and a three-dimensional tooth model is reconstructed from the scanned CBCT data; in this step the CBCT data are reconstructed by the region growing method to obtain a comprehensive and accurate three-dimensional tooth information model, which is then converted into point cloud data.
Step two: a dental crown model is obtained by laser scanning. The specific steps are as follows:
the dental crown point cloud data are collected with a handheld three-dimensional laser scanner. Before measurement, the equipment is calibrated to ensure the accuracy of the output data; the three-dimensional model is reconstructed using STL as the output file format, and finally the reconstructed three-dimensional model is converted into point cloud data.
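The STL output can be turned into point cloud data by reading the triangle vertices out of the binary STL buffer and de-duplicating them. A minimal sketch (the one-triangle buffer built at the end is synthetic test data, not scanner output):

```python
import struct
import numpy as np

def stl_to_point_cloud(stl_bytes: bytes) -> np.ndarray:
    """Read a binary STL buffer and return its unique vertices as an (N, 3) array."""
    # Binary STL layout: 80-byte header, uint32 triangle count, then 50 bytes
    # per triangle (normal 3f, three vertices 3f each, uint16 attribute).
    (n_tri,) = struct.unpack_from("<I", stl_bytes, 80)
    pts = []
    for i in range(n_tri):
        vals = struct.unpack_from("<12f", stl_bytes, 84 + i * 50)
        pts.extend([vals[3:6], vals[6:9], vals[9:12]])  # skip the normal
    return np.unique(np.array(pts, dtype=np.float32), axis=0)

# Synthetic one-triangle STL buffer, standing in for real scanner output.
tri = struct.pack("<12fH", 0, 0, 1,  0, 0, 0,  1, 0, 0,  0, 1, 0,  0)
buf = b"\x00" * 80 + struct.pack("<I", 1) + tri
cloud = stl_to_point_cloud(buf)
```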
The color information is added by classifying the teeth according to the definitions in dental medical textbooks; the upper and lower jaw teeth are classified by the same method, and the two different point clouds are labeled with 14 different color labels.
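Labeling can be sketched as attaching one of 14 colors to each point according to its tooth class; the palette below is an arbitrary stand-in, since the patent only requires 14 distinct color labels:

```python
import numpy as np

rng = np.random.default_rng(0)
# 14 distinct label colors; the specific RGB values are an arbitrary choice,
# the patent only requires 14 different color labels.
palette = rng.integers(0, 256, size=(14, 3))

def colorize(points: np.ndarray, tooth_labels: np.ndarray) -> np.ndarray:
    """Append the per-class RGB color to each point (labels in 0..13)."""
    return np.hstack([points, palette[tooth_labels]])

pts = rng.random((100, 3))                 # toy point cloud
labels = rng.integers(0, 14, size=100)     # hypothetical per-point tooth classes
colored = colorize(pts, labels)
```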
The hyper-voxel method is a hyper-voxel segmentation method, and specifically comprises the following steps;
step A1, dividing the two dental 3D point cloud data into a plurality of voxels with the same size by using an octree, calculating the average curvature of each voxel, and taking the voxel with the minimum average curvature as a seed voxel to avoid over-segmentation;
each three-dimensional voxel is clustered according to a 39-dimensional vector (formula one):

F = [x, y, z, L, a, b, FPFH_1,...,33]

where x, y and z are the spatial coordinates; L, a and b are the color values in CIELab space; and FPFH_1,...,33 are the 33 elements of the Fast Point Feature Histogram, which extracts the local geometric features of the point cloud. FPFH describes the local surface model with spatial invariance; it is constructed by building, for each extracted tooth point, a neighborhood sphere of radius 0.2, computing angular features along the x, y and z axes, and dividing each into 11 equal histogram bins, finally yielding a 33-dimensional descriptor;
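Assembling formula one is a simple concatenation; the 33-dimensional FPFH part is passed in as a precomputed descriptor (here a zero placeholder standing in for a real Fast Point Feature Histogram):

```python
import numpy as np

def feature_vector(xyz, lab, fpfh33):
    """Formula one: F = [x, y, z, L, a, b, FPFH_1..33] -- a 39-D per-point feature."""
    return np.concatenate([xyz, lab, fpfh33])

# The zeros stand in for a real 33-bin FPFH descriptor
# (3 angular features x 11 histogram bins).
f = feature_vector(np.array([1.0, 2.0, 3.0]),    # x, y, z
                   np.array([50.0, 0.0, 0.0]),   # CIELab L, a, b
                   np.zeros(33))
```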
step A2, starting from the initial seed voxels, the adjacent voxels are traversed outwards and the points are clustered; the clustering distance metric D is (formula two):

D = sqrt( λ·Dc² + μ·Ds²/(3·Rseed²) + β·Dhik² )

where λ, μ and β are the influence factors of color, spatial distance and geometric characteristics respectively; Dc is the Euclidean distance in CIELab color space; Ds is the Euclidean distance of a voxel in three-dimensional space; Dhik is the cross-sum (histogram-intersection distance) of the voxel normal-vector distribution histograms; and Rseed, the resolution of the seed voxels in space, determines the number of initial hyper-voxels. After the distances from the voxels in the neighborhood to the seed voxel are calculated, the closest voxel is labeled and its adjacent voxels are added to a search list according to the adjacency map; this iterates until the search boundary of each voxel is reached. The search is complete when all leaf nodes in the adjacency graph have been traversed, and the hyper-voxels are obtained by this segmentation.
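The clustering metric of formula two can be sketched as follows; the influence factors λ, μ, β and the seed resolution are placeholders, since the patent does not fix their values:

```python
import numpy as np

def supervoxel_distance(dc, ds, dhik, lam=1.0, mu=1.0, beta=1.0, r_seed=1.0):
    """Formula two: combine the CIELab distance dc, the spatial distance ds
    (normalized by the seed resolution r_seed) and the normal-histogram
    distance dhik. The lam/mu/beta defaults are placeholders."""
    return float(np.sqrt(lam * dc**2 + mu * ds**2 / (3.0 * r_seed**2) + beta * dhik**2))

d = supervoxel_distance(dc=3.0, ds=0.0, dhik=0.0)  # color-only case: d == dc
```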
The similarity measure defined on the mixed features is as follows: the weight is set to 0.5 to balance the geometric features and the local color-distribution features of the point clouds when finding corresponding points between the CBCT tooth point cloud and the laser-scanned tooth point cloud. The mixed-feature distance is defined as:

d(pu, qv) = (1 − w)·ds(pu, qv) + w·dc(pu, qv)

where ds(·) and dc(·) denote the geometric feature distance and the local color-distribution feature distance of the point clouds respectively, and w, the weight of the color feature, is set to 0.5 so that spatial and color features are combined appropriately and a suitable similarity measure is obtained for different point cloud positions; pu and qv denote spatially corresponding points of the CBCT point cloud data and the laser scanning point cloud data respectively.
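A minimal sketch of the mixed-feature distance, reading ds and dc as plain Euclidean distances in space and color (one plausible concrete choice; the patent does not spell the two component metrics out):

```python
import numpy as np

def mixed_distance(p_xyz, q_xyz, p_color, q_color, w=0.5):
    """d(pu, qv) = (1 - w) * ds + w * dc with the color weight w = 0.5."""
    ds = np.linalg.norm(np.asarray(p_xyz, float) - np.asarray(q_xyz, float))
    dc = np.linalg.norm(np.asarray(p_color, float) - np.asarray(q_color, float))
    return (1 - w) * ds + w * dc

# Same color, spatial distance 5 -> mixed distance 2.5 with w = 0.5.
d = mixed_distance([0, 0, 0], [3, 4, 0], [50, 0, 0], [50, 0, 0])
```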
The mutual-correspondence matching condition based on the mixed features is proposed as follows:
C(u, v) denotes the bidirectional correspondence between the CBCT point cloud data and the laser scanning point cloud data; CDM(u) denotes the corresponding point of data point pu in the CBCT model point cloud Q, and CMD(v) denotes the corresponding point of data point qv in the laser-scanned point cloud P. CDM(u) and CMD(v) may be defined as:

CDM(u) = argmin over v of d(pu, qv)

CMD(v) = argmin over u of d(qv, pu)

where d(pu, qv) and d(qv, pu) are the mixed-feature distance measures defined above, and the point pair (pu, qv) is accepted only when pu and qv correspond to each other in both CDM(u) and CMD(v).
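The mutual-correspondence condition amounts to keeping only pairs that are each other's nearest neighbor; a sketch using plain Euclidean distance in place of the mixed-feature distance:

```python
import numpy as np

def mutual_correspondences(P, Q):
    """Keep only pairs (u, v) with v = CDM(u) = argmin_v d(pu, qv) and
    u = CMD(v) = argmin_u d(qv, pu) simultaneously (mutual nearest neighbors)."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # all pairwise distances
    cdm = d.argmin(axis=1)  # best Q index for every P point
    cmd = d.argmin(axis=0)  # best P index for every Q point
    return [(u, int(cdm[u])) for u in range(len(P)) if cmd[cdm[u]] == u]

P = np.array([[0.0, 0.0], [5.0, 5.0]])
Q = np.array([[0.1, 0.0], [5.0, 5.1]])
pairs = mutual_correspondences(P, Q)  # both pairs are mutual nearest neighbors
```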
the improved ICP framework of the improved model comprises the following steps that the matching problem of corresponding parameters and conversion parameters in an objective function is solved through an improved ICP algorithm, and the improvement specifically comprises the following steps;
b1, adding an attention mechanism to predict soft matching between the point clouds; (ii) a
First, for the CBCT point cloud data and the laser scanning point cloud data, a dynamic graph CNN (DGCNN) generates global feature vectors of the two point clouds using a graph network structure; these vectors contain the local neighborhood information of the feature points, which plays a key role in the subsequent registration. The layer-wise embedding is:

F_i^l = max over j in N(i) of h^l(f_i^(l−1), f_j^(l−1))

where l denotes the network layer, F_i^l denotes the embedding of point i at layer l, and N(i) is the neighborhood of point i in the feature graph.
Then the point cloud feature vectors obtained by DGCNN are used as the input of a conversion module. Its encoder-decoder framework decodes the feature vectors of point cloud sets X and Y; the module adds an attention mechanism which adaptively assigns weights according to the shapes of the point clouds and, by continuously adjusting these weights, modifies the mapping relation m(·) between the point clouds so that X and Y are matched, as shown below:

Φx = Fx + φ(Fx, Fy),  Φy = Fy + φ(Fy, Fx)

where Φx and Φy are the attention-modified feature maps of the inputs Fx and Fy. In Fx → Φx, the structure of the target point cloud X is taken as the standard, and the weights of the feature points of point cloud set Y are modified according to the structure of X. The learning function φ of the attention module is

φ: R^(N×P) × R^(N×P) → R^(N×P)

a nonlinear function whose two inputs enter asymmetrically, where P denotes the embedding dimension.
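The residual attention step Φx = Fx + φ(Fx, Fy) can be sketched with a single-head dot-product attention as a stand-in for the learned module φ (the real module is a trained transformer):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def phi(Fx, Fy):
    """Stand-in for the learned function phi: each feature of X attends over
    the features of Y; note the two inputs are asymmetric."""
    A = softmax(Fx @ Fy.T / np.sqrt(Fx.shape[1]))  # (N, M) attention weights
    return A @ Fy

rng = np.random.default_rng(1)
Fx, Fy = rng.random((5, 8)), rng.random((6, 8))  # toy embeddings, P = 8
Phi_x = Fx + phi(Fx, Fy)   # residual form: Phi_x = Fx + phi(Fx, Fy)
Phi_y = Fy + phi(Fy, Fx)
```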
Step B2: a singular value decomposition (SVD) module is added.
The constructed deep-learning network solves the transformation matrix for the mapped point clouds by singular value decomposition: the mapped point cloud sets and their centroids and covariance matrices are computed, and, to avoid direct hard matching, a probabilistic model assigns each point a distribution over its corresponding points:

m(xi, Y) = softmax(Φy · Φxi^T)

Taking the mean over this distribution generates the mean point of point cloud set X matched into point cloud set Y:

ŷi = Y^T · m(xi, Y)

where, in the above formula, Y is the matrix of points. From the mapping of xi to ŷi, the rigid motion between the target point cloud X and the point cloud Y to be registered is computed.
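Given matched point pairs, the rigid motion is recovered in closed form by SVD of the cross-covariance matrix (the Kabsch/Procrustes solution that the differentiable SVD layer computes):

```python
import numpy as np

def rigid_from_correspondences(X, Yhat):
    """Solve min ||R x_i + T - yhat_i||^2 via SVD of the cross-covariance."""
    cx, cy = X.mean(axis=0), Yhat.mean(axis=0)
    H = (X - cx).T @ (Yhat - cy)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    T = cy - R @ cx
    return R, T

# Recover a known rotation/translation from noiseless correspondences.
rng = np.random.default_rng(0)
X = rng.random((20, 3))
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1]])
T_true = np.array([1.0, -2.0, 0.5])
Y = X @ R_true.T + T_true
R, T = rigid_from_correspondences(X, Y)
```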
Step B3: a loss function combining local similarity and a global geometric constraint is constructed:

Loss1 = (1/N) Σ_{i=1..N} ||(R·xi + T) − yi||²

where R·xi + T is obtained for each key point xi of the input point cloud by the given transformation, N is the number of extracted feature points, yi is the estimated target corresponding point, and R, T is the estimated transformation. Loss1 is obtained by directly computing the Euclidean distance. Considering that the registration task is also constrained by the global geometric transformation, a second loss containing the global geometric constraint is introduced:

Loss2 = ||R^T·Rg − I||² + ||T − Tg||²

where Rg, Tg denote the reference transformation.

In summary, the synthetic loss function is defined as (formula ten):

Loss = a·Loss1 + (1 − a)·Loss2

where a is the weight between Loss1 and Loss2.
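Formula ten can be sketched as below; the concrete form of Loss2 (deviation of (R, T) from a reference transformation Rg, Tg) is an assumption, since the patent only states that it carries the global geometric constraint:

```python
import numpy as np

def registration_loss(X, Y, R, T, R_ref, T_ref, a=0.5):
    """Loss = a*Loss1 + (1-a)*Loss2 (formula ten). Loss1 is the mean squared
    residual ||R xi + T - yi||^2; Loss2 penalizes deviation from a reference
    transformation (R_ref, T_ref) -- an assumed concrete form of the
    global geometric constraint."""
    loss1 = np.mean(np.sum((X @ R.T + T - Y) ** 2, axis=1))
    loss2 = (np.linalg.norm(R.T @ R_ref - np.eye(3)) ** 2
             + np.linalg.norm(T - T_ref) ** 2)
    return a * loss1 + (1 - a) * loss2

# A perfect alignment with the identity transform gives zero loss.
X = np.random.default_rng(0).random((10, 3))
perfect = registration_loss(X, X, np.eye(3), np.zeros(3), np.eye(3), np.zeros(3))
```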
Compared with traditional methods based on geometric features, the hyper-voxel-based CBCT and laser scanning point cloud data tooth registration method provided by the invention has the following advantages: (1) color information of the dental point cloud is added, so the point clouds align better; (2) similarity is measured on the combined features, and the weight parameter between color and spatial information is adjusted dynamically; (3) the registration algorithm establishes correspondences and estimates transformation parameters within an improved classic ICP iterative framework, and this model overcomes the classic ICP algorithm's sensitivity to the initial position and slow convergence. In conclusion, the invention effectively reduces the number of registration points and achieves good matching results even under poor initial conditions.
Drawings
The invention is described in further detail below with reference to the following figures and detailed description:
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a schematic diagram of the network architecture of the improved ICP registration algorithm of the present invention;
FIG. 3 is a schematic diagram of a point cloud segmentation process based on the hyper-voxel method.
Detailed Description
As shown in the figure, a tooth registration method based on CBCT of super voxel and laser scanning point cloud data comprises the following steps;
firstly, extracting a tooth model from the scanning data of the oral CBCT according to a region growing method;
secondly, processing the tooth model by using a laser scanner, and separating a dental crown in the tooth model from an abutment according to the characteristics of the tooth;
then converting tooth models from different sources into tooth point cloud data, adding color information to the tooth point cloud data to assist initial geometric alignment of the tooth point cloud, and realizing down-sampling of the point cloud based on a hyper-voxel method;
finally, combining the mixed features of the tooth point cloud data and defining similarity measurement according to the mixed features, dynamically adjusting the weight between the color and the space information of the tooth point cloud data, and finally measuring the mixed features of the tooth point cloud data to construct alignment measurement; and proposing a mutual corresponding matching condition based on the mixed characteristics;
the registration method is an improved ICP registration algorithm carried out under an ICP framework of an improved model, and the improved model comprises a point cloud embedding network, an attention-based module and a pointer generation layer for approximate combination matching and a differentiable singular value decomposition layer for extracting final rigid transformation; the registration of two pieces of tooth point cloud data with color information from different angles can be realized.
The method for converting tooth models from different sources into tooth point cloud data comprises the following methods;
scanning tooth bodies from different sources through oral CBCT, and reconstructing a three-dimensional tooth body model according to scanned CBCT data, wherein in the step, the CBCT data are reconstructed by adopting a region growing method to obtain a comprehensive and accurate three-dimensional tooth body information model and are converted into point cloud data;
and step two, obtaining a dental crown model by a laser scanning technology.
The second step comprises the following specific steps:
collecting dental crown model point cloud data by adopting a handheld three-dimensional laser scanner; before measurement, equipment needs to be calibrated to ensure the accuracy of output data, an STL format is used as a file format of the output data to reconstruct a three-dimensional model, and finally the reconstructed three-dimensional model is converted into point cloud data.
The method for adding the color information comprises the steps of classifying teeth according to definitions of dental medical textbooks, classifying upper and lower jaw teeth by the same method, and classifying two different point cloud data by 14 different color labels.
The hyper-voxel method is a hyper-voxel segmentation method, and specifically comprises the following steps;
step A1, dividing the two dental 3D point cloud data into a plurality of voxels with the same size by using an octree, calculating the average curvature of each voxel, and taking the voxel with the minimum average curvature as a seed voxel to avoid over-segmentation;
each three-dimensional voxel is clustered according to a 39-dimensional vector, which is expressed as:
F=[x,y,z,L,a,b,FPFH1,...,33]a first formula;
wherein x, y and z are space coordinates, L, a and b are color values of CIELab space, FPFH1,...33Extracting 33 elements of the local geometric features of the point cloud from the fast point feature histogram; FPFH has the characteristics of describing a local surface model with space invariance, and the construction method is that for each extracted tooth point, a neighborhood sphere is constructed by the radius of 0.2, the neighborhood sphere is expanded along the x, y and z axes and is divided into 11 histogram equal parts, and finally a 33-dimensional descriptor is obtained;
step A2, starting from the initial seed voxel, traversing the adjacent voxels outwards, and clustering the point cloud, wherein the distance metric D of the clustering is represented as:
Figure BDA0002715665470000081
where λ, μ and β correspond to the influencing factors of color, spatial distance and geometric characteristics, D, respectivelycIs the Euclidean distance value, D, of the CIELab spacesIs the Euclidean distance value, D, of a voxel in three-dimensional spacehikIs the cross-sum of the voxel normal vector distribution histograms;
Rseedthe resolution of the seed voxels in the space determines the number of initial hyper-voxels; after the distance from the voxel in the neighborhood to the seed voxel is calculated, marking the voxel with the closest distance, and adding the adjacent voxel into a search list according to an adjacent map; iterating until a search boundary for each voxel is reached; the search completion conditions are: all leaf nodes in the adjacency graph are traversed, and hyper-voxels are obtained through segmentation.
The defining of the similarity measure according to the mixed features comprises
Setting the weight as 0.5 to balance the geometrical characteristics and the local color distribution characteristics of the point cloud, finding the corresponding points between the CBCT dental point cloud and the laser scanning dental point cloud, and defining a measurement formula of the mixed characteristic distance as follows:
Figure BDA0002715665470000091
in the formula ds()、dc() Respectively representing the geometrical characteristic distance and the local color distribution characteristic distance of the point cloud, wherein w is the weight of the color characteristic and is set to be 0.5, so that the space and the color characteristic are properly combined, and proper similarity measurement is carried out according to different point cloud positions; p is a radical ofu,qvRespectively representing the spatial corresponding points of the CBCT point cloud data and the laser scanning point cloud data.
The proposing of the mutual corresponding matching condition based on the mixed features comprises;
c (u, v) represents a bidirectional corresponding relation between the CBCT point cloud data and the laser scanning point cloud data, CDM (u) represents a corresponding point of a data point pu in a CBCT model point cloud Q, and CMD (v) represents a corresponding point of a data point qu in a laser scanning point cloud P; CDM (u) and CMD (v) may be defined as:
Figure BDA0002715665470000092
Figure BDA0002715665470000093
where d (pu, qv) and d (qv, pu) are measures of the mixture characteristic distance defined in equation (4), and the point pairs pu and qv are mutually corresponding in cdm (u) and cmd (v);
the improved ICP framework of the improved model comprises the following steps that the matching problem of corresponding parameters and conversion parameters in an objective function is solved through an improved ICP algorithm, and the improvement specifically comprises the following steps;
b1, adding an attention mechanism to predict soft matching between the point clouds; (ii) a
Firstly, CBCT point cloud data and laser scanning point cloud data, wherein an initial point cloud characteristic relation (DGCNN) generates global characteristic vectors of two pieces of point clouds by using a graph network structure, and the global characteristic vectors contain local neighborhood information of characteristic points, which plays a key role in subsequent registration;
Figure BDA0002715665470000094
where L represents the number of network layers,
Figure BDA0002715665470000095
indicating that the point i is embedded in the l layer;
then, the point cloud characteristic vector obtained by DGCNN is used as the input of a conversion module, the characteristic vectors of the point cloud sets X and Y are decoded by using a coding frame, the module increases an attention mechanism, and the mapping relation m (-) between the point clouds is modified by continuously adjusting the weight according to different self-adaptive distribution weights of the point cloud shapes, so that the matching of the point cloud sets X and Y is realized, as shown in the following formula;
Φ_x = F_x + φ(F_x, F_y),  Φ_y = F_y + φ(F_y, F_x)
where Φ_x and Φ_y are the modified mappings of the inputs F_x and F_y, respectively; F_x → Φ_x takes the structure of the target point cloud X as the reference, and the weights of the feature points of point cloud set Y are modified according to the structure of point cloud set X. The learning function of the attention module is φ:
φ: R^(N×P) × R^(N×P) → R^(N×P) is a nonlinear function, where P denotes the embedding dimension; φ is an asymmetric function of its inputs;
step B2, adding a singular value decomposition module (SVD);
The constructed deep-learning network solves the transformation matrix for the mapped point clouds by singular value decomposition, computing the mapped point cloud sets and their centroids and covariance matrices. To avoid direct matching, a probability model is used to assign a probability distribution over the corresponding points; the model is:
m(x_i, Y) = softmax(Φ_Y · Φ_{x_i}^T)
Taking the mean of this probability distribution generates the mean points of point cloud set X matched onto point cloud set Y:
ŷ_i = Y^T · m(x_i, Y)
In the above formula, Y ∈ R^(N×3) is the matrix of points. From the mapping of x_i to ŷ_i, the rigid motion between the target point cloud X and the point cloud Y to be registered is computed;
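The soft matching and mean-point computation above can be sketched with plain NumPy, assuming the attention-modified embeddings Φ_x and Φ_y are already available as arrays (toy stand-ins for the learned features):

```python
import numpy as np

def soft_match_means(phi_x, phi_y, Y):
    """Probabilistic soft matching and mean corresponding points.

    phi_x: (N, d) attention-modified embeddings of X
    phi_y: (M, d) attention-modified embeddings of Y
    Y:     (M, 3) coordinates of the point cloud to be registered

    Returns an (N, 3) array: each x_i is matched to the
    probability-weighted average of Y (row-wise softmax of similarities).
    """
    scores = phi_x @ phi_y.T                     # (N, M) similarity scores
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    m = np.exp(scores)
    m /= m.sum(axis=1, keepdims=True)            # each row: P(y | x_i)
    return m @ Y                                 # mean point per x_i
```

With sharply peaked scores the soft matching approaches a hard nearest-neighbor assignment, which is what makes the subsequent SVD step differentiable.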
step B3, constructing a loss function combining the local similarity and the global geometric constraint:
Loss_1 = (1/N) · Σ_{i=1}^{N} ||(R·x_i + T) − y_i||²   (formula eight)

[formula nine: Loss_2, the loss term with the global geometric constraint]
where (R·x_i + T) is obtained by applying the given transformation to each keypoint x_i in the input point cloud, N is the number of extracted feature points, y_i is the estimated target corresponding point, and R, T are the estimated transformation. Loss_1 is obtained by directly computing the Euclidean distance; considering that the registration task is also constrained by the global geometric transformation, another loss Loss_2 containing the global geometric constraint is introduced.
In summary, the combined loss function is defined as:

Loss = a·Loss_1 + (1 − a)·Loss_2   (formula ten)

where a is the weight balancing Loss_1 and Loss_2.
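A minimal sketch of the combined loss, assuming hypothetical inputs; the orthogonality penalty used for Loss_2 below is one plausible global geometric constraint, not necessarily the patent's exact term:

```python
import numpy as np

def loss1(X, Y, R, T):
    """Mean squared Euclidean distance after applying (R, T) to X."""
    return np.mean(np.sum((X @ R.T + T - Y) ** 2, axis=1))

def loss2(R):
    """A global geometric constraint: penalize departure of R from an
    orthogonal matrix (one plausible choice for the constraint term)."""
    return np.sum((R.T @ R - np.eye(3)) ** 2)

def combined_loss(X, Y, R, T, a=0.8):
    """Loss = a * Loss1 + (1 - a) * Loss2, with weight a in [0, 1]."""
    return a * loss1(X, Y, R, T) + (1 - a) * loss2(R)
```

With a perfect alignment (identity transform, X = Y) both terms vanish, so the combined loss is zero.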
Embodiment:
As shown in fig. 1, the flowchart of the registration method of the present invention: first, dental data from different sources are obtained. On the one hand, the patient's CBCT data are used for reconstruction, and the tooth model is extracted from the CBCT data by a region-growing method. On the other hand, a three-dimensional dental model is digitized with a laser scanner, and the crown is separated from the abutment according to the characteristics of the tooth. The tooth models from different sources are then converted into point cloud data; color information is added to assist the initial geometric alignment, and the point clouds are segmented with a hyper-voxel method to reduce the sampling amount. Similarity is measured with the combined features, the weight between color and spatial information is adjusted dynamically, and the mixed features are measured to construct the alignment metric. Finally, an improved deep-learning-based ICP algorithm performs the registration; this strategy improves both registration efficiency and accuracy.
As shown in fig. 2, the key of the ICP algorithm is to iteratively refine the transformation matrix between two point sets so as to minimize the distance between corresponding points. Since the ICP algorithm is sensitive to the initial position and converges slowly, many methods attempt to improve registration accuracy by using heuristics to improve matching accuracy or by searching larger motion spaces, but these algorithms are slower than ICP and offer limited accuracy gains. To address these problems of ICP, a learning-based algorithm is introduced that quickly obtains the transformation matrix between two point clouds and registers them rapidly. The improved ICP algorithm embeds the point clouds into a high-dimensional space, encodes context information with an attention module, and aligns the point clouds to be registered with a singular value decomposition module.
The key modules of the network architecture shown in fig. 2 are as follows:
(1) Initial point cloud feature network (DGCNN)
The target point cloud X and the point cloud Y to be registered are input, and DGCNN constructs a k-NN-based graph G = (V, E) over each point cloud set. Through a nonlinear function, the point cloud set is embedded point by point into a high-dimensional space; matching point pairs of the two point clouds are searched, and the features of each point on the point clouds are found, thereby generating the mapping relation m(·) for the rigid transformation.
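The k-NN graph that DGCNN operates on can be built with a brute-force NumPy sketch (adequate for small clouds; in practice a KD-tree would be used):

```python
import numpy as np

def knn_graph(points, k):
    """Edge list of a k-nearest-neighbour graph G = (V, E), the input
    structure DGCNN builds before edge convolution (brute-force sketch).

    points: (N, 3) array of coordinates; returns directed edges (i, j).
    """
    d = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    np.fill_diagonal(d, np.inf)           # exclude self-loops
    nbrs = np.argsort(d, axis=1)[:, :k]   # k nearest indices per point
    return [(i, int(j)) for i in range(len(points)) for j in nbrs[i]]
```

Each point contributes k directed edges; the edge-convolution layers then aggregate features over these neighborhoods.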
(2) Attention mechanism module
The point-cloud feature vectors obtained from DGCNN are used as the input of the transformation module; the feature vectors of the point cloud sets are decoded with an encoding framework, weights are adaptively assigned according to the different point-cloud shapes and continuously adjusted to modify the mapping relation m(·) between the point clouds, thereby matching point cloud set X with point cloud set Y.
(3) Singular value decomposition module (SVD)
The ICP algorithm solves the transformation matrix between point clouds by singular value decomposition; the decomposition process is simple and fast. The constructed deep-learning network likewise uses singular value decomposition to solve the transformation matrix for the mapped point clouds, computing the mapped point cloud sets X̂ and Ŷ and their centroids and covariance matrices. To avoid direct matching of Y to X, a probability model assigns a probability distribution over the points of Y corresponding to each point x_i; the mean of this distribution then yields the mean points of point cloud set X matched onto point cloud set Y.
As shown in fig. 3, the flowchart of the hyper-voxel-based point cloud segmentation process of the present invention; the specific steps are as follows:
1. Over-segment the input point cloud data to obtain hyper-voxels.
Voxelize the point cloud data, and construct the corresponding adjacency graph in the 26-neighborhood of each voxel by traversing a KD-tree.
Grid the three-dimensional space of the point cloud and select, within each grid cell, the voxel closest to the center as an initial seed voxel. Filter the initial seed voxels: compute the number of voxels within the neighborhood radius of each seed point and delete seed points whose count is below a threshold. For all voxels, construct the 39-dimensional feature vector F = [x, y, z, L, a, b, FPFH_1, ..., FPFH_33].
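The isolated-seed filter above can be sketched as follows; the radius and count threshold are free parameters (the patent only says "a certain threshold"):

```python
import numpy as np

def filter_seeds(seed_idx, centers, radius, min_count):
    """Keep seed voxels that have at least `min_count` other voxel
    centers within `radius` (deletes isolated seeds).

    seed_idx: list of candidate seed indices into `centers`
    centers:  (N, 3) voxel center coordinates (seeds are among them)
    """
    d = np.linalg.norm(centers[seed_idx][:, None] - centers[None, :], axis=2)
    counts = (d < radius).sum(axis=1) - 1  # subtract the seed itself
    return [s for s, c in zip(seed_idx, counts) if c >= min_count]
```

Seeds sitting in sparse regions of the voxel grid are dropped, so region growing only starts from well-supported locations.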
2. Normalize the difference between the geodesic distance and the Euclidean distance between hyper-voxel center points to obtain the similarity measure between hyper-voxels;
3. Fuse the hyper-voxels according to the hyper-voxel similarity measure:
Fit a plane to each hyper-voxel by least squares and sort all hyper-voxels by residual value, taking the hyper-voxel with the smallest residual as the initial seed and obtaining the hyper-voxels adjacent to the seed voxel. For each adjacent hyper-voxel, compute the similarity measure between it and the hyper-voxel seed: if the similarity is greater than a threshold, add the adjacent hyper-voxel to the current region and compute its residual value; if that residual is less than a threshold, add the adjacent hyper-voxel to the seed set. After all adjacent hyper-voxels of a seed have been traversed, remove the current seed from the seed set.
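The fusion loop above amounts to region growing over the hyper-voxel adjacency graph; a sketch under the assumption that residuals, adjacency and the pairwise similarity measure have been precomputed:

```python
import numpy as np

def grow_regions(residual, adjacency, similarity, sim_thr, res_thr):
    """Region growing over hyper-voxels: start from the smallest-residual
    supervoxel, merge similar neighbours, and promote low-residual
    neighbours to seeds.

    residual:   (n,) plane-fit residual per hyper-voxel
    adjacency:  dict i -> iterable of adjacent hyper-voxel indices
    similarity: (n, n) pairwise similarity measure
    """
    unvisited = set(range(len(residual)))
    regions = []
    while unvisited:
        seed0 = min(unvisited, key=lambda i: residual[i])  # flattest start
        seeds, region = [seed0], {seed0}
        unvisited.discard(seed0)
        while seeds:
            s = seeds.pop()
            for nb in adjacency.get(s, ()):
                if nb in unvisited and similarity[s, nb] > sim_thr:
                    region.add(nb)           # merge into current region
                    unvisited.discard(nb)
                    if residual[nb] < res_thr:
                        seeds.append(nb)     # promote to seed
        regions.append(sorted(region))
    return regions
```

Both thresholds are the "certain threshold" values of the text; tightening sim_thr yields more, smaller regions.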
The invention provides a novel color point cloud registration method that uses hyper-voxel segmentation to extract sparse feature points, reducing the burden of large-scale registration tasks. A feature-matching similarity measure that dynamically combines spatial information and color information is proposed. Finally, the point clouds are registered by improving the classical ICP iterative algorithm; the method better handles data overlap and poor initial positions in the real world, while greatly improving registration efficiency and reducing registration time.
Finally, it is noted that the disclosed embodiments are intended to aid in further understanding of the invention, but those skilled in the art will appreciate that: various substitutions and modifications are possible without departing from the spirit and scope of the invention and the appended claims. Therefore, the invention should not be limited to the embodiments disclosed, but the scope of the invention is defined by the appended claims.

Claims (7)

1. A CBCT and laser scanning point cloud data tooth registration method based on hyper-voxels is characterized in that:
the method comprises the following steps;
firstly, extracting a tooth model from the scanning data of the oral CBCT according to a region growing method;
secondly, processing the tooth model by using a laser scanner, and separating a dental crown in the tooth model from an abutment according to the characteristics of the tooth;
then converting tooth models from different sources into tooth point cloud data, adding color information to the tooth point cloud data to assist initial geometric alignment of the tooth point cloud, and realizing down-sampling of the point cloud based on a hyper-voxel method;
finally, combining the mixed features of the tooth point cloud data and defining similarity measurement according to the mixed features, dynamically adjusting the weight between the color and the space information of the tooth point cloud data, and finally measuring the mixed features of the tooth point cloud data to construct alignment measurement; and proposing a mutual corresponding matching condition based on the mixed characteristics;
the registration method is an improved ICP registration algorithm carried out under the ICP framework of an improved model; the improved model comprises a point cloud embedding network, an attention-based module with a pointer generation layer for approximate combinatorial matching, and a differentiable singular value decomposition layer for extracting the final rigid transformation; registration of two pieces of tooth point cloud data with color information from different viewing angles can thereby be achieved.
2. The method for tooth registration based on CBCT and laser scanning point cloud data of super voxel as claimed in claim 1, wherein: converting the tooth models from different sources into tooth point cloud data comprises the following steps:
Step one, scanning teeth from different sources by oral CBCT and reconstructing a three-dimensional tooth model from the scanned CBCT data; in this step, the CBCT data are reconstructed by a region growing method to obtain a comprehensive and accurate three-dimensional tooth information model, which is then converted into point cloud data;
Step two, obtaining a dental crown model by a laser scanning technique.
3. The method of claim 2, wherein the second step comprises the following specific steps:
A handheld three-dimensional laser scanner is used to collect the dental crown model point cloud data; before measurement, the equipment is calibrated to ensure the accuracy of the output data; the STL format is used as the file format of the output data to reconstruct the three-dimensional model, and finally the reconstructed three-dimensional model is converted into point cloud data.
4. The method for tooth registration based on CBCT and laser scanning point cloud data of super voxel as claimed in claim 1, wherein: the method for adding the color information comprises classifying the teeth according to the definitions of dental medical textbooks; the upper and lower jaw teeth are classified by the same method, and the two different point cloud data sets are labeled with 14 different color labels.
5. The method for tooth registration based on CBCT and laser scanning point cloud data of super voxel as claimed in claim 1, wherein: the hyper-voxel method is a hyper-voxel segmentation method, and specifically comprises the following steps;
step A1, dividing the two dental 3D point cloud data into a plurality of voxels with the same size by using an octree, calculating the average curvature of each voxel, and taking the voxel with the minimum average curvature as a seed voxel to avoid over-segmentation;
Each three-dimensional voxel is clustered according to a 39-dimensional vector, which is expressed as:
F = [x, y, z, L, a, b, FPFH_1, ..., FPFH_33]   (formula one)
wherein x, y, z are the spatial coordinates; L, a, b are the color values in CIELab space; and FPFH_1, ..., FPFH_33 are the 33 elements of the fast point feature histogram that extract the local geometric features of the point cloud. FPFH describes a local surface model with spatial invariance; it is constructed by building, for each extracted tooth point, a neighborhood sphere of radius 0.2, expanding it along the x, y and z axes, dividing it into 11 histogram bins, and finally obtaining a 33-dimensional descriptor;
step A2, starting from the initial seed voxel, traversing the adjacent voxels outwards, and clustering the point cloud, wherein the distance metric D of the clustering is represented as:
D = sqrt( λ·D_c² + μ·D_s²/(3·R_seed²) + β·D_HiK² )   (formula two)
where λ, μ and β are the influence factors of color, spatial distance and geometric characteristics, respectively; D_c is the Euclidean distance in CIELab color space; D_s is the Euclidean distance of the voxel in three-dimensional space; and D_HiK is the cross distance between the voxel normal-vector distribution histograms.
R_seed, the spatial resolution of the seed voxels, determines the number of initial hyper-voxels. After the distances from the voxels in the neighborhood to the seed voxel are calculated, the closest voxel is labeled and its adjacent voxels are added to a search list according to the adjacency graph; this iterates until the search boundary of each voxel is reached. The search is complete when all leaf nodes in the adjacency graph have been traversed, and the hyper-voxels are obtained by segmentation.
6. The method for tooth registration based on CBCT and laser scanning point cloud data of super voxel as claimed in claim 1, wherein: the defining of the similarity measure according to the mixed features comprises
The weight is set to 0.5 to balance the geometric features and the local color distribution features of the point clouds; the corresponding points between the CBCT tooth point cloud and the laser-scan tooth point cloud are found, and the measurement formula of the mixed-feature distance is defined as:
d(p_u, q_v) = (1 − w)·d_s(p_u, q_v) + w·d_c(p_u, q_v)
where d_s(·) and d_c(·) respectively represent the geometric feature distance and the local color distribution feature distance of the point clouds; w, the weight of the color feature, is set to 0.5, combining the spatial and color features appropriately and performing a suitable similarity measurement for different point cloud positions; p_u and q_v respectively represent spatially corresponding points of the CBCT point cloud data and the laser-scan point cloud data.
7. The method for tooth registration based on CBCT and laser scanning point cloud data of super voxel as claimed in claim 1, wherein: the proposing of the mutual corresponding matching condition based on the mixed features comprises;
C(u, v) represents the bidirectional correspondence between the CBCT point cloud data and the laser-scan point cloud data, CDM(u) represents the corresponding point of data point p_u in the CBCT model point cloud Q, and CMD(v) represents the corresponding point of data point q_v in the laser-scan point cloud P; CDM(u) and CMD(v) may be defined as:
CDM(u) = argmin_{q_v ∈ Q} d(p_u, q_v)

CMD(v) = argmin_{p_u ∈ P} d(q_v, p_u)
where d(p_u, q_v) and d(q_v, p_u) are the measures of the mixed-feature distance defined in equation (4); the point pair p_u and q_v correspond to each other when CDM(u) and CMD(v) are mutual;
The improved ICP framework of the improved model: the matching problem of the correspondence parameters and the transformation parameters in the objective function is solved by an improved ICP algorithm; the improvements are specifically as follows:
Step B1, adding an attention mechanism to predict soft matching between the point clouds;
First, for the CBCT point cloud data and the laser-scan point cloud data, the initial point-cloud feature network (DGCNN) uses a graph network structure to generate global feature vectors of the two point clouds; these vectors contain the local neighborhood information of the feature points, which plays a key role in the subsequent registration:
x_i^l = f({x_j^(l−1) : j ∈ N_i})

where l is the network layer index and x_i^l denotes the embedding of point i at layer l;
Then, the point-cloud feature vectors obtained from DGCNN are used as the input of the transformation module, and the feature vectors of point cloud sets X and Y are decoded with an encoding framework. The module adds an attention mechanism: weights are adaptively assigned according to the different point-cloud shapes and continuously adjusted to modify the mapping relation m(·) between the point clouds, thereby matching point cloud sets X and Y, as shown below:
Φ_x = F_x + φ(F_x, F_y),  Φ_y = F_y + φ(F_y, F_x)
where Φ_x and Φ_y are the modified mappings of the inputs F_x and F_y, respectively; F_x → Φ_x takes the structure of the target point cloud X as the reference, and the weights of the feature points of point cloud set Y are modified according to the structure of point cloud set X. The learning function of the attention module is φ:
φ: R^(N×P) × R^(N×P) → R^(N×P) is a nonlinear function, where P denotes the embedding dimension; φ is an asymmetric function of its inputs;
Step B2, adding a singular value decomposition module (SVD);
The constructed deep-learning network solves the transformation matrix for the mapped point clouds by singular value decomposition, computing the mapped point cloud sets and their centroids and covariance matrices. To avoid direct matching, a probability model is used to assign a probability distribution over the corresponding points; the model is:
m(x_i, Y) = softmax(Φ_Y · Φ_{x_i}^T)
Taking the mean of this probability distribution generates the mean points of point cloud set X matched onto point cloud set Y:
ŷ_i = Y^T · m(x_i, Y)
In the above formula, Y ∈ R^(N×3) is the matrix of points. From the mapping of x_i to ŷ_i, the rigid motion between the target point cloud X and the point cloud Y to be registered is computed;
step B3, constructing a loss function combining the local similarity and the global geometric constraint:
Loss_1 = (1/N) · Σ_{i=1}^{N} ||(R·x_i + T) − y_i||²   (formula eight)

[formula nine: Loss_2, the loss term with the global geometric constraint]
where (R·x_i + T) is obtained by applying the given transformation to each keypoint x_i in the input point cloud, N is the number of extracted feature points, y_i is the estimated target corresponding point, and R, T are the estimated transformation. Loss_1 is obtained by directly computing the Euclidean distance; considering that the registration task is also constrained by the global geometric transformation, another loss Loss_2 containing the global geometric constraint is introduced.
In summary, the combined loss function is defined as:

Loss = a·Loss_1 + (1 − a)·Loss_2   (formula ten)

where a is the weight balancing Loss_1 and Loss_2.
CN202011072708.XA 2020-10-09 2020-10-09 Super-voxel-based CBCT and laser scanning point cloud data tooth registration method Active CN112200843B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011072708.XA CN112200843B (en) 2020-10-09 2020-10-09 Super-voxel-based CBCT and laser scanning point cloud data tooth registration method


Publications (2)

Publication Number Publication Date
CN112200843A true CN112200843A (en) 2021-01-08
CN112200843B CN112200843B (en) 2023-05-23

Family

ID=74012613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011072708.XA Active CN112200843B (en) 2020-10-09 2020-10-09 Super-voxel-based CBCT and laser scanning point cloud data tooth registration method

Country Status (1)

Country Link
CN (1) CN112200843B (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105447908A (en) * 2015-12-04 2016-03-30 山东山大华天软件有限公司 Dentition model generation method based on oral cavity scanning data and CBCT (Cone Beam Computed Tomography) data
US20190318536A1 (en) * 2016-06-20 2019-10-17 Ocean Maps GmbH Method for Generating 3D Data Relating to an Object
CN106600622A (en) * 2016-12-06 2017-04-26 西安电子科技大学 Point cloud data partitioning method based on hyper voxels
CN108765474A (en) * 2018-04-17 2018-11-06 天津工业大学 A kind of efficient method for registering for CT and optical scanner tooth model
CN108665533A (en) * 2018-05-09 2018-10-16 西安增材制造国家研究院有限公司 A method of denture is rebuild by tooth CT images and 3 d scan data
CN109493372A (en) * 2018-10-24 2019-03-19 华侨大学 The product point cloud data Fast global optimization method for registering of big data quantity, few feature

Non-Patent Citations (3)

Title
TIAN,FUJING ET AL.: "Supervoxel based Point Cloud Segmentation Algorithm", 《OPTOELECTRONIC IMAGING AND MULTIMEDIA TECHNOLOGY VI》 *
张雅玲 et al.: "GCNN-based tooth segmentation algorithm for CBCT-simulated intraoral-scan point cloud data", 《Journal of Computer-Aided Design & Computer Graphics》 *
白慧鹏: "Research on denoising algorithms for terrestrial three-dimensional laser point clouds", 《China Excellent Doctoral and Master's Theses Full-text Database (Master)》 *

Cited By (22)

Publication number Priority date Publication date Assignee Title
CN112801945A (en) * 2021-01-11 2021-05-14 西北大学 Depth Gaussian mixture model skull registration method based on dual attention mechanism feature extraction
CN112767347A (en) * 2021-01-18 2021-05-07 上海商汤智能科技有限公司 Image registration method and device, electronic equipment and storage medium
CN112989954A (en) * 2021-02-20 2021-06-18 山东大学 Three-dimensional tooth point cloud model data classification method and system based on deep learning
CN112989954B (en) * 2021-02-20 2022-12-16 山东大学 Three-dimensional tooth point cloud model data classification method and system based on deep learning
CN112862974A (en) * 2021-03-12 2021-05-28 东南大学 Tooth veneering model generation and thickness measurement method based on oral scanning point cloud
CN112862974B (en) * 2021-03-12 2024-02-23 东南大学 Tooth veneering model generation and thickness measurement method based on oral cavity scanning point cloud
CN112967219A (en) * 2021-03-17 2021-06-15 复旦大学附属华山医院 Two-stage dental point cloud completion method and system based on deep learning network
CN112967219B (en) * 2021-03-17 2023-12-05 复旦大学附属华山医院 Two-stage dental point cloud completion method and system based on deep learning network
CN113077502A (en) * 2021-04-16 2021-07-06 中南大学 Dental space registration method based on multi-layer spherical surface generation points in marker sphere
CN113516784B (en) * 2021-07-27 2023-05-23 四川九洲电器集团有限责任公司 Tooth segmentation modeling method and device
CN113516784A (en) * 2021-07-27 2021-10-19 四川九洲电器集团有限责任公司 Tooth segmentation modeling method and device
CN113837326A (en) * 2021-11-30 2021-12-24 自然资源部第一海洋研究所 Airborne laser sounding data registration method based on characteristic curve
EP4280155A1 (en) 2022-05-16 2023-11-22 Institut Straumann AG Improved manufacturing of dental implants based on digital scan data alignment
CN116485809A (en) * 2022-07-01 2023-07-25 山东财经大学 Tooth example segmentation method and system based on self-attention and receptive field adjustment
CN116485809B (en) * 2022-07-01 2023-12-15 山东财经大学 Tooth example segmentation method and system based on self-attention and receptive field adjustment
WO2024088360A1 (en) * 2022-10-26 2024-05-02 上海时代天使医疗器械有限公司 Method for registering three-dimensional digital models of dentition
CN115830287A (en) * 2023-02-20 2023-03-21 汉斯夫(杭州)医学科技有限公司 Tooth point cloud fusion method, equipment and medium based on laser oral scanning and CBCT reconstruction
CN115830287B (en) * 2023-02-20 2023-12-12 汉斯夫(杭州)医学科技有限公司 Tooth point cloud fusion method, device and medium based on laser mouth scanning and CBCT reconstruction
CN116817771A (en) * 2023-08-28 2023-09-29 南京航空航天大学 Aerospace part coating thickness measurement method based on cylindrical voxel characteristics
CN116817771B (en) * 2023-08-28 2023-11-17 南京航空航天大学 Aerospace part coating thickness measurement method based on cylindrical voxel characteristics
CN117152446A (en) * 2023-10-31 2023-12-01 昆明理工大学 Improved LCCP point cloud segmentation method based on Gaussian curvature and local convexity
CN117152446B (en) * 2023-10-31 2024-02-06 昆明理工大学 Improved LCCP point cloud segmentation method based on Gaussian curvature and local convexity

Also Published As

Publication number Publication date
CN112200843B (en) 2023-05-23

Similar Documents

Publication Publication Date Title
CN112200843B (en) Super-voxel-based CBCT and laser scanning point cloud data tooth registration method
JP7493464B2 (en) Automated canonical pose determination for 3D objects and 3D object registration using deep learning
CN111862171B (en) CBCT and laser scanning point cloud data tooth registration method based on multi-view fusion
CN109785374B (en) Automatic real-time unmarked image registration method for navigation of dental augmented reality operation
CN104851123B (en) A kind of three-dimensional face change modeling method
CA3024372A1 (en) Method for estimating at least one of shape, position and orientation of a dental restoration
CN111784754B (en) Tooth orthodontic method, device, equipment and storage medium based on computer vision
CN112085821A (en) Semi-supervised-based CBCT (cone beam computed tomography) and laser scanning point cloud data registration method
CN115619773B (en) Three-dimensional tooth multi-mode data registration method and system
WO2020136243A1 (en) Tooth segmentation using tooth registration
CN112785609B (en) CBCT tooth segmentation method based on deep learning
US20230394687A1 (en) Oral image processing device and oral image processing method
CN113052902A (en) Dental treatment monitoring method
CN114549540A (en) Method for automatically fusing oral scanning tooth data and CBCT (Cone Beam computed tomography) data and application thereof
CN113706514B (en) Focus positioning method, device, equipment and storage medium based on template image
Chen et al. Hierarchical CNN-based occlusal surface morphology analysis for classifying posterior tooth type using augmented images from 3D dental surface models
Ben-Hamadou et al. 3DTeethSeg'22: 3D Teeth Scan Segmentation and Labeling Challenge
Tian et al. Efficient tooth gingival margin line reconstruction via adversarial learning
CN115830287B (en) Tooth point cloud fusion method, device and medium based on laser mouth scanning and CBCT reconstruction
CN111986216A (en) RSG liver CT image interactive segmentation algorithm based on neural network improvement
CN116152235A (en) Cross-modal synthesis method for medical image from CT (computed tomography) to PET (positron emission tomography) of lung cancer
CN114092643A (en) Soft tissue self-adaptive deformation method based on mixed reality and 3DGAN
Xie et al. Automatic Individual Tooth Segmentation in Cone-Beam Computed Tomography Based on Multi-Task CNN and Watershed Transform
Dhar et al. Automatic tracing of mandibular canal pathways using deep learning
Trabelsi et al. 3D Active Shape Model for CT-scan liver segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant