WO2021212940A1 - Method for segmenting a three-dimensional digital model of a dental jaw - Google Patents

Method for segmenting a three-dimensional digital model of a dental jaw

Info

Publication number
WO2021212940A1
WO2021212940A1 (PCT/CN2021/073234)
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional digital
digital model
jaw
point
dental
Prior art date
Application number
PCT/CN2021/073234
Other languages
English (en)
French (fr)
Inventor
方可
Original Assignee
宁波深莱医疗科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 宁波深莱医疗科技有限公司 filed Critical 宁波深莱医疗科技有限公司
Priority to US17/614,541 priority Critical patent/US11989934B2/en
Publication of WO2021212940A1 publication Critical patent/WO2021212940A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24147Distances to closest patterns, e.g. nearest neighbour classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30036Dental; Teeth
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Definitions

  • The present application relates generally to methods for segmenting a three-dimensional digital model of a dental jaw, and in particular to methods that use a deep-learning artificial neural network for the segmentation.
  • Existing semi-automatic segmentation methods for three-dimensional dental models usually include the following steps: first, an algorithm automatically strips the gingiva from the model; then, feature points are manually annotated on each tooth; finally, an algorithm automatically separates the teeth on that basis. In many cases such methods do not cut the gingiva cleanly enough and require later human intervention, which makes them inefficient.
  • One aspect of this application provides a method for segmenting a three-dimensional digital model of a dental jaw, including: obtaining a three-dimensional digital model of the dental jaw to be segmented; converting it into a point cloud; sampling the point cloud to obtain sampling points; extracting features from the sampling points; classifying the sampling points with a trained DGCNN network based on the extracted features; and classifying the other points in the point cloud with a KNN algorithm based on the classified sampling points, where classifying a point means assigning the facet of the model that the point represents to a particular tooth or to the gingiva.
  • the sampling may be uniform sampling.
  • The method may further include positioning the model to be segmented at a reference position, the DGCNN network classifying the sampling points based on the model positioned at the reference position.
  • The DGCNN network may be trained on a plurality of sample three-dimensional digital models of dental jaws positioned at the reference position.
  • The method may further include registering, with the ICP algorithm, the model to be segmented against a plurality of template models located at the reference position, so as to position the model to be segmented at the reference position.
  • The DGCNN network classifies each sampling point based on the features of its N neighboring points, where 20<N<30 and N is a natural number.
  • The edge-convolution modules in the DGCNN network may be configured with three convolution layers.
  • The method may further include smoothing the classification result with a Graph-Cut algorithm that minimizes the weighted sum of a classification loss and a geometric loss, where the classification loss is the loss of smoothing the classification of the current point into another classification, the geometric loss is the loss of smoothing the classification of the current point into the classification of a neighboring point, and the geometric-loss weight between tooth and gingiva is smaller than the geometric-loss weight in other cases.
  • The features extracted for each sampling point may include: the coordinates of the sampling point, the normal vector of the facet corresponding to the sampling point, and the vectors from the sampling point to the vertices of the corresponding facet.
  • The method may further include dividing the sampling points evenly into a plurality of groups, the DGCNN network classifying the sampling points group by group.
  • FIG. 1 is a schematic flowchart of a method for segmenting a three-dimensional digital model of a dental jaw in an embodiment of the application;
  • FIG. 2 is a schematic block diagram of the DGCNN network in an embodiment of this application.
  • FIG. 3 is a schematic flowchart of a method for detecting segmentation results of a dental jaw three-dimensional digital model in an embodiment of the application.
  • one aspect of the present application provides a new dental jaw three-dimensional digital model segmentation method.
  • Fig. 1 is a schematic flowchart of a method 100 for segmenting a three-dimensional digital model of a dental jaw in an embodiment of the present application.
  • The three-dimensional digital model of the dental jaw to be segmented may be obtained by scanning the patient's oral cavity (for example, with a laser intraoral scanner). In another embodiment, it may also be obtained by scanning a physical model of the patient's dental jaw (for example, a plaster model).
  • The three-dimensional digital model of the dental jaw to be segmented is composed of triangular facets; a high-precision model includes a large number of them, for example more than 100,000 or even more than 150,000.
  • The three-dimensional digital model of the dental jaw to be segmented is positioned at a reference position.
  • The model is positioned at a predefined reference position, which includes both orientation and location.
  • A plurality of representative reference three-dimensional digital models of dental jaws may be selected a priori, for example five reference models corresponding to a normal, short, long, wide and narrow dental arch, respectively. In light of the present disclosure, it can be understood that the choice of reference models is not limited to this example.
  • The reference models are located at the reference position, and the model to be segmented can be positioned there by registration against a reference model.
  • The Iterative Closest Point (ICP) algorithm may be used to register the model to be segmented against a reference model and thereby position it at the reference position.
  • The model to be segmented may be registered against each of the five reference models, and the position with the highest matching score is taken as the registered position. Next, the average center (centroid) of a large number of previously registered models is computed and taken as the central location. Finally, the center of the registered model is translated to that central location, at which point the model is considered to be positioned at the reference position.
  • The three-dimensional digital model of the dental jaw to be segmented is converted into a point cloud.
  • Since the ICP algorithm in the embodiment above already converts the model into a point cloud when registering it against the reference models, no further point-cloud conversion is required.
  • If the model was not converted into a point cloud while being positioned at the reference position, the conversion is performed at this point.
  • Feature extraction is performed on the points of the point cloud. For each point, 15 features may be extracted: the coordinates of the facet center (each point of the cloud may be the center point of its corresponding facet; 3 features: x, y, z), the facet normal vector (3 features), and the vectors from the facet center to its three vertices (9 features).
  • The trained dynamic graph convolutional neural network is used to segment the point cloud based on the extracted features.
  • A Dynamic Graph Convolutional Neural Network (DGCNN) may be used to classify (segment) the point cloud.
  • 33 labels may be set, representing the 32 teeth and the gingiva, for classification (segmentation).
  • the point cloud may be sampled, and the trained DGCNN network may be used to classify only the sampled points.
  • the sampling may be uniform sampling.
  • the number of sampling points can be set according to the capabilities of the computing system, for example, 30,000 sampling points are taken.
  • the sampling points may be evenly grouped, and then each group of sampling points may be classified separately by using the trained DGCNN network.
  • the number of groups can be set according to the capabilities of the computing system, for example, divided into 3 groups.
  • When the DGCNN network classifies a point, it considers the points adjacent to it (that is, it computes the feature relationships between the point to be classified and its neighbors and takes these relationships into account during classification).
  • The inventor of the present application found through extensive experiments that, stepping in increments of 5, classification accuracy increases with the number of neighboring points considered up to 25; beyond that, further increases bring no significant improvement. Preferably, therefore, the number of neighboring points considered can be chosen in the range 20-30, and more preferably in the range 25-30.
  • For each sampling point, the DGCNN network outputs its probability distribution over the 33 classes (the 32 teeth and the gingiva), and the class with the highest probability is taken as the classification result for that point.
  • After the sampling points are classified, the K-Nearest Neighbour (KNN) algorithm may be used to classify the remaining points of the point cloud based on the classified sampling points.
  • The basic idea of the KNN algorithm is that if, among the k samples most similar to a given sample (here, the point to be classified) in feature space (i.e., its nearest neighbors there), the majority belong to a certain class, then the sample belongs to that class as well. In the KNN algorithm, the neighboring samples considered are samples that have already been classified.
  • The average of the probability distributions of the neighboring points of the point to be classified can be computed and taken as the probability distribution of that point, with the highest-probability class taken as its classification result.
  • k can be set to 5; that is, each non-sampled point is classified based on the classifications of the 5 sampling points nearest to it in feature space.
  • The inventor of the present application found through extensive experiments that the setting of k has little effect on the classification accuracy or the computational load of the network, as long as it lies within a reasonable range.
  • FIG. 2 is a schematic block diagram of the DGCNN network 200 in an embodiment of this application.
  • the input module 201 is used to input the extracted features.
  • The T-Net sub-network 203 automatically aligns the point cloud to reduce the spatial variation of the points.
  • The T-Net network predicts a transformation matrix of the feature space: it learns from the input data a transformation matrix whose dimension matches the feature space, then multiplies the original data by this matrix to transform the input feature space, so that every subsequent point is related to every point of the input data. Through such data fusion, the features contained in the original point cloud data are progressively abstracted.
  • The DGCNN network 200 in this embodiment includes three edge-convolution (Edge Convolution) modules 205, 207 and 209, each of which is three layers deep.
  • The inventor of this application found through extensive experiments that a depth of three layers preserves the prediction (classification) accuracy of the network; deepening the edge-convolution modules further yields only limited accuracy gains, is more likely to cause overfitting, and increases the computational load.
  • the DGCNN network 200 in this embodiment can segment the upper and lower jaw three-dimensional digital models as a whole, and the one-hot encoded category module 213 is used to distinguish whether a point belongs to the upper jaw or the lower jaw.
  • the DGCNN network 200 in this embodiment further includes three two-dimensional convolution modules 211, 215, and 217.
  • the output module 219 outputs the probability distribution of all points in 33 categories.
  • the segmentation result can be smoothed.
  • the Graph-Cut algorithm may be used to smooth the segmentation results based on the geometric relationship between the facets of the dental jaw three-dimensional digital model.
  • Two losses can be set: a classification loss and a geometric loss.
  • Through the Graph-Cut algorithm, the weighted sum of the classification loss and the geometric loss is minimized to smooth the classification result.
  • The classification loss can be defined as the loss of smoothing the predicted tooth number of the current facet (i.e., its classification result) into another tooth number; it equals the value of the current facet on the probability distribution output by the automatic segmentation system, so the higher that probability, the greater the loss of smoothing it away.
  • The classification loss can be computed for all possibilities, i.e., the loss of smoothing the current facet into each of the other 32 classes.
  • The geometric loss can be defined as the loss of smoothing the predicted tooth number of the current facet into that of a neighboring facet; it equals the product of the distance between the centers of the two facets and their dihedral angle.
  • The Graph-Cut algorithm is then used to minimize the weighted sum of the labeling (classification) loss and the geometric loss to achieve smoothness.
  • A non-negative constant λ can be set as the weight of the geometric loss to balance the influence of the labeling loss and the geometric loss on the total loss. Since the boundary between teeth and gingiva is less distinct, the segmentation result is trusted more there; the geometric-loss weight between gingiva and tooth can therefore be set smaller than in other cases. That is, when the current point and a neighboring point are classified one as a tooth and the other as gingiva, smoothing the current point's classification into the neighbor's classification carries a smaller geometric-loss weight than in other cases.
  • The inventor of the present application found through extensive experiments that setting the gingiva-tooth geometric-loss weight λ to 50 and the tooth-tooth geometric-loss weight λ to 250 gives good optimization results.
  • Minimizing the loss can be expressed by the following expression (1): $\min \sum_{i \in F} \xi_U(p_i, l_i) + \lambda \sum_{i, j \in F} \xi_P(l_i, l_j)$, where the first term is the classification loss and the second term is the geometric loss, F denotes the facet set of the three-dimensional digital model of the dental jaw, i denotes the i-th facet, $l_i$ denotes the classification of the i-th facet with corresponding probability $p_i$, $\theta_{ij}$ denotes the dihedral angle between facet i and facet j, and $\varphi_{ij}$ denotes the distance between the centers of facet i and facet j.
  • The industry lacks a method for assessing the credibility of an artificial neural network's segmentation of a three-dimensional digital model of a dental jaw.
  • After extensive research, the inventor of this application developed a perturbation-based method for evaluating the credibility of a segmentation result.
  • The basic idea of this method is to test whether the segmentation method is sensitive to perturbations of the position of the current three-dimensional digital model of the dental jaw; if it is, the current segmentation result is considered not credible.
  • FIG. 3 is a method 300 for detecting the credibility of the segmentation result of the artificial neural network on the dental jaw three-dimensional digital model in an embodiment of the application.
  • The reference-position segmentation result may be the result obtained by segmenting, with the segmentation method of the present application, the three-dimensional digital model of the dental jaw positioned at the reference position.
  • The three-dimensional digital model of the dental jaw positioned at the reference position can be rotated by plus and minus 5 degrees and plus and minus 10 degrees about the x, y and z axes, respectively, to obtain 12 perturbations.
  • The manner of perturbation and the number of perturbed positions can be determined according to the specific circumstances and are not limited to the above examples.
  • The manner of perturbation can include translation, rotation, a combination of the two, or perturbation along or about any axis or direction.
  • For each tooth number X, let A be the number of facets labeled X in the reference-position result, B the number labeled X in the first perturbed-position result, and C the number labeled X in both. The minimum of C/A and C/B can be taken as the similarity of the tooth-number-X prediction (labeling) between the reference position and the first perturbed position; if the denominator of C/A or C/B is 0, that fraction can be assigned the value 1.
  • For each tooth number, the first quartile (Q1) of the 12 similarities (one per perturbed position) is taken as the segmentation method's confidence for that tooth number.
  • The minimum of the confidences over all tooth numbers is taken as the representative confidence of the segmentation method's result for the three-dimensional digital model of the dental jaw located at the reference position.
  • a threshold may be set. If the representative confidence of the segmentation result is greater than the threshold, the segmentation result is considered credible, otherwise the segmentation result is considered unreliable.
  • the threshold may be set to 0.85.
  • If another statistic (for example, the median or the mean of the 12 similarities) is used instead of the first quartile, the threshold may need to be adjusted accordingly.
  • Although the segmentation method in the embodiments of the present application is a point-based segmentation method, in light of the present disclosure it can be understood that the detection method of the present application is equally applicable to facet-based segmentation methods.
  • The detection method of this application is applicable to any neural-network segmentation method for three-dimensional digital dental models that trains and segments based on positioning at a reference position.
  • When a segmentation result is judged not credible, there are two options: remind the user that human intervention is needed for inspection and/or adjustment; or attempt to select a credible result from the multiple perturbed-position segmentation results as the final output and, if no credible result can be found among them, notify the user that human intervention is needed for inspection and/or adjustment.
  • The 12 perturbed-position segmentation results can be traversed to find the one with the highest confidence, which is output as the final segmentation result. Even if the segmentation result still needs manual repair, the repair workload is thereby reduced.
  • the various diagrams may show exemplary architectures or other configurations of the disclosed methods and systems, which are helpful in understanding the features and functions that can be included in the disclosed methods and systems.
  • the claimed content is not limited to the exemplary architecture or configuration shown, and the desired features can be implemented with various alternative architectures and configurations.
  • For the flowcharts, functional descriptions and method claims, the order of the blocks given here should not be construed as limiting the embodiments to performing the functions in the same order, unless the context clearly indicates otherwise.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Quality & Reliability (AREA)
  • Probability & Statistics with Applications (AREA)
  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)
  • Biodiversity & Conservation Biology (AREA)

Abstract

A method for segmenting a three-dimensional digital model of a dental jaw, comprising: obtaining a three-dimensional digital model of a dental jaw to be segmented; converting the model into a point cloud; sampling the point cloud to obtain sampling points; extracting features from the sampling points; classifying the sampling points with a trained DGCNN network based on the extracted features; and classifying the other points in the point cloud with a KNN algorithm based on the classified sampling points, wherein classifying a point means assigning the facet of the model to be segmented represented by that point to a particular tooth or to the gingiva.

Description

Method for Segmenting a Three-Dimensional Digital Model of a Dental Jaw
Technical Field
The present application relates generally to methods for segmenting a three-dimensional digital model of a dental jaw, and in particular to methods that use a deep-learning artificial neural network to segment such a model.
Background
Dental treatment today increasingly relies on computer technology. In many cases, a scanned three-dimensional digital model of the dental jaw, comprising the dentition and at least part of the gingiva, must be segmented so that the crowns are separated from the gingiva and the individual crowns are separated from one another.
Existing semi-automatic segmentation methods for three-dimensional digital dental models usually include the following steps: first, an algorithm automatically strips the gingiva from the model; then, feature points are manually annotated on each tooth; finally, an algorithm automatically separates the teeth on that basis. In many cases such methods do not cut the gingiva cleanly enough and require later human intervention, which makes them inefficient.
Because three-dimensional digital dental models vary greatly between individuals, existing automatic algorithms can rarely achieve segmentation results meeting production standards without manual correction. Moreover, since a high-precision intraoral-scan model contains a very large number of facets, existing automatic algorithms must perform gingiva stripping, simplification, feature computation, segmentation, mapping, optimization and other operations before a final result is obtained; this demands substantial computing power and, under comparable conditions, takes longer than semi-automatic methods.
In view of the above, it is necessary to provide a new method for segmenting a three-dimensional digital model of a dental jaw.
Summary
One aspect of the present application provides a method for segmenting a three-dimensional digital model of a dental jaw, including: obtaining a three-dimensional digital model of the dental jaw to be segmented; converting the model into a point cloud; sampling the point cloud to obtain sampling points; extracting features from the sampling points; classifying the sampling points with a trained DGCNN network based on the extracted features; and classifying the other points in the point cloud with a KNN algorithm based on the classified sampling points, where classifying a point means assigning the facet of the model to be segmented that the point represents to a particular tooth or to the gingiva.
In some embodiments, the sampling may be uniform sampling.
In some embodiments, the method may further include positioning the model to be segmented at a reference position, the DGCNN network classifying the sampling points based on the model positioned at the reference position.
In some embodiments, the DGCNN network may be trained on a plurality of sample models positioned at the reference position.
In some embodiments, the method may further include registering, with the ICP algorithm, the model to be segmented against a plurality of template models located at the reference position, so as to position the model to be segmented at the reference position.
In some embodiments, the DGCNN network classifies each sampling point based on the features of its N neighboring points, where 20<N<30 and N is a natural number.
In some embodiments, the edge-convolution modules in the DGCNN network may be configured with three convolution layers.
In some embodiments, the method may further include smoothing the classification result with a Graph-Cut algorithm that minimizes the weighted sum of a classification loss and a geometric loss, where the classification loss is the loss of smoothing the classification of the current point into another classification, the geometric loss is the loss of smoothing the classification of the current point into the classification of a neighboring point, and the geometric-loss weight between tooth and gingiva is smaller than the geometric-loss weight in other cases.
In some embodiments, the features extracted for each sampling point may include: the coordinates of the sampling point, the normal vector of the facet corresponding to the sampling point, and the vectors from the sampling point to the vertices of the corresponding facet.
In some embodiments, the method may further include dividing the sampling points evenly into a plurality of groups, the DGCNN network classifying the sampling points group by group.
Brief Description of the Drawings
The above and other features of the present application are further described below with reference to the accompanying drawings and their detailed description. It should be understood that the drawings show only several exemplary embodiments of the present application and are therefore not to be regarded as limiting its scope. Unless otherwise indicated, the drawings are not necessarily to scale, and like reference numerals denote like components.
FIG. 1 is a schematic flowchart of a method for segmenting a three-dimensional digital model of a dental jaw in one embodiment of the present application;
FIG. 2 is a schematic block diagram of the DGCNN network in one embodiment of the present application; and
FIG. 3 is a schematic flowchart of a method for detecting the segmentation result of a three-dimensional digital model of a dental jaw in one embodiment of the present application.
Detailed Description
The following detailed description refers to the accompanying drawings, which form part of this specification. The exemplary embodiments mentioned in the specification and drawings are for illustration only and are not intended to limit the scope of the present application. In light of the present disclosure, those skilled in the art will understand that many other embodiments may be adopted and that various changes may be made to the described embodiments without departing from the spirit and scope of the present application. It should be understood that the aspects of the present application described and illustrated herein may be arranged, substituted, combined, separated and designed in many different configurations, all of which fall within the scope of the present application.
To improve the accuracy and efficiency of segmenting three-dimensional digital models of dental jaws while reducing the degree of human involvement, one aspect of the present application provides a new segmentation method.
Referring to FIG. 1, it is a schematic flowchart of a segmentation method 100 for a three-dimensional digital model of a dental jaw in one embodiment of the present application.
In 101, the three-dimensional digital model of the dental jaw to be segmented is obtained.
In one embodiment, the model to be segmented may be obtained by scanning the patient's oral cavity (for example, with a laser intraoral scanner). In another embodiment, it may also be obtained by scanning a physical model of the patient's dental jaw (for example, a plaster model).
In one embodiment, the model to be segmented is composed of triangular facets; a high-precision model includes a large number of them, for example more than 100,000 or even more than 150,000.
In 103, the three-dimensional digital model of the dental jaw to be segmented is positioned at a reference position.
To ensure the robustness of the deep-learning artificial neural network used to segment the model, the model is positioned at a predefined reference position, which includes both orientation and location, when the network is trained as well as when it is used for segmentation.
In one embodiment, a plurality of representative reference three-dimensional digital models of dental jaws may be selected a priori for the upper and lower jaws respectively, for example five reference models corresponding to a normal, short, long, wide and narrow dental arch. In light of the present disclosure, it can be understood that the choice of reference models is not limited to this example; for instance, three reference models per jaw (ovoid, V-shaped and U-shaped) may be selected based on "Analysis of Dimensions and Shapes of Maxillary and Mandibular Dental Arch in Korean Young Adults" by Su-Jung Park, Richard Leesungbok, Jae-Won Song, Se Hun Chang, Suk-Won Lee and Su-Jin Ahn, The Journal of Advanced Prosthodontics, 9(5):321-327, 2017.
In one embodiment, the reference models are located at the reference position, and the model to be segmented can be positioned there by registration against a reference model.
In one embodiment, the Iterative Closest Point (ICP) algorithm may be used to register the model to be segmented against a reference model and thereby position it at the reference position.
In one embodiment, the model to be segmented may be registered against each of the five reference models, and the position with the highest matching score is taken as the registered position. Next, the average center (centroid) of a large number of previously registered models is computed and taken as the central location. Finally, the center of the registered model is translated to that central location, at which point the model to be segmented is considered to be positioned at the reference position. A minimal registration sketch follows.
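The following is a minimal point-to-point ICP sketch in Python (the function names, the Kabsch-based transform estimation and the convergence criterion are illustrative assumptions, not the implementation of the present application):

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(source, reference, iters=50, tol=1e-6):
    """Register `source` (N x 3) to `reference` (M x 3); returns moved points and RMS error."""
    tree = cKDTree(reference)
    moved, prev_err = source.copy(), np.inf
    for _ in range(iters):
        dists, idx = tree.query(moved)           # closest reference point per source point
        err = np.sqrt((dists ** 2).mean())
        if abs(prev_err - err) < tol:
            break
        prev_err = err
        R, t = best_rigid_transform(moved, reference[idx])
        moved = moved @ R.T + t
    return moved, err
```

Registering the model against each of the five reference models and keeping the result with the lowest RMS error then corresponds to selecting the position with the highest matching score.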
In 105, the three-dimensional digital model of the dental jaw to be segmented is converted into a point cloud.
Since the ICP algorithm in the embodiment above already converts the model into a point cloud when registering it against the reference models, no further point-cloud conversion is needed. If, however, the model was not converted into a point cloud while being positioned at the reference position, the conversion is performed at this point.
In 107, features are extracted from the points of the point cloud.
The inventor of the present application found through extensive experiments that extracting the following features for each point, and segmenting on that basis, yields high accuracy: the coordinates of the facet center (each point of the point cloud may be the center point of its corresponding facet; 3 features: x, y, z), the facet normal vector (3 features), and the vectors from the facet center to its three vertices (9 features), for a total of 15 features. A sketch of this feature extraction follows.
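The following is a minimal sketch of the 15-feature extraction for a triangle mesh (the array layout and the function name are assumptions for illustration):

```python
import numpy as np

def facet_features(vertices, faces):
    """vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices.
    Returns an (F, 15) array: center (3) + unit normal (3) + center-to-vertex vectors (9)."""
    tri = vertices[faces]                           # (F, 3, 3) triangle corners
    center = tri.mean(axis=1)                       # facet center = the cloud point
    normal = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    normal /= np.linalg.norm(normal, axis=1, keepdims=True)
    to_vertices = (tri - center[:, None, :]).reshape(len(faces), 9)
    return np.hstack([center, normal, to_vertices])
```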
In 109, the trained dynamic graph convolutional neural network segments the point cloud based on the extracted features.
In one embodiment, a Dynamic Graph Convolutional Neural Network (DGCNN) may be used to classify (segment) the point cloud.
In one embodiment, 33 labels may be set, representing the 32 teeth and the gingiva, for classification (segmentation).
In one embodiment, to improve computational efficiency, the point cloud may be sampled and the trained DGCNN network used to classify only the sampled points.
In one embodiment, the sampling may be uniform sampling.
In one embodiment, the number of sampling points may be set according to the capability of the computing system, for example 30,000 sampling points.
In one embodiment, to further improve computational efficiency, the sampling points may be divided evenly into groups and each group classified separately with the trained DGCNN network. In one embodiment, the number of groups may be set according to the capability of the computing system, for example 3 groups. A minimal sampling-and-grouping sketch follows.
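A minimal sketch of uniform sampling and grouping, using the example values above (the random-number-generator choice is an assumption):

```python
import numpy as np

def sample_and_group(features, n_samples=30_000, n_groups=3, seed=0):
    """features: (N, 15) per-point feature array. Returns the indices of the
    sampled points and the sampled features split into equal groups."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(features), size=n_samples, replace=False)  # uniform sampling
    groups = np.array_split(features[idx], n_groups)                # e.g. 3 groups of 10,000
    return idx, groups
```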
When the DGCNN network classifies a point, it considers the points adjacent to it (that is, it computes the feature relationships between the point to be classified and its neighbors and takes these relationships into account during classification). The inventor of the present application found through extensive experiments that, stepping in increments of 5, classification accuracy increases with the number of neighboring points considered up to 25; beyond that, further increases in the number of neighboring points bring no significant improvement. Preferably, therefore, the number of neighboring points considered can be chosen in the range 20-30, and more preferably in the range 25-30.
For each sampling point, the DGCNN network outputs its probability distribution over the 33 classes (the 32 teeth and the gingiva), and the class with the highest probability is taken as the classification result for that point.
After all sampling points have been classified, the other points of the point cloud still need to be classified. In one embodiment, the K-Nearest Neighbour (KNN) algorithm may be used to classify the other points of the point cloud based on the classification results of the sampling points.
The basic idea of the KNN algorithm is that if, among the k samples most similar to a given sample (here, the point to be classified) in feature space (that is, its nearest neighbors in feature space), the majority belong to a certain class, then the sample belongs to that class as well. In the KNN algorithm, the neighboring samples considered are samples that have already been classified.
In one embodiment, the average of the probability distributions of the neighboring points of the point to be classified may be computed and taken as the probability distribution of that point, with the highest-probability class taken as its classification result.
In one embodiment, k may be set to 5; that is, each non-sampled point is classified based on the classifications of the 5 sampling points nearest to it in feature space.
The inventor of the present application found through extensive experiments that the setting of k has little effect on the classification accuracy or the computational load of the network, as long as it lies within a reasonable range. A sketch of this probability-averaging step follows.
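A minimal sketch of the KNN step, in which each remaining point inherits the mean probability distribution of its 5 nearest classified sampling points (scikit-learn is an assumed implementation choice, not one prescribed by the application):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_propagate(sample_feats, sample_probs, other_feats, k=5):
    """sample_feats: (S, 15) features of the classified sampling points;
    sample_probs: (S, 33) DGCNN output distributions; other_feats: (R, 15).
    Returns (R,) class labels for the remaining points."""
    nn = NearestNeighbors(n_neighbors=k).fit(sample_feats)
    _, idx = nn.kneighbors(other_feats)             # (R, k) nearest sampling points
    probs = sample_probs[idx].mean(axis=1)          # average the k distributions
    return probs.argmax(axis=1)                     # highest-probability class
```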
请参图2,为本申请一个实施例中DGCNN网络200的示意性模块图。
输入模块201用于输入提取的特征,对于上述的实施例,可以输入提取得到的一组1万个采样点的特征,其中,每个点15个特征。
T-Net子网络203用于实现点云的自动对齐,以减小点的空间变化。T-Net网络是一个预测特征空间变换矩阵的网络,它从输入数据中学习出与特征空间维度一致的变换矩阵,然后用这个变换矩阵与原始数据相乘,实现对输入特征空间的变换操作,使得后续的每一个点都与输入数据中的每一个点都有关系。通过这样的数据融合,实现对原始点云数据包含特征的逐级抽象。
本实施例中的DGCNN网络200包括3个边卷积模块(Edge Convolution)205、207以及209,其中,每个边卷积模块有三层。本申请的发明人经过大量实验发现,为边卷积模块设置三层深度可以保证网络的预测(分类)准确度,若进一步加深边卷积模块的深度,网络的预测准确度提升有限,而且可能更容易导致过拟合,以及加重计算量。
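The following is a minimal PyTorch sketch of one EdgeConv layer in the generic formulation from the DGCNN literature (it illustrates the operation, not the exact network of this embodiment; the k value of 25 follows the experiments above):

```python
import torch

def knn_indices(x, k):
    """x: (B, N, C) point features. Returns (B, N, k) indices of the k nearest points."""
    dist = torch.cdist(x, x)                                   # pairwise feature distances
    return dist.topk(k + 1, largest=False).indices[..., 1:]    # drop each point itself

class EdgeConv(torch.nn.Module):
    def __init__(self, in_ch, out_ch, k=25):
        super().__init__()
        self.k = k
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(2 * in_ch, out_ch), torch.nn.ReLU())

    def forward(self, x):                                      # x: (B, N, C)
        idx = knn_indices(x, self.k)                           # graph rebuilt per layer ("dynamic")
        nbrs = torch.gather(
            x.unsqueeze(1).expand(-1, x.size(1), -1, -1), 2,
            idx.unsqueeze(-1).expand(-1, -1, -1, x.size(-1)))  # (B, N, k, C)
        center = x.unsqueeze(2).expand_as(nbrs)
        edge = torch.cat([center, nbrs - center], dim=-1)      # edge feature (x_i, x_j - x_i)
        return self.mlp(edge).max(dim=2).values                # max-aggregate over neighbors
```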
The DGCNN network 200 of this embodiment can segment the upper- and lower-jaw three-dimensional digital models as a whole; the one-hot encoded category module 213 is used to distinguish whether a point belongs to the upper or the lower jaw.
The DGCNN network 200 of this embodiment further includes three two-dimensional convolution modules 211, 215 and 217.
The symbol shown in FIG. 2 (rendered as an image in the original publication) denotes feature concatenation.
The output module 219 outputs the probability distributions of all points over the 33 classes.
In 111, the segmentation result is optimized.
To eliminate possible small local irregularities in the segmentation result (for example, jagged boundaries, or a facet classified differently from the facets around it), the segmentation result can be smoothed.
In one embodiment, the Graph-Cut algorithm may be used to smooth the segmentation result based on the geometric relationships between the facets of the three-dimensional digital model of the dental jaw.
In one embodiment, two losses may be defined, a classification loss and a geometric loss; the Graph-Cut algorithm minimizes their weighted sum to smooth the classification result.
The classification loss may be defined as the loss of smoothing the predicted tooth number of the current facet (i.e., its classification result) into another tooth number; it equals the value of the current facet on the probability distribution output by the automatic segmentation system, so the higher that probability, the greater the loss of smoothing it away. When computing the classification loss, all possibilities can be considered, i.e., the loss of smoothing the current facet into each of the other 32 classes.
The geometric loss may be defined as the loss of smoothing the predicted tooth number of the current facet into that of a neighboring facet; it equals the product of the distance between the centers of the two facets and their dihedral angle.
The Graph-Cut algorithm is then used to minimize the weighted sum of the labeling (classification) loss and the geometric loss to achieve smoothness.
In one embodiment, while minimizing the loss, a non-negative constant λ may be set as the weight of the geometric loss to balance the influence of the labeling loss and the geometric loss on the total loss. Since the boundary between teeth and gingiva is less distinct, the segmentation result is trusted more there; the geometric-loss weight between gingiva and tooth can therefore be set smaller than in other cases. That is, when the current point and a neighboring point are classified one as a tooth and the other as gingiva, smoothing the current point's classification into the neighbor's classification carries a smaller geometric-loss weight than in other cases. The inventor of the present application found through extensive experiments that setting the gingiva-tooth geometric-loss weight λ to 50 and the tooth-tooth geometric-loss weight λ to 250 gives good optimization results.
In one embodiment, the loss minimization can be expressed by the following expression (1):

$$\min \sum_{i \in F} \xi_U(p_i, l_i) + \lambda \sum_{i, j \in F} \xi_P(l_i, l_j) \qquad \text{Expression (1)}$$

where the first term is the classification loss and the second term is the geometric loss, F denotes the facet set of the three-dimensional digital model of the dental jaw, i denotes the i-th facet, and $l_i$ denotes the classification of the i-th facet with corresponding probability $p_i$, where

$$\xi_U(p_i, l_i) = -\log(p_i(l_i)) \qquad \text{Expression (2)}$$

$$\xi_P(l_i, l_j) = \theta_{ij}\,\varphi_{ij} \ \text{if}\ l_i \neq l_j,\ \text{and}\ 0\ \text{otherwise} \qquad \text{Expression (3)}$$

where $\theta_{ij}$ denotes the dihedral angle between facet i and facet j, and $\varphi_{ij}$ denotes the distance between the centers of facet i and facet j (the forms of expressions (1) and (3) follow from these definitions; the original equation images are not reproduced here). A sketch of the two cost terms follows.
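A minimal sketch of the two cost terms (the gingiva label index and the function names are assumptions; the actual minimization would be handed to a graph-cut or alpha-expansion solver, which is not shown):

```python
import numpy as np

def classification_cost(probs):
    """probs: (F, 33) network output per facet.
    cost[i, l] = -log p_i(l), the loss of assigning facet i the label l."""
    return -np.log(np.clip(probs, 1e-9, 1.0))

def geometric_cost(theta_ij, phi_ij, label_i, label_j,
                   gingiva=32, w_tooth_gingiva=50.0, w_other=250.0):
    """Pairwise cost for adjacent facets i, j with dihedral angle theta_ij and
    center distance phi_ij; the weights follow the lambda values reported above.
    Treating label index 32 as the gingiva is an assumed convention."""
    if label_i == label_j:
        return 0.0
    w = w_tooth_gingiva if gingiva in (label_i, label_j) else w_other
    return w * theta_ij * phi_ij
```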
In light of the present disclosure, it can be understood that although the embodiments above segment the upper- and lower-jaw three-dimensional digital models as a whole, the method of the present application is equally applicable to segmenting the upper- and lower-jaw models separately (in which case the number of labels may be changed to 17, representing 16 teeth and the gingiva).
Although the segmentation method described above is highly accurate, one hundred percent accuracy still cannot be guaranteed for some extreme cases, so a method for assessing the credibility of the classification result is needed.
At present, the industry lacks a method for assessing the credibility of an artificial neural network's segmentation of a three-dimensional digital model of a dental jaw. After extensive research, the inventor of the present application developed a perturbation-based method for evaluating that credibility. Its basic idea is to test whether the segmentation method is sensitive to perturbations of the position of the current model; if it is, the current segmentation result is considered not credible.
Referring to FIG. 3, it is a method 300 for assessing the credibility of an artificial neural network's segmentation of a three-dimensional digital model of a dental jaw in one embodiment of the present application.
As described above, in the segmentation method of the present application the model must be positioned at the reference position before being segmented by the DGCNN network, because the network is also trained on models positioned there; this ensures segmentation accuracy.
In 301, the reference-position segmentation result is obtained.
In one embodiment, the reference-position segmentation result may be the result of segmenting, with the segmentation method of the present application, the model positioned at the reference position.
In 303, a plurality of perturbed-position segmentation results are produced.
In one embodiment, the model positioned at the reference position may be rotated by plus and minus 5 degrees and plus and minus 10 degrees about the x, y and z axes respectively, yielding 12 perturbations.
The same segmentation method is then applied to the model at each of the 12 perturbed positions, yielding 12 corresponding perturbed-position segmentation results.
In light of the present disclosure, it can be understood that the manner of perturbation and the number of perturbed positions may be chosen according to the circumstances and are not limited to the specific example above; for instance, the perturbations may include translations, rotations or combinations of the two, along or about any axis or direction. A sketch of the rotational perturbations above follows.
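A minimal sketch generating the 12 rotational perturbations (plus and minus 5 and 10 degrees about x, y and z; scipy's Rotation is an assumed implementation choice):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def perturbed_clouds(points):
    """points: (N, 3) model point cloud at the reference position.
    Returns a list of 12 rotated copies, one per perturbed position."""
    clouds = []
    for axis in "xyz":
        for deg in (5, -5, 10, -10):
            R = Rotation.from_euler(axis, deg, degrees=True)
            clouds.append(R.apply(points))
    return clouds
```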
In 305, whether the reference-position segmentation result is credible is judged based on the similarity between the reference-position segmentation result and the plurality of perturbed-position segmentation results.
In one embodiment, for each tooth number X, let A be the number of facets labeled X in the reference-position result, B the number of facets labeled X in the first perturbed-position result, and C the number of facets labeled X in both (C<=A, C<=B). The minimum of C/A and C/B may be taken as the similarity of the tooth-number-X prediction (labeling) between the reference position and the first perturbed position. If the denominator of C/A or C/B is 0, that fraction may be assigned the value 1.
For each tooth number X, the operation above is repeated 12 times (matching the number of perturbed-position segmentation results), yielding 12 similarities. For each tooth number X, the first quartile (Q1) of the 12 similarities is taken as the segmentation method's confidence for tooth number X. The minimum of the confidences over all tooth numbers is taken as the representative confidence of the segmentation method's result for the model at the reference position. A sketch of this computation follows.
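A minimal sketch of the per-tooth similarity and the representative confidence (per-facet label arrays and tooth indices 0-31 are assumed conventions):

```python
import numpy as np

def tooth_similarity(ref_labels, pert_labels, tooth):
    """min(C/A, C/B) for one tooth number; 1.0 if a denominator is 0."""
    a = np.sum(ref_labels == tooth)
    b = np.sum(pert_labels == tooth)
    c = np.sum((ref_labels == tooth) & (pert_labels == tooth))
    if a == 0 or b == 0:
        return 1.0
    return min(c / a, c / b)

def representative_confidence(ref_labels, pert_label_sets, teeth=range(32)):
    """Q1 of the 12 similarities per tooth, then the minimum over all teeth."""
    per_tooth = [
        np.percentile([tooth_similarity(ref_labels, p, t) for p in pert_label_sets], 25)
        for t in teeth]
    return min(per_tooth)
```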
In one embodiment, a threshold may be set; if the representative confidence of a segmentation result is greater than the threshold, the result is considered credible, otherwise it is considered not credible.
In one embodiment, based on the current parameter settings, the threshold may be set to 0.85.
In light of the present disclosure, it can be understood that, for the confidence of each tooth number, statistics other than the first quartile of the 12 similarities may be substituted, for example the second quartile (median) or the mean; the threshold may then need to be adjusted accordingly.
Although the segmentation method for three-dimensional digital tooth models in the embodiments of the present application is a point-based method, in light of the present disclosure it can be understood that the detection method of the present application is equally applicable to facet-based segmentation methods.
In light of the present disclosure, beyond the segmentation method of the present application, the detection method of the present application is applicable to any neural-network segmentation method for three-dimensional digital dental models that trains and segments based on positioning at a reference position.
When a segmentation result is judged not credible, there are two options: first, remind the user that human intervention is needed for inspection and/or adjustment; second, attempt to select a credible result from the plurality of perturbed-position segmentation results as the final output and, if no credible result can be found among them, then notify the user that human intervention is needed for inspection and/or adjustment.
In one embodiment, the 12 perturbed-position segmentation results may be traversed to find the one with the highest confidence, which is output as the final segmentation result. Even if the segmentation result still needs manual repair, the repair workload is thereby reduced.
Although several aspects and embodiments of the present application are disclosed herein, other aspects and embodiments will be apparent to those skilled in the art in light of this disclosure. The aspects and embodiments disclosed herein are for illustration only and are not intended to be limiting; the scope and spirit of the present application are defined solely by the appended claims.
Likewise, the various diagrams may show exemplary architectures or other configurations of the disclosed methods and systems to aid understanding of the features and functions that can be included in them. The claimed subject matter is not limited to the exemplary architectures or configurations shown; the desired features can be implemented with various alternative architectures and configurations. In addition, for the flowcharts, functional descriptions and method claims, the order of the blocks given here should not be construed as limiting the embodiments to performing the recited functions in the same order, unless the context clearly indicates otherwise.
Unless expressly stated otherwise, the terms and phrases used herein, and variants thereof, are to be construed as open-ended rather than limiting. In some instances, the presence of expansive words and phrases such as "one or more", "at least" or "but not limited to" shall not be read to mean that the narrower case is intended or required where such expansive terms are absent.

Claims (10)

  1. A method for segmenting a three-dimensional digital model of a dental jaw, comprising:
    obtaining a three-dimensional digital model of a dental jaw to be segmented;
    converting the three-dimensional digital model of the dental jaw to be segmented into a point cloud;
    sampling the point cloud to obtain sampling points;
    extracting features from the sampling points;
    classifying the sampling points with a trained DGCNN network based on the extracted features; and
    classifying the other points in the point cloud with a KNN algorithm based on the classified sampling points,
    wherein classifying a point is to assign the facet of the three-dimensional digital model of the dental jaw to be segmented represented by that point to a particular tooth or to the gingiva.
  2. The method for segmenting a three-dimensional digital model of a dental jaw of claim 1, wherein the sampling is uniform sampling.
  3. The method for segmenting a three-dimensional digital model of a dental jaw of claim 1, further comprising: positioning the three-dimensional digital model of the dental jaw to be segmented at a reference position, wherein the classification of the sampling points by the DGCNN network is based on the three-dimensional digital model of the dental jaw to be segmented positioned at the reference position.
  4. The method for segmenting a three-dimensional digital model of a dental jaw of claim 3, wherein the DGCNN network is trained on a plurality of sample three-dimensional digital models of dental jaws positioned at the reference position.
  5. The method for segmenting a three-dimensional digital model of a dental jaw of claim 3, further comprising: registering, with an ICP algorithm, the three-dimensional digital model of the dental jaw to be segmented against a plurality of template three-dimensional digital models of dental jaws located at the reference position, so as to position the three-dimensional digital model of the dental jaw to be segmented at the reference position.
  6. The method for segmenting a three-dimensional digital model of a dental jaw of claim 1, wherein the DGCNN network classifies each sampling point based on the features of its N neighboring points, where 20<N<30 and N is a natural number.
  7. The method for segmenting a three-dimensional digital model of a dental jaw of claim 1, wherein the edge-convolution modules in the DGCNN network are configured with three convolution layers.
  8. The method for segmenting a three-dimensional digital model of a dental jaw of claim 1, further comprising: smoothing the classification result with a Graph-Cut algorithm, wherein the Graph-Cut algorithm is based on minimizing the weighted sum of a classification loss and a geometric loss, the classification loss being the loss of smoothing the classification of the current point into another classification, the geometric loss being the loss of smoothing the classification of the current point into the classification of a neighboring point, and wherein the geometric-loss weight between tooth and gingiva is smaller than the geometric-loss weight in other cases.
  9. The method for segmenting a three-dimensional digital model of a dental jaw of claim 1, wherein the features extracted for each sampling point comprise: the coordinates of the sampling point, the normal vector of the facet corresponding to the sampling point, and the vectors from the sampling point to the vertices of the corresponding facet.
  10. The method for segmenting a three-dimensional digital model of a dental jaw of claim 1, further comprising: dividing the sampling points evenly into a plurality of groups, wherein the DGCNN network classifies the sampling points group by group.
PCT/CN2021/073234 2020-04-21 2021-01-22 Method for segmenting a three-dimensional digital model of a dental jaw WO2021212940A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/614,541 US11989934B2 (en) 2020-04-21 2021-01-22 Method for segmenting 3D digital model of jaw

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010318004.X 2020-04-21
CN202010318004.XA CN113538438A (zh) 2020-04-21 Method for segmenting a three-dimensional digital model of a dental jaw

Publications (1)

Publication Number Publication Date
WO2021212940A1 true WO2021212940A1 (zh) 2021-10-28

Family

ID=78093908

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/073234 WO2021212940A1 (zh) 2020-04-21 2021-01-22 牙颌三维数字模型的分割方法

Country Status (3)

Country Link
US (1) US11989934B2 (zh)
CN (1) CN113538438A (zh)
WO (1) WO2021212940A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116721115A * 2023-06-15 2023-09-08 小米汽车科技有限公司 Metallographic structure acquisition method and apparatus, storage medium, and chip

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115471663A * 2022-11-15 2022-12-13 上海领健信息技术有限公司 Three-stage tooth crown segmentation method, apparatus, terminal and medium based on deep learning
WO2024108341A1 * 2022-11-21 2024-05-30 深圳先进技术研究院 Automatic tooth arrangement method and apparatus based on point cloud understanding, device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170076443A1 (en) * 2015-09-11 2017-03-16 Carestream Health, Inc. Method and system for hybrid mesh segmentation
CN108986123A * 2017-06-01 2018-12-11 无锡时代天使医疗器械科技有限公司 Method for segmenting a three-dimensional digital model of a dental jaw
US20190026599A1 (en) * 2017-07-21 2019-01-24 Dental Monitoring Method for analyzing an image of a dental arch
CN109903396A * 2019-03-20 2019-06-18 洛阳中科信息产业研究院(中科院计算技术研究所洛阳分所) Automatic segmentation method for three-dimensional tooth models based on surface parameterization
CN110503652A * 2019-08-23 2019-11-26 北京大学口腔医学院 Method and apparatus for determining the relationship of a mandibular wisdom tooth to adjacent teeth and the mandibular canal, storage medium, and terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11903793B2 (en) * 2019-12-31 2024-02-20 Align Technology, Inc. Machine learning dental segmentation methods using sparse voxel representations


Also Published As

Publication number Publication date
US11989934B2 (en) 2024-05-21
CN113538438A (zh) 2021-10-22
US20220222818A1 (en) 2022-07-14

Similar Documents

Publication Publication Date Title
WO2021212940A1 (zh) Method for segmenting a three-dimensional digital model of a dental jaw
US11790643B2 (en) Deep learning for tooth detection and evaluation
WO2018218988A1 (zh) Method for segmenting a three-dimensional digital model of a dental jaw
KR20210104777A (ko) Automated semantic segmentation of non-Euclidean 3D data sets using deep learning
WO2021212941A1 (zh) Method for detecting the segmentation result of a three-dimensional digital model of a dental jaw
CN108388874B (zh) Automatic measurement method for prawn morphological parameters based on image recognition and cascade classifiers
CN111968146B (zh) Three-dimensional dental jaw mesh model segmentation method
CN110543906B (zh) Automatic skin type recognition method based on a Mask R-CNN model
CN109214353B (zh) Rapid face image detection and training method and apparatus based on a pruned model
CN112381178B (zh) Medical image classification method based on multi-loss feature learning
CN112989954B (zh) Deep-learning-based method and system for classifying three-dimensional tooth point cloud model data
CN113269791B (zh) Point cloud segmentation method based on edge determination and region growing
WO2022183852A1 (zh) Method for segmenting a three-dimensional digital model of a dental jaw
CN115223205A (zh) Deep-learning-based tooth position recognition method, medium and apparatus for three-dimensional intraoral-scan tooth separation models
CN115471663A (zh) Three-stage tooth crown segmentation method, apparatus, terminal and medium based on deep learning
CN117095145B (zh) Training method and terminal for a tooth mesh segmentation model
CN108242056B (zh) Segmentation method for three-dimensional tooth mesh data based on a harmonic field algorithm
CN117292171A (zh) Dental disease recognition method based on information fusion, computer device, and storage medium
CN115761125A (zh) Digital dental orthodontic method based on point cloud attention and inter-tooth collision loss
CN110738249B (zh) Aurora image clustering method based on a deep neural network
CN111259806B (zh) Face region recognition method, apparatus, and storage medium
Xiong et al. TFormer: 3D tooth segmentation in mesh scans with geometry guided transformer
CN112668668A (zh) Postoperative medical image evaluation method and apparatus, computer device, and storage medium
CN113139908A (zh) Three-dimensional dentition segmentation and labeling method
CN114463328B (zh) Automated orthodontic difficulty coefficient evaluation method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21793504

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21793504

Country of ref document: EP

Kind code of ref document: A1