WO2021212941A1 - Method for detecting segmentation result of dental jaw three-dimensional digital model - Google Patents
Method for detecting segmentation result of dental jaw three-dimensional digital model
- Publication number
- WO2021212941A1 (PCT/CN2021/073235)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- segmentation
- dimensional digital
- digital model
- segmentation result
- dental
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24147—Distances to closest patterns, e.g. nearest neighbour classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/776—Validation; Performance evaluation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30036—Dental; Teeth
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
Definitions
- the present application generally relates to a method for detecting segmentation results of a three-dimensional digital model of a dental jaw.
- An aspect of the present application provides a method for detecting segmentation results of a three-dimensional digital model of a dental jaw, which includes: obtaining a segmentation result of a reference position of a three-dimensional digital model of a dental jaw to be inspected, wherein the segmentation result is obtained by segmenting, with a first segmentation method, the three-dimensional digital model of the dental jaw positioned at the reference position; perturbing the three-dimensional digital model of the dental jaw from the reference position multiple times, and segmenting, with the first segmentation method, the three-dimensional digital model of the dental jaw at the multiple perturbation positions to obtain corresponding segmentation results of the multiple perturbation positions; and judging, based on the similarity between the segmentation result of the reference position and the segmentation results of the multiple perturbation positions, whether the segmentation result of the reference position is credible, wherein the segmentation of the dental jaw 3D digital model divides it by individual teeth and the gums, that is, it classifies the facets of the dental jaw 3D digital model by tooth number and gum.
- the first segmentation method may be a segmentation method based on a deep learning artificial neural network, and the deep learning artificial neural network is trained with a plurality of three-dimensional digital models of dental jaws positioned at the reference position.
- the deep learning artificial neural network may be a DGCNN network.
- the method for detecting segmentation results of the dental jaw three-dimensional digital model may further include: for each classification, calculating the similarity between the segmentation result of the reference position and the segmentation result of each perturbation position; for each classification, calculating the confidence of the segmentation result of the reference position based on the corresponding similarities; calculating the representative confidence of the segmentation result of the reference position based on the confidences of all classifications; and judging, based on the representative confidence and a preset threshold, whether the segmentation result of the reference position is credible.
- for each classification, the similarity between the segmentation result of the reference position and the segmentation result of each perturbation position is calculated based on the number of facets assigned to that classification in both.
- the global confidence (i.e., the representative confidence) may be the minimum value of the confidences of all classifications.
- the method for detecting segmentation results of the dental jaw three-dimensional digital model may further include: if the reference-position segmentation result is not credible, judging by the same method whether a credible segmentation result exists among the multiple perturbation-position segmentation results, and if there is one, using the credible perturbation-position segmentation result as the final segmentation result.
- FIG. 1 is a schematic flowchart of a method for segmenting a three-dimensional digital model of a dental jaw in an embodiment of the application;
- FIG. 2 is a schematic block diagram of the DGCNN network in an embodiment of this application.
- FIG. 3 is a schematic flowchart of a method for detecting segmentation results of a dental jaw three-dimensional digital model in an embodiment of the application.
- one aspect of the present application provides a new dental jaw three-dimensional digital model segmentation method.
- FIG. 1 is a schematic flowchart of a method 100 for segmenting a three-dimensional digital model of a dental jaw in an embodiment of the present application.
- the three-dimensional digital model of the dental jaw to be segmented may be obtained by scanning the patient's oral cavity (for example, with a laser intraoral scanning device). In another embodiment, it may also be obtained by scanning a physical model of the patient's dental jaw (for example, a plaster model).
- the three-dimensional digital model of the dental jaw to be segmented is composed of triangular facets; a high-precision model includes a large number of triangular facets, for example, more than 100,000, or even more than 150,000.
- the three-dimensional digital model of the dental jaw to be segmented is positioned to a reference position.
- the three-dimensional digital model of the dental jaw is positioned to a predefined reference position, which includes orientation and location.
- a plurality of representative reference dental jaw three-dimensional digital models may be selected a priori, for example, five reference models corresponding to the normal dental arch, short dental arch, long dental arch, wide dental arch, and narrow dental arch. In light of the present application, it can be understood that the selection of reference dental jaw three-dimensional digital models is not limited to the above examples.
- the reference dental jaw three-dimensional digital models are located at the reference position, and the three-dimensional digital model of the dental jaw to be segmented can be positioned to the reference position by registration with a reference model.
- the Iterative Closest Point (ICP) algorithm may be used to register the three-dimensional digital model of the dental jaw to be segmented with a reference model and thereby position it to the reference position.
- the three-dimensional digital model of the dental jaw to be segmented may be registered with each of the five reference models, and the position with the highest matching degree is selected as the registered position. Then, the average center (centroid) of a large number of previously registered dental jaw three-dimensional digital models is calculated and used as the center position. Finally, the center of the registered model to be segmented is translated to the center position, at which point the model is considered to have been positioned to the reference position.
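- as an illustrative sketch of this registration and centering step (assuming the Open3D library and point-cloud inputs; the function name, correspondence distance, and centroid argument are assumptions for illustration, not parameters disclosed by the present application):

```python
import open3d as o3d

def position_to_reference(model, references, centroid):
    # Register the jaw model against each reference model with ICP and keep the
    # registration with the highest matching degree (fitness).
    best = None
    for ref in references:
        result = o3d.pipelines.registration.registration_icp(
            model, ref, max_correspondence_distance=1.0,
            estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
        if best is None or result.fitness > best.fitness:
            best = result
    model.transform(best.transformation)
    # Translate the registered model so its center coincides with the average
    # centroid of previously registered models (the "center position").
    model.translate(centroid - model.get_center())
    return model
```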
- the three-dimensional digital model of the dental jaw to be segmented is converted into a point cloud.
- since the ICP algorithm is used in the above embodiment to register the three-dimensional digital model of the dental jaw to be segmented with the reference model, the model has already been converted into a point cloud, and no further point-cloud conversion is required.
- if the three-dimensional digital model of the dental jaw to be segmented is not converted into a point cloud while being positioned to the reference position, the point-cloud conversion needs to be performed at this stage.
- feature extraction is performed on the points in the point cloud: each point in the point cloud may correspond to the center of a facet, and 15 features are extracted per point, namely the facet center coordinates (x, y, z: 3 features), the facet normal vector (3 features), and the vectors from the facet center to its three vertices (9 features).
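- as a minimal sketch of this 15-feature extraction, assuming the model is given as NumPy arrays of vertices and triangular facets (names are illustrative):

```python
import numpy as np

def facet_features(vertices, faces):
    # vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices.
    tri = vertices[faces]                                     # (F, 3, 3) corner coordinates
    center = tri.mean(axis=1)                                 # facet centers: 3 features
    n = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    normal = n / np.linalg.norm(n, axis=1, keepdims=True)     # unit normals: 3 features
    to_vertices = (tri - center[:, None, :]).reshape(-1, 9)   # center-to-vertex vectors: 9 features
    return np.hstack([center, normal, to_vertices])           # (F, 15) feature matrix
```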
- the trained dynamic graph convolutional neural network is used to segment the point cloud based on the extracted features.
- a dynamic graph convolutional neural network (Dynamic Graph Convolutional Neural Network, DGCNN for short) may be used to classify (segment) the point cloud.
- 33 labels can be set, representing 32 teeth and gums, for classification (segmentation).
- the point cloud may be sampled, and the trained DGCNN network may be used to classify only the sampled points.
- the sampling may be uniform sampling.
- the number of sampling points can be set according to the capabilities of the computing system, for example, 30,000 sampling points are taken.
- the sampling points may be evenly grouped, and then each group of sampling points may be classified separately by using the trained DGCNN network.
- the number of groups can be set according to the capabilities of the computing system, for example, divided into 3 groups.
- when the DGCNN network classifies a point, it considers the neighboring points (that is, it computes the feature relationship between the point to be classified and its neighbors and takes this relationship into account during classification).
- the inventors of the present application found through extensive experiments, stepping the neighbor count in increments of 5, that the classification accuracy increases with the number of neighbors considered up to 25; beyond that, further increasing the number of neighbors brings no significant improvement. Therefore, preferably, the number of neighbors considered can be selected in the range of 20-30, and more preferably in the range of 25-30.
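- as a generic PyTorch sketch of how the feature relationship with the k nearest neighbors can be gathered for an edge convolution (shown with k=25, within the preferred range; this illustrates the idea only and is not the network of the present application):

```python
import torch

def edge_features(x, k=25):
    # x: (N, C) per-point features. Returns (N, k, 2C) edge features (x_i, x_j - x_i),
    # i.e. the feature relationship between each point and its k nearest neighbors.
    dist = torch.cdist(x, x)                                  # (N, N) pairwise distances
    idx = dist.topk(k + 1, largest=False).indices[:, 1:]      # k nearest neighbors, self excluded
    neighbors = x[idx]                                        # (N, k, C)
    central = x.unsqueeze(1).expand(-1, k, -1)                # (N, k, C)
    return torch.cat([central, neighbors - central], dim=-1)
```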
- for each sampled point, the DGCNN network outputs its probability distribution over the 33 classifications (the 32 teeth plus the gums), and the classification with the highest probability is used as the classification result of that point.
- the basic idea of the KNN algorithm is that if, among the k most similar samples (i.e., the nearest neighbors in feature space) of a sample (here, the point to be classified), the majority belong to a certain category, then the sample also belongs to that category. In the KNN algorithm, the selected neighboring samples are samples that have already been classified.
- the average value of the probability distribution of the neighboring points of the point to be classified can be calculated, and the average value is used as the probability distribution of the point to be classified, and the classification with the highest probability is used as the classification result of the point to be classified.
- k may be set to 5, that is, each non-sampled point is classified based on the classifications of the 5 sampled points nearest to it in the feature space.
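- a sketch of this KNN step, assuming scikit-learn; sampled_probs stands for the (S, 33) probability distributions output by the DGCNN for the sampled points, and the function name is illustrative:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def propagate(sampled_feats, sampled_probs, other_feats, k=5):
    # Find the 5 nearest sampled points of each non-sampled point in feature space.
    _, idx = NearestNeighbors(n_neighbors=k).fit(sampled_feats).kneighbors(other_feats)
    probs = sampled_probs[idx].mean(axis=1)   # average of the neighbors' probability distributions
    return probs.argmax(axis=1)               # classification with the highest probability
```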
- the inventors of the present application have found through extensive experiments that the setting of k has little effect on the classification accuracy and computational cost of the network; any value within a reasonable range suffices.
- FIG. 2 is a schematic block diagram of the DGCNN network 200 in an embodiment of this application.
- the input module 201 is used to input the extracted features.
- the T-Net sub-network 203 is used to realize automatic alignment of point clouds to reduce the spatial variation of points.
- the T-Net is a network that predicts a transformation matrix of the feature space: it learns from the input data a transformation matrix whose dimension matches that of the feature space, and then multiplies the original data by this matrix to transform the input feature space, so that each subsequent point is related to every point in the input data. Through such data fusion, the features contained in the original point cloud data are abstracted level by level.
- the DGCNN network 200 in this embodiment includes three edge convolution modules (Edge Convolution) 205, 207, and 209, where each edge convolution module has three layers.
- the inventors of the present application found through extensive experiments that a depth of three layers per edge convolution module ensures the prediction (classification) accuracy of the network; deepening the modules further yields only limited accuracy gains while making overfitting more likely and increasing the computational load.
- the DGCNN network 200 in this embodiment can segment the upper and lower jaw three-dimensional digital models as a whole, and the one-hot encoded category module 213 is used to distinguish whether a point belongs to the upper jaw or the lower jaw.
- the DGCNN network 200 in this embodiment further includes three two-dimensional convolution modules 211, 215, and 217.
- the output module 219 outputs the probability distribution of all points in 33 categories.
- the segmentation result can be smoothed.
- the Graph-Cut algorithm may be used to smooth the segmentation results based on the geometric relationship between the facets of the dental jaw three-dimensional digital model.
- two losses can be set, a classification loss and a geometric loss.
- through the Graph-Cut algorithm, the weighted sum of the classification loss and the geometric loss is minimized to smooth the classification results.
- the classification loss can be defined as the loss of smoothing the predicted tooth number of the current facet (that is, its classification result) into another tooth number, equal to the value of the current facet on the probability distribution output by the automatic segmentation system; that is, the greater the probability, the greater the loss of the smoothing operation.
- the classification loss can be calculated for all possibilities, that is, the loss when the current facet is smoothed into each of the other 32 classifications.
- the geometric loss can be defined as the loss of smoothing the predicted tooth number of the current facet into the predicted tooth number of an adjacent facet, equal to the product of the distance between the centers of the two facets and their dihedral angle.
- the Graph-Cut algorithm is then used to minimize the weighted sum of the labeling loss and the geometric loss to achieve smoothing.
- a non-negative constant λ can be set as the weight of the geometric loss to balance the influence of the labeling loss and the geometric loss on the total loss. Since the boundary between a tooth and the gums is less distinct, the segmentation result is trusted more there; the geometric loss weight between the gums and a tooth can therefore be set smaller than the geometric loss weight in other cases. That is, when one of the current point and its neighboring point is classified as a tooth and the other as gum, the geometric loss weight for smoothing the current point's classification into the neighbor's classification is smaller than in other cases.
- the inventors of the present application found through extensive experiments that setting the gum-tooth geometric loss weight λ to 50 and the tooth-tooth geometric loss weight λ to 250 gives good optimization results.
- minimizing the loss can be expressed by the following expression (1):

$$\min_{l}\ \sum_{i \in F} \xi_U(p_i, l_i) \;+\; \lambda \sum_{(i,j) \in \mathcal{N}} \xi_S(l_i, l_j) \qquad \text{(1)}$$

- the first term is the classification loss and the second term is the geometric loss; F represents the facet set of the dental jaw three-dimensional digital model, $\mathcal{N}$ the set of pairs of adjacent facets, i the i-th facet, and $l_i$ the classification of the i-th facet, with corresponding probability $p_i$, where

$$\xi_U(p_i, l_i) = -\log\bigl(p_i(l_i)\bigr) \qquad \text{(2)}$$

- $\theta_{ij}$ represents the dihedral angle between facet i and facet j, $\phi_{ij}$ represents the distance between the centers of facet i and facet j, and the geometric term $\xi_S(l_i, l_j)$ equals $\theta_{ij}\phi_{ij}$ when $l_i \neq l_j$ and 0 otherwise.
- the industry lacks a method that can detect the credibility of the segmentation results of the artificial neural network on the three-dimensional digital model of the dental jaw.
- the inventors of the present application have developed a perturbation-based method for evaluating the credibility of segmentation results.
- the basic idea of this method is to detect whether the segmentation method is sensitive to positional perturbation of the current three-dimensional digital model of the dental jaw; if it is, the current segmentation result is considered not credible.
- FIG. 3 shows a method 300 for detecting the credibility of the segmentation result of the artificial neural network on the dental jaw three-dimensional digital model in an embodiment of the application.
- the reference-position segmentation result may be the result obtained by segmenting, with the segmentation method of the present application, the three-dimensional digital model of the dental jaw positioned at the reference position.
- the three-dimensional digital model of the dental jaw positioned at the reference position can be rotated by plus or minus 5 degrees and plus or minus 10 degrees about each of the x, y, and z axes to obtain 12 perturbations.
- the manner of perturbation and the number of perturbation positions can be determined according to specific conditions, and are not limited to the above specific examples.
- the manner of perturbation can include translation, rotation, or a combination of the two, and the perturbation can be made about any axis or along any direction.
- for each tooth number X, with A the number of facets labeled X in the reference-position segmentation result, B the number labeled X in a perturbation-position segmentation result, and C the number labeled X in both, the minimum value of C/A and C/B can be taken as the similarity of the tooth number X prediction (labeling) between the reference position and that perturbation position. If the denominator of C/A or C/B is 0, the fraction can be assigned the value 1.
- for each tooth number X, the first quartile (Q1) of the 12 similarities is taken as the confidence of the segmentation method for tooth number X.
- the minimum value of the confidences of all tooth numbers is taken as the representative confidence of the segmentation method's result on the three-dimensional digital model of the dental jaw located at the reference position.
- a threshold may be set. If the representative confidence of the segmentation result is greater than the threshold, the segmentation result is considered credible, otherwise the segmentation result is considered unreliable.
- the threshold may be set to 0.85.
- for the confidence of each tooth number, values other than the first quartile of the 12 similarities can be used, for example, the second quartile or the mean; correspondingly, the threshold may need to be modified.
- although the tooth three-dimensional digital model segmentation method in the embodiments of the present application is a point-based segmentation method, in light of the present application it can be understood that the segmentation result detection method of the present application is also applicable to facet-based segmentation methods.
- the segmentation result detection method of this application is applicable to any neural-network dental jaw three-dimensional digital model segmentation method that is trained and performs segmentation based on positioning to a reference position.
- when a segmentation result is judged not credible, a credible result can be sought among the multiple perturbation-position segmentation results and used as the final output segmentation result; if no credible segmentation result can be found among them, the user is notified so that human intervention can be made for inspection and/or adjustment.
- the 12 perturbation-position segmentation results can be traversed to find the one with the highest confidence, which is output as the final segmentation result. In this case, even if the segmentation result still needs manual repair, the repair workload can be reduced.
- the various diagrams may show exemplary architectures or other configurations of the disclosed methods and systems, which are helpful in understanding the features and functions that can be included in the disclosed methods and systems.
- the claimed content is not limited to the exemplary architecture or configuration shown, and the desired features can be implemented with various alternative architectures and configurations.
- for the flowcharts, functional descriptions, and method claims, the order of the blocks given here should not be construed as limiting the various embodiments to implementations that perform the described functions in the same order, unless clearly indicated by the context.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Biodiversity & Conservation Biology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Computer Hardware Design (AREA)
- Computer Graphics (AREA)
- Architecture (AREA)
- Probability & Statistics with Applications (AREA)
- Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)
Abstract
A method for detecting a segmentation result of a dental jaw three-dimensional digital model, comprising: obtaining a reference-position segmentation result of a dental jaw three-dimensional digital model to be inspected (301), wherein the segmentation result is obtained by segmenting, with a first segmentation method, the dental jaw three-dimensional digital model positioned at a reference position; perturbing the dental jaw three-dimensional digital model from the reference position multiple times, and segmenting, with the first segmentation method, the dental jaw three-dimensional digital model located at multiple perturbation positions to obtain corresponding multiple perturbation-position segmentation results (303); and judging, based on the similarity between the reference-position segmentation result and the multiple perturbation-position segmentation results, whether the reference-position segmentation result is credible (305), wherein the segmentation of the dental jaw three-dimensional digital model divides it by individual teeth and the gums, i.e., classifies the facets of the dental jaw three-dimensional digital model by tooth number and gum.
Description
The present application generally relates to a method for detecting segmentation results of a dental jaw three-dimensional digital model.

Today, dental treatment increasingly relies on computer technology. In many cases, a scanned three-dimensional digital model of a dental jaw, including the dentition and at least part of the gums, needs to be segmented so as to separate the tooth crowns from the gums and to separate the individual crowns from one another.

At present, semi-automatic and fully automatic methods for segmenting dental jaw three-dimensional digital models have emerged; however, there is as yet no reliable method for automatically verifying the segmentation results of a dental jaw three-dimensional digital model. It is therefore necessary to provide a method for detecting the segmentation results of a dental jaw three-dimensional digital model.
Summary of the Invention
One aspect of the present application provides a method for detecting a segmentation result of a dental jaw three-dimensional digital model, comprising: obtaining a reference-position segmentation result of a dental jaw three-dimensional digital model to be inspected, wherein the segmentation result is obtained by segmenting, with a first segmentation method, the dental jaw three-dimensional digital model positioned at a reference position; perturbing the dental jaw three-dimensional digital model from the reference position multiple times, and segmenting, with the first segmentation method, the dental jaw three-dimensional digital model located at multiple perturbation positions to obtain corresponding multiple perturbation-position segmentation results; and judging, based on the similarity between the reference-position segmentation result and the multiple perturbation-position segmentation results, whether the reference-position segmentation result is credible, wherein the segmentation of the dental jaw three-dimensional digital model divides it by individual teeth and the gums, i.e., classifies the facets of the dental jaw three-dimensional digital model by tooth number and gum.

In some embodiments, the first segmentation method may be a segmentation method based on a deep-learning artificial neural network, and the deep-learning artificial neural network is trained with a plurality of dental jaw three-dimensional digital models positioned at the reference position.

In some embodiments, the deep-learning artificial neural network may be a DGCNN network.

In some embodiments, the method for detecting a segmentation result of a dental jaw three-dimensional digital model may further comprise: for each classification, calculating the similarity between the reference-position segmentation result and each perturbation-position segmentation result; for each classification, calculating a confidence of the reference-position segmentation result based on the corresponding similarities; calculating a representative confidence of the reference-position segmentation result based on the confidences of all classifications; and judging, based on the representative confidence and a preset threshold, whether the reference-position segmentation result is credible.

In some embodiments, for each classification, the similarity between the reference-position segmentation result and each perturbation-position segmentation result is calculated based on the number of facets assigned to that classification in both.

In some embodiments, the global confidence may be the minimum of the confidences of all classifications.

In some embodiments, the method for detecting a segmentation result of a dental jaw three-dimensional digital model may further comprise: if the reference-position segmentation result is not credible, judging by the same method whether a credible segmentation result exists among the multiple perturbation-position segmentation results, and if so, taking the credible perturbation-position segmentation result as the final segmentation result.
The above and other features of the present application are further described below with reference to the accompanying drawings and their detailed description. It should be understood that the drawings merely show several exemplary embodiments of the present application and therefore should not be regarded as limiting its scope. Unless otherwise indicated, the drawings are not necessarily to scale, and like reference numerals denote like components.

FIG. 1 is a schematic flowchart of a method for segmenting a dental jaw three-dimensional digital model in one embodiment of the present application;

FIG. 2 is a schematic block diagram of a DGCNN network in one embodiment of the present application; and

FIG. 3 is a schematic flowchart of a method for detecting segmentation results of a dental jaw three-dimensional digital model in one embodiment of the present application.

The following detailed description refers to the accompanying drawings, which form a part of this specification. The exemplary embodiments mentioned in the specification and the drawings are for illustrative purposes only and are not intended to limit the scope of the present application. In light of the present disclosure, those skilled in the art will understand that many other embodiments may be adopted and that various changes may be made to the described embodiments without departing from the spirit and scope of the present application. It should be understood that the various aspects of the present application described and illustrated herein may be arranged, substituted, combined, separated, and designed in many different configurations, all of which fall within the scope of the present application.
To improve the accuracy and efficiency of segmenting dental jaw three-dimensional digital models while reducing the degree of human involvement, one aspect of the present application provides a new method for segmenting a dental jaw three-dimensional digital model.

Referring to FIG. 1, a schematic flowchart of a method 100 for segmenting a dental jaw three-dimensional digital model in one embodiment of the present application is shown.

In 101, a dental jaw three-dimensional digital model to be segmented is obtained.

In one embodiment, the dental jaw three-dimensional digital model to be segmented may be obtained by scanning the patient's oral cavity (for example, with a laser intraoral scanning device). In yet another embodiment, it may also be obtained by scanning a physical model of the patient's dental jaw (for example, a plaster model).
In one embodiment, the dental jaw three-dimensional digital model to be segmented is composed of triangular facets; a high-precision model includes a large number of triangular facets, for example, more than 100,000, or even more than 150,000.

In 103, the dental jaw three-dimensional digital model to be segmented is positioned to a reference position.

To ensure the robustness of the deep-learning artificial neural network used for segmenting dental jaw three-dimensional digital models, the model is positioned to a predefined reference position both when the network is trained and when it is used for segmentation; the reference position includes orientation and location.

In one embodiment, a plurality of representative reference dental jaw three-dimensional digital models may be selected a priori for the upper and lower jaws respectively, for example, five reference models corresponding to a normal dental arch, a short dental arch, a long dental arch, a wide dental arch, and a narrow dental arch. In light of the present disclosure, it can be understood that the selection of reference dental jaw three-dimensional digital models is not limited to the above example; for instance, based on "Analysis of Dimensions and Shapes of Maxillary and Mandibular Dental Arch in Korean Young Adults" by Su-Jung Park, Richard Leesungbok, Jae-Won Song, Se Hun Chang, Suk-Won Lee, and Su-Jin Ahn, The Journal of Advanced Prosthodontics, 9(5):321-327, 2017, three reference models may be selected for each of the upper and lower jaws, namely ovoid, V-shaped, and U-shaped reference dental jaw three-dimensional digital models.

In one embodiment, the reference dental jaw three-dimensional digital models are located at the reference position, and the model to be segmented can be positioned to the reference position by registration with a reference model.

In one embodiment, the Iterative Closest Point (ICP) algorithm may be used to register the dental jaw three-dimensional digital model to be segmented with a reference model and thereby position it to the reference position.

In one embodiment, the model to be segmented may be registered with each of the five reference models, and the position with the highest matching degree is taken as the registered position. Next, the average center (centroid) of a large number of previously registered dental jaw three-dimensional digital models is computed and taken as the center position. Finally, the center of the registered model to be segmented is translated to this center position, at which point the model is considered to have been positioned to the reference position.
In 105, the dental jaw three-dimensional digital model to be segmented is converted into a point cloud.

Since the above embodiment uses the ICP algorithm to register the model to be segmented with the reference models, the model has already been converted into a point cloud and no further conversion is required. However, if the model was not converted into a point cloud while being positioned to the reference position, the conversion needs to be performed at this point.

In 107, feature extraction is performed on the points in the point cloud.

The inventors of the present application have found through extensive experiments that extracting the following features for each point, and segmenting on this basis, yields high accuracy: the facet center coordinates (each point of the point cloud may correspond to the center of a facet; x, y, z: 3 features), the facet normal vector (3 features), and the vectors from the facet center to its three vertices (9 features), 15 features in total.
In 109, the trained dynamic graph convolutional neural network is used to segment the point cloud based on the extracted features.

In one embodiment, a Dynamic Graph Convolutional Neural Network (DGCNN) may be used to classify (segment) the point cloud.

In one embodiment, 33 labels may be set, representing the 32 teeth and the gums, for classification (segmentation).

In one embodiment, to improve computational efficiency, the point cloud may be sampled, and the trained DGCNN network may classify only the sampled points.

In one embodiment, the sampling may be uniform sampling.

In one embodiment, the number of sampled points may be set according to the capability of the computing system; for example, 30,000 sampled points are taken.
In one embodiment, to further improve computational efficiency, the sampled points may be divided evenly into groups, and the trained DGCNN network may classify each group of sampled points separately. In one embodiment, the number of groups may be set according to the capability of the computing system, for example, 3 groups.
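A sketch of the sampling and grouping step, shown with uniform random sampling for simplicity (a spatially uniform scheme could be substituted); the counts are the examples given above:

```python
import numpy as np

def sample_and_group(features, n_samples=30000, n_groups=3):
    # features: (F, 15) per-point feature matrix.
    idx = np.random.choice(len(features), size=n_samples, replace=False)
    return [features[g] for g in np.array_split(idx, n_groups)]  # 3 groups of 10,000 points
```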
When classifying a point, the DGCNN network considers its neighboring points (that is, it computes the feature relationships between the point to be classified and its neighbors and takes these relationships into account during classification). The inventors of the present application have found through extensive experiments, stepping the neighbor count in increments of 5, that the classification accuracy increases with the number of neighbors considered up to 25; beyond that, further increasing the number of neighbors brings no significant improvement. Therefore, preferably, the number of neighbors considered may be chosen in the range of 20-30, and more preferably in the range of 25-30.

For each sampled point, the DGCNN network outputs its probability distribution over the 33 classifications (the 32 teeth plus the gums), and the classification with the highest probability is taken as the classification result of that point.

After all sampled points have been classified, the remaining points in the point cloud still need to be classified. In one embodiment, the K-Nearest Neighbour (KNN) algorithm may be used to classify the remaining points based on the classification results of the sampled points.

The basic idea of the KNN algorithm is that if, among the k most similar samples (i.e., the nearest neighbors in feature space) of a sample (here, the point to be classified), the majority belong to a certain category, then the sample also belongs to that category. In the KNN algorithm, the selected neighboring samples are samples that have already been classified.

In one embodiment, the average of the probability distributions of the neighboring points of the point to be classified may be computed and taken as the probability distribution of that point, and the classification with the highest probability is taken as its classification result.

In one embodiment, k may be set to 5, that is, each non-sampled point is classified based on the classifications of the 5 sampled points nearest to it in feature space.

The inventors of the present application have found through extensive experiments that the setting of k has little effect on the classification accuracy and computational cost of the network; any value within a reasonable range suffices.
Referring to FIG. 2, a schematic block diagram of a DGCNN network 200 in one embodiment of the present application is shown.

The input module 201 is used to input the extracted features; for the above embodiment, the features of one group of 10,000 sampled points may be input, with 15 features per point.

The T-Net sub-network 203 is used to automatically align the point cloud so as to reduce the spatial variation of the points. The T-Net is a network that predicts a transformation matrix of the feature space: it learns from the input data a transformation matrix whose dimension matches that of the feature space, and then multiplies the original data by this matrix to transform the input feature space, so that each subsequent point is related to every point in the input data. Through such data fusion, the features contained in the original point cloud data are abstracted level by level.

The DGCNN network 200 in this embodiment includes three edge convolution (Edge Convolution) modules 205, 207, and 209, each of which has three layers. The inventors of the present application have found through extensive experiments that a depth of three layers per edge convolution module ensures the prediction (classification) accuracy of the network; deepening the modules further yields only limited accuracy gains while making overfitting more likely and increasing the computational load.

The DGCNN network 200 in this embodiment can segment the upper- and lower-jaw three-dimensional digital models as a whole; the one-hot encoded category module 213 is used to distinguish whether a point belongs to the upper jaw or the lower jaw.

The DGCNN network 200 in this embodiment further includes three two-dimensional convolution modules 211, 215, and 217.

The output module 219 outputs the probability distributions of all points over the 33 classifications.
In 111, the segmentation result is refined.

To eliminate possible small local non-smooth regions in the segmentation result (for example, jagged boundaries, or a facet whose classification differs from that of the surrounding facets), the segmentation result may be smoothed.

In one embodiment, the Graph-Cut algorithm may be used to smooth the segmentation result based on the geometric relationships between the facets of the dental jaw three-dimensional digital model.

In one embodiment, two losses may be defined, a classification loss and a geometric loss, and the Graph-Cut algorithm minimizes the weighted sum of the two to smooth the classification result.

The classification loss may be defined as the loss of smoothing the predicted tooth number of the current facet (i.e., its classification result) into another tooth number, equal to the value of the current facet on the probability distribution output by the automatic segmentation system; that is, the greater the probability, the greater the loss of the smoothing operation. When computing the classification loss, all possibilities may be evaluated, i.e., the loss of smoothing the current facet into each of the other 32 classifications.

The geometric loss may be defined as the loss of smoothing the predicted tooth number of the current facet into the predicted tooth number of an adjacent facet, equal to the product of the distance between the centers of the two facets and their dihedral angle.

Then, the Graph-Cut algorithm minimizes the weighted sum of the labeling loss and the geometric loss to achieve smoothing.

In one embodiment, in minimizing the loss, a non-negative constant λ may be set as the weight of the geometric loss to balance the influence of the labeling loss and the geometric loss on the total loss. Since the boundary between a tooth and the gums is less distinct, the segmentation result is trusted more there; the geometric loss weight between the gums and a tooth can therefore be set smaller than the geometric loss weight in other cases. That is, when one of the current point and its neighboring point is classified as a tooth and the other as gum, the geometric loss weight for smoothing the current point's classification into the neighbor's classification is smaller than in other cases. The inventors of the present application have found through extensive experiments that setting the gum-tooth geometric loss weight λ to 50 and the tooth-tooth geometric loss weight λ to 250 gives good optimization results.
In one embodiment, the loss minimization can be expressed by the following expression (1):

$$\min_{l}\ \sum_{i \in F} \xi_U(p_i, l_i) \;+\; \lambda \sum_{(i,j) \in \mathcal{N}} \xi_S(l_i, l_j) \qquad \text{(1)}$$

where the first term is the classification loss and the second term is the geometric loss, F denotes the set of facets of the dental jaw three-dimensional digital model, $\mathcal{N}$ the set of pairs of adjacent facets, i the i-th facet, and $l_i$ the classification of the i-th facet, with corresponding probability $p_i$, where

$$\xi_U(p_i, l_i) = -\log\bigl(p_i(l_i)\bigr) \qquad \text{(2)}$$

and where $\theta_{ij}$ denotes the dihedral angle between facet i and facet j, $\phi_{ij}$ denotes the distance between the centers of facet i and facet j, and the geometric term $\xi_S(l_i, l_j)$ equals $\theta_{ij}\phi_{ij}$ when $l_i \neq l_j$ and 0 otherwise.
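Under the definitions above, the two cost terms may be sketched as follows; probs is the (F, 33) network output, the gum label index and the reading of the geometric term are assumptions consistent with the definitions above, and solving the minimization itself is left to a graph-cut solver:

```python
import numpy as np

GUM = 32  # assumed index of the gum label among the 33 classifications

def unary_cost(probs, labels):
    # Expression (2): keeping facet i at label l_i costs -log(p_i(l_i)).
    return -np.log(probs[np.arange(len(labels)), labels])

def pairwise_cost(theta_ij, phi_ij, label_i, label_j):
    # Geometric loss: dihedral angle times center distance, weighted by lambda,
    # with a smaller weight (50) across a tooth-gum boundary than between teeth (250).
    if label_i == label_j:
        return 0.0
    lam = 50.0 if GUM in (label_i, label_j) else 250.0
    return lam * theta_ij * phi_ij
```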
In light of the present disclosure, it can be understood that although the above embodiments segment the upper- and lower-jaw three-dimensional digital models as a whole, the method of the present application is also applicable to segmenting the upper- and lower-jaw models separately (correspondingly, the number of labels may be changed to 17, representing 16 teeth and the gums).

Although the above method for segmenting a dental jaw three-dimensional digital model is highly accurate, one hundred percent accuracy still cannot be guaranteed for some extreme cases; it is therefore necessary to provide a method for detecting the credibility of the classification results.

At present, the industry lacks a method capable of detecting the credibility of an artificial neural network's segmentation results on a dental jaw three-dimensional digital model. After extensive research, the inventors of the present application have developed a perturbation-based method for evaluating the credibility of segmentation results. The basic idea of this method is to detect whether the segmentation method is sensitive to positional perturbation of the current dental jaw three-dimensional digital model; if it is, the current segmentation result is considered not credible.

Referring to FIG. 3, a method 300 for detecting the credibility of an artificial neural network's segmentation result on a dental jaw three-dimensional digital model in one embodiment of the present application is shown.

As can be seen from the above, in the dental jaw three-dimensional digital model segmentation method of the present application, the model needs to be positioned to the reference position before it is segmented with the DGCNN network, because the network is also trained with dental jaw three-dimensional digital models positioned at this reference position, which ensures the segmentation accuracy.
In 301, a reference-position segmentation result is obtained.

In one embodiment, the reference-position segmentation result may be the result obtained by segmenting, with the segmentation method of the present application, the dental jaw three-dimensional digital model positioned at the reference position.

In 303, multiple perturbation-position segmentation results are generated.

In one embodiment, the dental jaw three-dimensional digital model positioned at the reference position may be rotated by plus or minus 5 degrees and plus or minus 10 degrees about each of the x, y, and z axes, yielding 12 perturbations.
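A sketch of generating these 12 perturbed poses, assuming SciPy and a point-cloud representation of the model at the reference position:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def perturbed_poses(points):
    # points: (N, 3) point cloud at the reference position.
    poses = []
    for axis in ("x", "y", "z"):
        for degrees in (-10, -5, 5, 10):
            rot = Rotation.from_euler(axis, degrees, degrees=True)
            poses.append(rot.apply(points))
    return poses  # 3 axes x 4 angles = 12 perturbation positions
```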
Next, the dental jaw three-dimensional digital models located at the 12 perturbation positions are segmented with the same segmentation method to obtain the corresponding 12 perturbation-position segmentation results.

In light of the present disclosure, it can be understood that the manner of perturbation and the number of perturbation positions may be determined according to the specific situation and are not limited to the above specific example; for example, the perturbation may include translation, rotation, or a combination of the two, and may be performed about any axis or along any direction.

In 305, whether the reference-position segmentation result is credible is judged based on the similarity between the reference-position segmentation result and the multiple perturbation-position segmentation results.

In one embodiment, for each tooth number X, let A be the number of facets labeled X in the reference-position segmentation result, B the number of facets labeled X in the first perturbation-position segmentation result, and C the number of facets labeled X in both (C<=A, C<=B). The minimum of C/A and C/B may be taken as the similarity of the tooth number X prediction (labeling) between the reference position and the first perturbation position. If the denominator of C/A or C/B is 0, the corresponding fraction may be assigned the value 1.

For each tooth number X, repeating the above operation 12 times (once per perturbation-position segmentation result) yields 12 similarities. For each tooth number X, the first quartile (Q1) of these 12 similarities is taken as the confidence of the segmentation method for tooth number X. The minimum of the confidences over all tooth numbers is taken as the representative confidence of the segmentation method's result on the dental jaw three-dimensional digital model located at the reference position.
In one embodiment, a threshold may be set: if the representative confidence of a segmentation result is greater than the threshold, the segmentation result is considered credible; otherwise it is considered not credible.

In one embodiment, based on the current parameter settings, the threshold may be set to 0.85.
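The similarity and confidence computation described above may be sketched as follows; the per-facet label arrays are assumed to be aligned across poses, since the perturbed models share the same facets, and the names are illustrative:

```python
import numpy as np

def similarity(ref_labels, pert_labels, tooth):
    a = np.sum(ref_labels == tooth)                             # facets labeled X at the reference position
    b = np.sum(pert_labels == tooth)                            # facets labeled X at the perturbation position
    c = np.sum((ref_labels == tooth) & (pert_labels == tooth))  # facets labeled X in both
    return min(c / a if a else 1.0, c / b if b else 1.0)

def representative_confidence(ref_labels, pert_results, teeth):
    per_tooth = [np.percentile([similarity(ref_labels, p, t) for p in pert_results], 25)
                 for t in teeth]                                # Q1 of the 12 similarities per tooth
    return min(per_tooth)                                       # minimum over all tooth numbers

# The reference-position result is accepted when the representative confidence
# exceeds the preset threshold, e.g. 0.85 under the parameter settings above.
```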
In light of the present disclosure, it can be understood that, for the confidence of each tooth number, values other than the first quartile of the 12 similarities may be used, for example, the second quartile (median) or the mean. Correspondingly, the threshold may need to be modified.

Although the tooth three-dimensional digital model segmentation method in the embodiments of the present application is a point-based segmentation method, in light of the present disclosure it can be understood that the segmentation result detection method of the present application is also applicable to facet-based segmentation methods.

In light of the present disclosure, beyond the segmentation method of the present application, the segmentation result detection method of the present application is applicable to any neural-network dental jaw three-dimensional digital model segmentation method that is trained and performs segmentation based on positioning to a reference position.

When a segmentation result is judged not credible, there are two options: the first is to remind the user that human intervention is needed for inspection and/or adjustment; the second is to try to select a credible result from the multiple perturbation-position segmentation results as the final output, and to notify the user for human intervention only if no credible segmentation result can be found among them.

In one embodiment, the 12 perturbation-position segmentation results may be traversed to find the one with the highest confidence, which is then output as the final segmentation result. In this case, even if the segmentation result still needs manual repair, the repair workload can be reduced.
Although multiple aspects and embodiments of the present application are disclosed herein, other aspects and embodiments will be apparent to those skilled in the art in light of the present disclosure. The various aspects and embodiments disclosed herein are for illustrative purposes only and are not intended to be limiting; the scope and spirit of the present application are determined solely by the appended claims.

Likewise, the various diagrams may show an exemplary architecture or other configuration of the disclosed methods and systems, which helps in understanding the features and functions that may be included in them. The claimed subject matter is not limited to the exemplary architectures or configurations shown, and the desired features may be implemented with various alternative architectures and configurations. In addition, for the flowcharts, functional descriptions, and method claims, the order of the blocks given herein should not be construed as limiting the various embodiments to implementations that perform the described functions in the same order, unless the context clearly indicates otherwise.

Unless explicitly stated otherwise, the terms and phrases used herein, and variations thereof, should be construed as open-ended rather than limiting. In some instances, the appearance of expansive words and phrases such as "one or more", "at least", or "but not limited to" should not be read as implying that the narrower case is intended or required where such expansive terms are absent.
Claims (7)
- A method for detecting a segmentation result of a dental jaw three-dimensional digital model, comprising: obtaining a reference-position segmentation result of a dental jaw three-dimensional digital model to be inspected, wherein the segmentation result is obtained by segmenting, with a first segmentation method, the dental jaw three-dimensional digital model positioned at a reference position; perturbing the dental jaw three-dimensional digital model from the reference position multiple times, and segmenting, with the first segmentation method, the dental jaw three-dimensional digital model located at multiple perturbation positions to obtain corresponding multiple perturbation-position segmentation results; and judging, based on the similarity between the reference-position segmentation result and the multiple perturbation-position segmentation results, whether the reference-position segmentation result is credible, wherein the segmentation of the dental jaw three-dimensional digital model divides it by individual teeth and the gums, i.e., classifies the facets of the dental jaw three-dimensional digital model by tooth number and gum.
- The method for detecting a segmentation result of a dental jaw three-dimensional digital model according to claim 1, wherein the first segmentation method is a segmentation method based on a deep-learning artificial neural network, and the deep-learning artificial neural network is trained with a plurality of dental jaw three-dimensional digital models positioned at the reference position.
- The method for detecting a segmentation result of a dental jaw three-dimensional digital model according to claim 2, wherein the deep-learning artificial neural network is a DGCNN network.
- The method for detecting a segmentation result of a dental jaw three-dimensional digital model according to claim 1, further comprising: for each classification, calculating the similarity between the reference-position segmentation result and each perturbation-position segmentation result; for each classification, calculating a confidence of the reference-position segmentation result based on the corresponding similarities; calculating a representative confidence of the reference-position segmentation result based on the confidences of all classifications; and judging, based on the representative confidence and a preset threshold, whether the reference-position segmentation result is credible.
- The method for detecting a segmentation result of a dental jaw three-dimensional digital model according to claim 4, wherein, for each classification, the similarity between the reference-position segmentation result and each perturbation-position segmentation result is calculated based on the number of facets assigned to that classification in both.
- The method for detecting a segmentation result of a dental jaw three-dimensional digital model according to claim 4, wherein the global confidence is the minimum of the confidences of all classifications.
- The method for detecting a segmentation result of a dental jaw three-dimensional digital model according to claim 1, further comprising: if the reference-position segmentation result is not credible, judging by the same method whether a credible segmentation result exists among the multiple perturbation-position segmentation results, and if so, taking the credible perturbation-position segmentation result as the final segmentation result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/614,542 US11830196B2 (en) | 2020-04-21 | 2021-01-22 | Method for verifying a segmentation result of a 3D digital model of jaw |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010317368.6 | 2020-04-21 | ||
CN202010317368.6A CN113538437A (zh) Method for detecting segmentation result of dental jaw three-dimensional digital model
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021212941A1 true WO2021212941A1 (zh) | 2021-10-28 |
Family
ID=78093820
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/073235 WO2021212941A1 (zh) Method for detecting segmentation result of dental jaw three-dimensional digital model
Country Status (3)
Country | Link |
---|---|
US (1) | US11830196B2 (zh) |
CN (1) | CN113538437A (zh) |
WO (1) | WO2021212941A1 (zh) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11842484B2 (en) * | 2021-01-04 | 2023-12-12 | James R. Glidewell Dental Ceramics, Inc. | Teeth segmentation using neural networks |
CN116631634B (zh) * | 2023-07-19 | 2023-09-19 | 南京铖联激光科技有限公司 | Intelligent design method for removable complete dentures based on point cloud deep learning |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104867131A (zh) * | 2015-04-24 | 2015-08-26 | 杭州一牙数字口腔有限公司 | Dental crown data extraction method based on a digital model |
CN105447908A (zh) * | 2015-12-04 | 2016-03-30 | 山东山大华天软件有限公司 | Dentition model generation method based on intraoral scan data and CBCT data |
CN109165663A (zh) * | 2018-07-03 | 2019-01-08 | 上海正雅齿科科技股份有限公司 | Tooth feature recognition method and apparatus, user terminal, and storage medium |
CN109903396A (zh) * | 2019-03-20 | 2019-06-18 | 洛阳中科信息产业研究院(中科院计算技术研究所洛阳分所) | Automatic segmentation method for three-dimensional tooth models based on surface parameterization |
US20190247165A1 (en) * | 2016-10-31 | 2019-08-15 | Dentsply Sirona Inc. | Method for planning a dental structure |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8021147B2 (en) * | 2001-04-13 | 2011-09-20 | Orametrix, Inc. | Method and system for comprehensive evaluation of orthodontic care using unified workstation |
US8126726B2 (en) * | 2004-02-27 | 2012-02-28 | Align Technology, Inc. | System and method for facilitating automated dental measurements and diagnostics |
EP3533411B1 (en) * | 2007-10-12 | 2022-03-16 | Align Technology, Inc. | Prosthodontic and orthodontic apparatus and methods |
DE102011010975A1 (de) * | 2011-02-10 | 2012-08-16 | Martin Tank | Verfahren und Analysesystem zur geometrischen Analyse von Scandaten oraler Strukturen |
US9626462B2 (en) * | 2014-07-01 | 2017-04-18 | 3M Innovative Properties Company | Detecting tooth wear using intra-oral 3D scans |
US10032271B2 (en) * | 2015-12-10 | 2018-07-24 | 3M Innovative Properties Company | Method for automatic tooth type recognition from 3D scans |
CN108986123A (zh) * | 2017-06-01 | 2018-12-11 | 无锡时代天使医疗器械科技有限公司 | Method for segmenting dental jaw three-dimensional digital model |
US10115197B1 (en) * | 2017-06-06 | 2018-10-30 | Imam Abdulrahman Bin Faisal University | Apparatus and method for lesions segmentation |
WO2018232299A1 (en) * | 2017-06-16 | 2018-12-20 | Align Technology, Inc. | Automatic detection of tooth type and eruption status |
US10327693B2 (en) * | 2017-07-07 | 2019-06-25 | 3M Innovative Properties Company | Tools for tracking the gum line and displaying periodontal measurements using intra-oral 3D scans |
US20220215547A1 (en) * | 2017-07-21 | 2022-07-07 | Dental Monitoring | Method for analyzing an image of a dental arch |
FR3069361B1 (fr) * | 2017-07-21 | 2019-08-23 | Dental Monitoring | Procede d'analyse d'une image d'une arcade dentaire |
FR3069360B1 (fr) * | 2017-07-21 | 2022-11-04 | Dental Monitoring | Procede d'analyse d'une image d'une arcade dentaire |
FR3069359B1 (fr) * | 2017-07-21 | 2019-08-23 | Dental Monitoring | Procede d'analyse d'une image d'une arcade dentaire |
FR3069355B1 (fr) * | 2017-07-21 | 2023-02-10 | Dental Monitoring | Procédé d’entrainement d’un réseau de neurones par enrichissement de sa base d’apprentissage pour l’analyse d’une image d’arcade dentaire |
FR3069358B1 (fr) * | 2017-07-21 | 2021-10-15 | Dental Monitoring | Procede d'analyse d'une image d'une arcade dentaire |
US20210338379A1 (en) * | 2017-07-21 | 2021-11-04 | Dental Monitoring | Method for analyzing an image of a dental arch |
EP3503038A1 (en) * | 2017-12-22 | 2019-06-26 | Promaton Holding B.V. | Automated 3d root shape prediction using deep learning methods |
GB2584469B (en) * | 2019-06-05 | 2023-10-18 | Sony Interactive Entertainment Inc | Digital model repair system and method |
US10849585B1 (en) * | 2019-08-14 | 2020-12-01 | Siemens Healthcare Gmbh | Anomaly detection using parametrized X-ray images |
CN110889850B (zh) * | 2019-12-13 | 2022-07-22 | 电子科技大学 | CBCT tooth image segmentation method based on center point detection |
US11842484B2 (en) * | 2021-01-04 | 2023-12-12 | James R. Glidewell Dental Ceramics, Inc. | Teeth segmentation using neural networks |
US11423697B1 (en) * | 2021-08-12 | 2022-08-23 | Sdc U.S. Smilepay Spv | Machine learning architecture for imaging protocol detector |
-
2020
- 2020-04-21 CN CN202010317368.6A patent/CN113538437A/zh active Pending
-
2021
- 2021-01-22 US US17/614,542 patent/US11830196B2/en active Active
- 2021-01-22 WO PCT/CN2021/073235 patent/WO2021212941A1/zh active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104867131A (zh) * | 2015-04-24 | 2015-08-26 | 杭州一牙数字口腔有限公司 | Dental crown data extraction method based on a digital model |
CN105447908A (zh) * | 2015-12-04 | 2016-03-30 | 山东山大华天软件有限公司 | Dentition model generation method based on intraoral scan data and CBCT data |
US20190247165A1 (en) * | 2016-10-31 | 2019-08-15 | Dentsply Sirona Inc. | Method for planning a dental structure |
CN109165663A (zh) * | 2018-07-03 | 2019-01-08 | 上海正雅齿科科技股份有限公司 | Tooth feature recognition method and apparatus, user terminal, and storage medium |
CN109903396A (zh) * | 2019-03-20 | 2019-06-18 | 洛阳中科信息产业研究院(中科院计算技术研究所洛阳分所) | Automatic segmentation method for three-dimensional tooth models based on surface parameterization |
Also Published As
Publication number | Publication date |
---|---|
US11830196B2 (en) | 2023-11-28 |
US20220222827A1 (en) | 2022-07-14 |
CN113538437A (zh) | 2021-10-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021212940A1 (zh) | Method for segmenting dental jaw three-dimensional digital model | |
WO2018218988A1 (zh) | Method for segmenting dental jaw three-dimensional digital model | |
WO2021212941A1 (zh) | Method for detecting segmentation result of dental jaw three-dimensional digital model | |
JP2022540634A (ja) | Object detection and instance segmentation of 3D point clouds based on deep learning | |
CN108388874B (zh) | Automatic measurement method for prawn morphological parameters based on image recognition and a cascade classifier | |
CN110543906B (zh) | Automatic skin type recognition method based on a Mask R-CNN model | |
CN106203377B (zh) | Coal dust image recognition method | |
CN112381178B (zh) | Medical image classification method based on multi-loss feature learning | |
CN108805196A (zh) | Automatic incremental learning method for image recognition | |
CN108052886A (zh) | Automatic statistical counting method for urediniospores of wheat stripe rust | |
CN112365497A (zh) | High-speed object detection method and system based on TridentNet and Cascade R-CNN structures | |
CN110288640A (zh) | Point cloud registration method based on convex density extrema | |
CN115147363A (zh) | Image defect detection and classification method and system based on a deep learning algorithm | |
CN113269791A (zh) | Point cloud segmentation method based on edge determination and region growing | |
US11804029B2 (en) | Hierarchical constraint (HC)-based method and system for classifying fine-grained graptolite images | |
CN112069992A (zh) | Face detection method, system, and storage medium based on multi-supervised dense alignment | |
Pang et al. | Convolutional neural network-based sub-pixel line-edged angle detection with applications in measurement | |
CN117095145B (zh) | Training method and terminal for a tooth mesh segmentation model | |
CN118279320A (zh) | Method for building a target instance segmentation model based on automatic prompt learning and application thereof | |
CN114358279A (zh) | Image recognition network model pruning method, apparatus, device, and storage medium | |
Nguyen et al. | Pavement crack detection and segmentation based on deep neural network | |
CN111259806B (zh) | Face region recognition method and apparatus, and storage medium | |
Kotyza et al. | Detection of directions in an image as a method for circle detection | |
CN114358191A (zh) | Gene expression data clustering method based on a deep autoencoder | |
Li et al. | Dental detection and classification of yolov3-spp based on convolutional block attention module |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21793425 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21793425 Country of ref document: EP Kind code of ref document: A1 |