WO2022174747A1 - Method for segmenting computed tomography images of teeth - Google Patents
Method for segmenting computed tomography images of teeth
- Publication number
- WO2022174747A1 WO2022174747A1 PCT/CN2022/075513 CN2022075513W WO2022174747A1 WO 2022174747 A1 WO2022174747 A1 WO 2022174747A1 CN 2022075513 W CN2022075513 W CN 2022075513W WO 2022174747 A1 WO2022174747 A1 WO 2022174747A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- tooth
- image
- mask
- dimensional
- sequence
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/34—Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30036—Dental; Teeth
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
Definitions
- The present application generally relates to methods for segmenting computed tomography images of teeth.
- A high-precision three-dimensional digital model of a crown can be obtained by intraoral scanning, or by scanning an impression or a solid model of the tooth, but these methods cannot obtain information about the root.
- Computed tomography can be used to obtain a two-dimensional tomographic image sequence of the whole tooth (including crown and root), and a three-dimensional digital model of the whole tooth can be generated based on it. Since the two-dimensional tomographic images obtained by computed tomography include not only the teeth but also the jaws, before the two-dimensional tomographic image sequence is used to build a three-dimensional digital model of the whole tooth, it needs to be segmented to remove the jaw parts and keep the tooth parts.
- One aspect of the present application provides a computer-implemented method for segmenting a computed tomography image of teeth, comprising: acquiring a first three-dimensional digital model representing the crowns of the erupting teeth of a first jaw and a two-dimensional tomographic image sequence of the first jaw; using a local image classification model, based on the two-dimensional tomographic image sequence of the first jaw, selecting for each erupting tooth, from the two-dimensional tomographic image sequence of the first jaw, a two-dimensional tomographic image at which segmentation starts, where the local image classification model is a trained deep neural network for classifying local two-dimensional tomographic images into categories including crown and root; obtaining position and range information for each erupting tooth based on the first three-dimensional digital model; and for each erupting tooth, using the position and range information and a local image segmentation model, segmenting the local images of that erupting tooth from the corresponding starting two-dimensional tomographic image toward the crown and the root, to obtain its binary mask image sequence.
- the first three-dimensional digital model is obtained by one of the following methods: intraoral scanning or scanning a dental impression or solid model.
- the two-dimensional tomographic image of the first jaw is obtained by cone beam computed tomography.
- The computer-implemented method for segmenting a computed tomography image of teeth further comprises: for each erupting tooth, using the position and range information and the local image segmentation model, segmenting the local image of that erupting tooth in its starting two-dimensional tomographic image, to obtain a binary mask image of the erupting tooth corresponding to the starting two-dimensional tomographic image; and for each erupting tooth, taking the binary mask image corresponding to the previous two-dimensional tomographic image as range information, and using that range information and the local image segmentation model to segment the local image of the erupting tooth in the next two-dimensional tomographic image.
- The computer-implemented method for segmenting a computed tomography image of teeth further comprises: using a global image segmentation model and the local image classification model, based on the two-dimensional tomographic image sequence of the first jaw, extracting a mask image sequence of the crown portions of the erupting teeth of the first jaw, wherein the global image segmentation model is a trained deep neural network for segmenting two-dimensional tomographic images to extract global tooth mask images; and registering the first three-dimensional digital model with the mask image sequence of the crown portions of the erupting teeth of the first jaw to obtain the position and range information.
- The registering includes projecting the first three-dimensional digital model and the mask image sequence of the crown portions of the erupting teeth of the first jaw onto a first plane and registering the projections of the two.
- the first plane is parallel to the sequence of two-dimensional tomographic images of the first jaw.
- The registering further comprises projecting the first three-dimensional digital model and the mask image sequence of the crown portions of the erupting teeth of the first jaw onto the sagittal plane and registering the projections of the two, the result of which is used to guide the segmentation of the local images of each erupting tooth.
- The registering is registering the first three-dimensional digital model in three-dimensional space with the mask image sequence of the crown portions of the erupting teeth of the first jaw, to obtain the position and range information and to guide the segmentation of the local images of each erupting tooth.
- the partial image of the erupting tooth in the initial segmented two-dimensional tomographic image is located in the middle of the tooth.
- the partial image of the erupting tooth in the initial segmented two-dimensional tomographic image is located at the neck of the tooth.
- The computer-implemented method for segmenting a computed tomography image of teeth further comprises: using a global image segmentation model to extract a global tooth mask image sequence of the first jaw based on the two-dimensional tomographic image sequence of the first jaw, wherein the global image segmentation model is a deep neural network trained to segment two-dimensional tomographic images to extract global tooth mask images; using the local image classification model, deleting the masks of the root portions of all erupting teeth from the global tooth mask image sequence of the first jaw to obtain a second mask image sequence; generating a mask brightness curve based on the second mask image sequence, and determining, based on the brightness curve, the range in which the impacted teeth are located in the two-dimensional tomographic image sequence of the first jaw; for each impacted tooth, determining a two-dimensional tomographic image for initial segmentation within the range in which the impacted teeth are located; and for each impacted tooth, finally segmenting the local images of the impacted tooth using the local image segmentation model.
- The computer-implemented method for segmenting a computed tomography image of teeth further comprises: for each impacted tooth, pre-segmenting its local images within the range of two-dimensional tomographic images using the local image segmentation model, and determining the two-dimensional tomographic image of the initial segmentation based on the mask areas obtained by the pre-segmentation.
- the two-dimensional tomographic image corresponding to the largest area mask obtained by the pre-segmentation is used as the two-dimensional tomographic image of the initial segmentation.
- The computer-implemented method for segmenting a computed tomography image of teeth further comprises: if the distance between the center points of the largest-area masks of two impacted teeth is less than a first threshold, the impacted tooth corresponding to the smaller-area mask is not finally segmented using the local image segmentation model.
- The computer-implemented method for segmenting a computed tomography image of teeth further comprises: if the distance between the center point of the largest-area mask of an impacted tooth and the mask center point of the starting two-dimensional tomographic image of the nearest erupting tooth is greater than a second distance threshold, the impacted tooth is not finally segmented using the local image segmentation model.
- FIG. 1 is a schematic flowchart of a computer-implemented method for segmenting a computed tomography image of teeth in an embodiment of the present application;
- FIG. 2A shows a two-dimensional tomographic image classified as mandible in an example of the present application;
- FIG. 2B shows a two-dimensional tomographic image classified as maxilla in an example of the present application;
- FIG. 2C shows a two-dimensional tomographic image classified as teeth in an example of the present application;
- FIG. 2D shows a two-dimensional tomographic image classified as occlusal teeth in an example of the present application;
- FIG. 2E shows a two-dimensional tomographic image classified as open-jaw teeth in an example of the present application.
- Figure 3 shows the global tooth mask extracted by the global image segmentation model in an example of the present application.
- An aspect of the present application provides a method for segmenting a computed tomography image of a tooth, for extracting a tooth portion in a two-dimensional tomographic image sequence.
- Cone beam computed tomography can be used to obtain a two-dimensional tomographic image sequence of teeth. It can be understood that other computed tomography techniques can also be used to obtain such a sequence. The two-dimensional tomographic images of teeth are grayscale images.
- FIG. 1 is a schematic flowchart of a method 100 for segmenting a computed tomography image of a tooth according to an embodiment of the present application.
- The method 100 for segmenting a tooth computed tomography image is described below by taking a single jaw (i.e., the upper jaw or the lower jaw) as an example.
- a first three-dimensional digital model representing a crown of a first dental jaw and a sequence of two-dimensional tomographic images of the first dental jaw are acquired.
- The first three-dimensional digital model may be obtained by intraoral scanning, or by scanning a dental impression or a solid model (e.g., a plaster cast of the teeth); the first three-dimensional digital model may be segmented, that is, each crown is independent.
- A two-dimensional tomographic image sequence of the entire upper and lower jaws is obtained by computed tomography.
- the global image classification model can be used to divide it into two parts: the upper jaw and the lower jaw.
- the global image classification model is based on a deep convolutional neural network, which can be a Vgg network, an Inception network, or a Resnet network, etc.
- The global image classification model can classify the two-dimensional tomographic images into five categories: mandible, teeth, occlusal teeth, open-jaw teeth, and maxilla, i.e., assign a category to each two-dimensional tomographic image.
- Two-dimensional tomographic images containing only the mandible or maxilla are classified as mandible or maxilla; two-dimensional tomographic images containing both jaw and teeth are classified as teeth; two-dimensional tomographic images containing both maxillary and mandibular teeth are classified as occlusal teeth; and two-dimensional tomographic images containing the teeth of only a single jaw (e.g., maxillary or mandibular molars) are classified as open-jaw teeth (a sequence of two-dimensional tomographic images obtained by scanning a patient in an open-jaw state usually includes such images).
- FIG. 2A an example of a 2D tomographic image classified as a mandible is shown.
- FIG. 2B an example of a 2D tomographic image classified as the maxilla is shown.
- FIG. 2C an example of a 2D tomographic image classified as a tooth is shown.
- FIG. 2D there is shown an example of a 2D tomographic image of teeth classified as occlusal.
- FIG. 2E a 2D tomographic image of an example of a tooth classified as open jaw is shown.
- a mask of the crown portion is extracted based on the sequence of two-dimensional tomographic images of the first jaw.
- A global image segmentation model may be used to extract the masks of the tooth portions based on the two-dimensional tomographic image sequence of the first jaw, to obtain a global tooth mask image sequence of the first jaw. Each global tooth mask image contains the masks corresponding to all the teeth in the corresponding two-dimensional tomographic image; these tooth masks are taken as a whole, that is, they are not separated into individual teeth. The mask images are binary images.
- the global tooth segmentation model may be a trained deep convolutional neural network, eg, an FCN network, a UNet network, or a VNet network, or the like.
- the global tooth segmentation model classifies each pixel of the two-dimensional tomographic image to extract pixels belonging to teeth.
- Figure 3 shows the global tooth mask extracted by the global image segmentation model in an example.
- A local tooth classification model can be used to classify the local tooth image corresponding to each tooth mask connected domain in the binary tooth mask image sequence of the first jaw, remove the root masks, and retain the crown masks, to obtain the crown mask image sequence of the first jaw.
- the local tooth classification model may be a trained deep convolutional neural network, eg, a Vgg network, an Inception network, or a Resnet network, among others.
- the local tooth classification model can classify the tooth local images corresponding to each tooth mask connected domain into three categories: crown, root, and background.
- The mask of the root portion of each tooth can be deleted from the global tooth mask image sequence of the first jaw according to the results of the local classification.
- the first three-dimensional digital model and the sequence of crown mask images of the first jaw are projected and registered on a first plane.
- the first plane may be parallel to the two-dimensional tomographic image of the first jaw. In yet another embodiment, the first plane may be parallel to the xy plane of the coordinate system of the first three-dimensional digital model.
- The two-dimensional projection image of the first three-dimensional digital model on the first plane is denoted T1.
- The two-dimensional projection image of the crown mask image sequence of the first jaw on the first plane is denoted I1.
- the following method may be used to register T1 and I1 by translation along the first plane and rotation about a first axis perpendicular to the first plane.
- Rotate T1 by R degrees to obtain TR1, and perform template matching with TR1 as the template image and I1 as the source image.
- The matching coefficient matrix is computed by the normalized sum of squared differences method; the pixel position with the smallest value in the matching coefficient matrix is the best matching center point.
- R can take discrete values within a set range, for example, from -10 degrees to 10 degrees with a step of 2 degrees, giving 11 rotation angles.
- A matching coefficient value (i.e., the minimum value of the corresponding coefficient matrix) is computed for each of the resulting series of matching coefficient matrices; the smallest matching coefficient value is selected, and the rotation angle of the corresponding matching coefficient matrix is the best matching rotation angle, denoted R_tm1.
- The offset of the template image is given by the coordinate values min_loc_x1 and min_loc_y1 corresponding to the minimum matching coefficient value.
- R_tm1, min_loc_x1, and min_loc_y1 are the best matching transformation parameters, completing the registration on the first plane.
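The rotation-and-translation search just described amounts to exhaustive template matching under a normalized squared-difference score (in practice, cv2.matchTemplate with TM_SQDIFF_NORMED over a small set of rotated templates). The following is a minimal pure-NumPy sketch of the translation part, with the rotation loop noted in a comment; all names and the toy data are illustrative:

```python
import numpy as np

def match_translation(template, source):
    """Exhaustive normalized-SSD template matching (translation only).

    Returns (best_score, (row, col)) for the top-left corner in `source`
    where `template` matches best. A full implementation would wrap this
    in a loop over rotation angles R (e.g. -10..10 degrees, step 2) and
    keep the angle giving the smallest score, as the text describes.
    """
    th, tw = template.shape
    sh, sw = source.shape
    best_score, best_loc = np.inf, (0, 0)
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            win = source[r:r + th, c:c + tw]
            denom = np.sqrt((template ** 2).sum() * (win ** 2).sum())
            if denom == 0:
                continue  # empty window, no meaningful score
            score = ((template - win) ** 2).sum() / denom
            if score < best_score:
                best_score, best_loc = score, (r, c)
    return best_score, best_loc

# Toy example: a 2x2 bright block hidden in a 6x6 projection image.
source = np.zeros((6, 6))
source[3:5, 2:4] = 1.0
template = np.ones((2, 2))
score, loc = match_translation(template, source)
print(score, loc)  # perfect match (score 0.0) at offset (3, 2)
```

The returned offset plays the role of (min_loc_x1, min_loc_y1), and the outer rotation loop would yield R_tm1.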
- In addition to registering the first three-dimensional digital model and the crown mask image sequence of the first jaw in the first plane, the two can also be projected onto the sagittal plane and their projections registered, to align them in the z-axis direction (i.e., the tooth height direction).
- the registration results of the sagittal plane can be used to guide the classification of the partial tooth mask images (ie crowns, necks and roots) by the subsequent partial tooth classification model.
- the registration of the sagittal plane may be performed before or after the registration of the first plane.
- A second three-dimensional digital model may be generated based on the crown mask image sequence of the first jaw, and then the first three-dimensional digital model and the second three-dimensional digital model may be registered in three-dimensional space.
- the registration result includes both the alignment in the first plane and the alignment in the z-axis direction.
- a second sequence of crown mask images may be generated based on the first three-dimensional digital model and registered in three-dimensional space with the sequence of crown mask images of the first jaw. Similarly, the registration result includes both the alignment in the first plane and the alignment in the z-axis direction.
- position and range information of the erupting teeth is determined based on the result of the registration and the first three-dimensional digital model.
- The center point of each crown in the first three-dimensional digital model may be calculated (for example, by averaging the coordinates of the crown's vertices), and these center points may then be projected onto the first plane to obtain a projected image C1.
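Assuming the first plane is parallel to the xy plane of the model's coordinate system, the crown-center projection reduces to averaging each crown's vertices and dropping the z coordinate. A small illustrative sketch (function name and toy data are hypothetical):

```python
import numpy as np

def crown_center_projection(crown_vertex_lists):
    """Center of each crown = mean of its mesh vertices; projecting onto a
    plane parallel to xy simply drops the z coordinate."""
    centers = [np.asarray(v, dtype=float).mean(axis=0) for v in crown_vertex_lists]
    return np.array([c[:2] for c in centers])

# Two toy "crowns", three vertices each (x, y, z).
crowns = [[(0, 0, 10), (2, 0, 12), (1, 2, 11)],
          [(5, 5, 10), (7, 5, 12), (6, 7, 11)]]
proj = crown_center_projection(crowns)
print(proj)  # the projected image C1 would be rasterized from these points
```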
- A local tooth classification model and a local tooth segmentation model are used to re-segment the two-dimensional tomographic image sequence of the first jaw to obtain the mask sequences of the erupting teeth.
- A local tooth image is cropped within a preset range centered on the center point of the erupting tooth (for example, a square with a side length of 15 mm; it can be understood that this preset range can be adjusted according to different pixel spacings and physical sizes). Then, the local tooth classification model classifies these local tooth images as crown, root, or background; the position of the tomographic image adjacent to both root and crown, that is, the tomographic image where the tooth neck is located, can thus be obtained.
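The fixed-physical-size crop can be sketched as below; the 15 mm side length comes from the text, while the 0.25 mm pixel spacing, the names, and the border clamping are illustrative assumptions:

```python
import numpy as np

def crop_local_patch(image, center_xy, size_mm=15.0, pixel_spacing_mm=0.25):
    """Crop a square local tooth image centered on the tooth's projected
    center point. The side length in pixels is derived from the scan's
    pixel spacing, so the physical extent stays constant across scans
    with different resolutions."""
    half = int(round(size_mm / pixel_spacing_mm / 2))
    cx, cy = center_xy
    r0, r1 = max(cy - half, 0), min(cy + half, image.shape[0])
    c0, c1 = max(cx - half, 0), min(cx + half, image.shape[1])
    return image[r0:r1, c0:c1]

slice_img = np.zeros((400, 400), dtype=np.float32)
patch = crop_local_patch(slice_img, center_xy=(200, 200))
print(patch.shape)  # 15 mm at 0.25 mm per pixel -> 60 x 60 pixels
```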
- Segmentation can be performed in the directions of the tooth crown and the tooth root respectively, that is, the tomographic image where the tooth neck is located is segmented first to extract the mask of the erupting tooth, and then the next tomographic image is segmented using the current mask of the erupting tooth as a reference.
- Segmentation can also start from a tomographic image near the one where the tooth neck is located (that is, a tomographic image of the middle segment of the tooth), because the shape of the mask of the same tooth changes little between adjacent tomographic images.
- a tomographic image is selected as the starting point of segmentation, and the selected tomographic image is hereinafter referred to as the initial segmented tomographic image of the erupting tooth.
- The local image of the tooth in the initial segmentation tomographic image is segmented using the local tooth segmentation model to extract its tooth mask in the current tomographic image (i.e., the initial segmentation tomographic image).
- the local tooth segmentation model may be a trained deep convolutional neural network, eg, FCN network, UNet network, or VNet network, etc., for extracting tooth masks from tooth partial tomographic images.
- the partial tooth segmentation model classifies each pixel of the partial tomographic image to extract pixels belonging to teeth.
- The local tooth segmentation model may then be used to segment the tooth's local image in the next tomographic image (processed by the local tooth classification model) to extract its tooth mask in that image, and so on, until the masks of the erupting tooth in all the tomographic images are extracted.
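The crown-ward and root-ward propagation can be sketched as follows; simple thresholding inside a dilated copy of the previous slice's mask stands in for the trained local tooth segmentation model (a deliberate simplification), and the 4-neighbour dilation is a minimal stand-in for cv2.dilate:

```python
import numpy as np

def dilate(mask, it=1):
    """Minimal 4-neighbour binary dilation (stand-in for cv2.dilate)."""
    m = mask.copy()
    for _ in range(it):
        p = np.pad(m, 1)
        m = (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
             | p[1:-1, :-2] | p[1:-1, 2:]).astype(mask.dtype)
    return m

def propagate_masks(volume, start_idx, start_mask, threshold=0.5):
    """Propagate a tooth mask slice by slice from the starting slice.

    The previous slice's mask, slightly dilated, bounds the search region
    on the next slice; thresholding stands in for the trained local tooth
    segmentation model described in the text."""
    masks = {start_idx: start_mask}
    for step in (+1, -1):                 # toward the root, then the crown
        prev = start_mask
        i = start_idx + step
        while 0 <= i < volume.shape[0] and prev.any():
            roi = dilate(prev, it=2)
            cur = ((volume[i] > threshold) & (roi > 0)).astype(np.uint8)
            masks[i] = cur
            prev = cur
            i += step
    return masks

# Toy volume: a tooth-like column through five slices.
vol = np.zeros((5, 8, 8), dtype=np.float32)
vol[:, 3:5, 3:5] = 1.0
seed = (vol[2] > 0.5).astype(np.uint8)
out = propagate_masks(vol, 2, seed)
print(sorted(out))  # slice indices covered by the propagated masks
```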
- The accurate segmentation of the tomographic image sequence of the first jaw includes two steps: first, based on the position information, the local tomographic images are classified using the local tooth classification model to find, for each erupting tooth, the tomographic image where the tooth neck is located; second, based on the position and range information, the local tomographic images are segmented using the local tooth segmentation model. Thanks to the reference to the position and range information, the accuracy and efficiency of the local tooth classification model and the local tooth segmentation model are effectively improved.
- the first jaw may also include impacted teeth.
- these impacted teeth also need to be segmented.
- The mask sequences of the impacted teeth are also obtained, but the accuracy of the segmentation results for the impacted teeth may not be high, so the following operations can be performed to obtain more accurate segmentation results for the impacted teeth.
- the impacted tooth is detected using the tooth mask luminance curve.
- the abscissa of the luminance curve may be the position of the tomographic image, and the ordinate may be the total luminance of the tooth mask in each tomographic image, that is, the sum of the number of pixels or the sum of the pixel values of the tooth mask.
- The mandibular impacted teeth are located below the necks of the mandibular erupting teeth, and the maxillary impacted teeth are located above the necks of the maxillary erupting teeth.
- The global tooth mask sequence obtained by the previous segmentation is called the first mask sequence. Since high-accuracy local segmentation results of the erupting teeth have already been obtained, the root masks of all erupting teeth can be removed from the first mask sequence, retaining only the masks of the crowns of the erupting teeth, to obtain the second mask sequence.
- the tooth mask brightness curve generated based on the second mask sequence will include a main peak and a small peak, the main peak corresponds to the tomographic image index range where the erupting tooth crown is located, and the small peak corresponds to the tomographic image index range where the impacted tooth is located.
- The impacted teeth can be detected within the tomographic image index range corresponding to the small peak. Since the root masks of the erupting teeth have been deleted from the second tooth mask sequence, only the impacted teeth have masks within that range.
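The mask brightness curve and the detection of the impacted-tooth range from its secondary peak can be illustrated with a toy second mask sequence. Here the "peaks" are simply found as contiguous non-zero runs, a simplification of true peak detection; all data are synthetic:

```python
import numpy as np

def mask_brightness_curve(mask_seq):
    """Per-slice total mask brightness: here, the mask pixel count per slice."""
    return mask_seq.reshape(mask_seq.shape[0], -1).sum(axis=1)

def nonzero_runs(curve):
    """Contiguous index ranges where the curve is non-zero."""
    runs, start = [], None
    for i, v in enumerate(curve):
        if v > 0 and start is None:
            start = i
        elif v == 0 and start is not None:
            runs.append((start, i - 1))
            start = None
    if start is not None:
        runs.append((start, len(curve) - 1))
    return runs

# Toy second mask sequence: erupting-tooth crown masks on slices 2-6
# (the main peak) plus a small impacted-tooth mask on slice 9 (small peak).
seq = np.zeros((12, 10, 10), dtype=np.uint8)
seq[2:7, 2:8, 2:8] = 1
seq[9, 4:6, 4:6] = 1
curve = mask_brightness_curve(seq)
runs = nonzero_runs(curve)
main = max(runs, key=lambda r: curve[r[0]:r[1] + 1].sum())
impacted_ranges = [r for r in runs if r != main]
print(main, impacted_ranges)  # crown range and impacted-tooth range
```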
- There may be pseudo seed points among the seed points obtained at this point.
- these pseudo seed points can be screened out by the following method based on the distance discrimination criterion.
- The first distance threshold may be set to 3 mm. If the distance (in three-dimensional space) between the seed points of two impacted teeth is less than the first distance threshold, the two seed points are likely to belong to the same impacted tooth; therefore, the seed point with the smaller mask area is removed.
- The second distance threshold may be set to 15 mm. If the distance between the seed point of an impacted tooth and the nearest erupting-tooth seed point is greater than the second distance threshold, the seed point is likely to belong to the jawbone; therefore, the seed point of that impacted tooth is removed.
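The two distance rules for screening pseudo seed points might be sketched as follows; the 3 mm and 15 mm thresholds come from the description, while the data layout and function name are assumptions:

```python
import numpy as np

def filter_seed_points(impacted, erupted, d1_mm=3.0, d2_mm=15.0):
    """Screen pseudo seed points of impacted teeth.

    `impacted`: list of (point_xyz, mask_area); `erupted`: list of points.
    Rule 1: two impacted seeds closer than d1_mm -> drop the smaller-area one.
    Rule 2: an impacted seed farther than d2_mm from every erupting-tooth
    seed likely lies in the jawbone -> drop it."""
    kept = sorted(impacted, key=lambda s: -s[1])   # larger areas take priority
    result = []
    for pt, area in kept:
        if any(np.linalg.norm(np.subtract(pt, q[0])) < d1_mm for q in result):
            continue                               # rule 1: duplicate seed
        if erupted and min(np.linalg.norm(np.subtract(pt, e)) for e in erupted) > d2_mm:
            continue                               # rule 2: jawbone artifact
        result.append((pt, area))
    return result

erupted = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
impacted = [((5.0, 5.0, 0.0), 120),   # valid impacted-tooth seed
            ((5.0, 6.0, 0.0), 40),    # within 3 mm of the first -> dropped
            ((40.0, 0.0, 0.0), 90)]   # >15 mm from all erupted -> dropped
kept = filter_seed_points(impacted, erupted)
print([a for _, a in kept])  # only the area-120 seed survives
```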
- After the mask sequences of all teeth, including erupting and impacted teeth, are obtained, they can be optimized using methods such as adjacent-layer constraints, Gaussian smoothing, and watershed segmentation.
- The tooth mask adjacent-layer constraint algorithm can remove over-segmented regions from the tooth segmentation results.
- the specific operations are as follows:
- Let the slice index where the seed point is located be S; segment the local tooth image I_S of slice S to obtain the tooth mask image M_S of slice S;
- Take M_RS+1 as the mask image of slice S+1, repeat the above operation for slice S+2 to compute its mask image, and so on, until all the tooth masks are processed.
- Gaussian smoothing may be used to smooth the tooth mask image sequence processed by the adjacent layer constraint algorithm.
- the watershed segmentation algorithm can detect the boundaries of connected crowns, and remove the connected adjacent teeth from the segmentation results.
- the specific operations are as follows:
- The marker image Marker is generated from the tooth mask image M_i-1 of layer i-1 and the tooth mask image M_i of layer i according to equation (1), where Erode and Dilate denote the morphological erosion and dilation operations, respectively;
- the watershed segmentation algorithm is applied to the partial image of the i-th crown section to obtain the tooth boundary image B;
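Since the exact form of equation (1) is garbled in the source, the following is only a plausible reconstruction of the marker construction: the eroded previous-layer mask as sure foreground, the complement of the dilated current mask as sure background, everything else unknown (the marker would then seed a watershed such as cv2.watershed on the crown-layer image). The morphology helpers are minimal NumPy stand-ins:

```python
import numpy as np

def erode(mask, it=1):
    """Minimal 4-neighbour binary erosion (stand-in for cv2.erode)."""
    m = mask.copy()
    for _ in range(it):
        p = np.pad(m, 1, constant_values=1)
        m = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
             & p[1:-1, :-2] & p[1:-1, 2:]).astype(mask.dtype)
    return m

def dilate(mask, it=1):
    """Minimal 4-neighbour binary dilation (stand-in for cv2.dilate)."""
    m = mask.copy()
    for _ in range(it):
        p = np.pad(m, 1)
        m = (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
             | p[1:-1, :-2] | p[1:-1, 2:]).astype(mask.dtype)
    return m

def make_marker(prev_mask, cur_mask):
    """Hypothetical marker for watershed: label 2 = sure foreground
    (eroded previous-layer mask), label 1 = sure background (outside the
    dilated current mask), label 0 = unknown region to be flooded."""
    marker = np.zeros(cur_mask.shape, dtype=np.int32)
    marker[dilate(cur_mask, it=1) == 0] = 1       # sure background
    marker[erode(prev_mask, it=1) == 1] = 2       # sure foreground
    return marker

prev = np.zeros((7, 7), dtype=np.uint8); prev[2:5, 2:5] = 1
cur = np.zeros((7, 7), dtype=np.uint8);  cur[1:6, 1:6] = 1
mk = make_marker(prev, cur)
print(np.unique(mk))  # labels 0 (unknown), 1 (background), 2 (foreground)
```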
- Based on the mask sequences, overall three-dimensional digital models of these teeth (including crowns and roots) can be generated.
- The overall three-dimensional digital models of the teeth are very useful, because not only the relationships between the crowns but also those between the roots can be obtained.
- The various diagrams may illustrate exemplary architectures or other configurations of the disclosed methods and systems, which may be helpful in understanding the features and functionality that may be included in them. What is claimed is not limited to the exemplary architectures or configurations shown, and the desired features may be implemented in various alternative architectures and configurations. Additionally, with respect to the flowcharts, functional descriptions, and method claims, the order of blocks presented herein should not be limited to embodiments that perform the functions in the same order, unless the context clearly dictates otherwise.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Medical Informatics (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Radiology & Medical Imaging (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Quality & Reliability (AREA)
- Evolutionary Biology (AREA)
- Mathematical Physics (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Databases & Information Systems (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Image Analysis (AREA)
Abstract
Description
Claims (16)
- A computer-implemented method of segmenting computed tomography images of teeth, comprising: obtaining a first three-dimensional digital model representing the crowns of the erupted teeth of a first jaw and a sequence of two-dimensional tomographic images of the first jaw; using a partial-image classification model, based on the sequence of two-dimensional tomographic images of the first jaw, selecting from that sequence, for each erupted tooth, a starting two-dimensional tomographic image for segmentation, where the partial-image classification model is a trained deep neural network for classifying partial two-dimensional tomographic images into categories including crown and root; obtaining position and extent information for each erupted tooth based on the first three-dimensional digital model; and, for each erupted tooth, using the position and extent information and a partial-image segmentation model, segmenting the partial images of that erupted tooth from the corresponding starting two-dimensional tomographic image toward the crown and the root, to obtain its sequence of binary mask images.
- The computer-implemented method of segmenting computed tomography images of teeth of claim 1, wherein the first three-dimensional digital model is obtained by one of the following: intraoral scanning, or scanning a dental impression or a solid model.
- The computer-implemented method of segmenting computed tomography images of teeth of claim 1, wherein the two-dimensional tomographic images of the first jaw are obtained by cone beam computed tomography.
- The computer-implemented method of segmenting computed tomography images of teeth of claim 1, further comprising: for each erupted tooth, using the position and extent information and the partial-image segmentation model, segmenting the partial image of that erupted tooth in its starting two-dimensional tomographic image, to obtain the binary mask image of that erupted tooth corresponding to the starting two-dimensional tomographic image; and, for each erupted tooth, taking the binary mask image corresponding to the preceding two-dimensional tomographic image as the extent information, and using that extent information and the partial-image segmentation model to segment the partial image of that erupted tooth in the next two-dimensional tomographic image.
- The computer-implemented method of segmenting computed tomography images of teeth of claim 1, further comprising: using a global image segmentation model and the partial-image classification model, based on the sequence of two-dimensional tomographic images of the first jaw, extracting a sequence of mask images of the crown portions of the erupted teeth of the first jaw, where the global image segmentation model is a trained deep neural network for segmenting two-dimensional tomographic images to extract global tooth mask images; and registering the first three-dimensional digital model with the sequence of mask images of the crown portions of the erupted teeth of the first jaw, to obtain the position and extent information.
- The computer-implemented method of segmenting computed tomography images of teeth of claim 5, wherein the registration comprises projecting the first three-dimensional digital model and the sequence of mask images of the crown portions of the erupted teeth of the first jaw onto a first plane and registering the two projections.
- The computer-implemented method of segmenting computed tomography images of teeth of claim 6, wherein the first plane is parallel to the sequence of two-dimensional tomographic images of the first jaw.
- The computer-implemented method of segmenting computed tomography images of teeth of claim 7, wherein the registration further comprises projecting the first three-dimensional digital model and the sequence of mask images of the crown portions of the erupted teeth of the first jaw onto the sagittal plane and registering the two projections, the result of which is used to guide the partial-image segmentation of each erupted tooth.
- The computer-implemented method of segmenting computed tomography images of teeth of claim 5, wherein the registration registers, in three-dimensional space, the first three-dimensional digital model with the sequence of mask images of the crown portions of the erupted teeth of the first jaw, to obtain the position and extent information and to guide the partial-image segmentation of each erupted tooth.
- The computer-implemented method of segmenting computed tomography images of teeth of claim 1, wherein, for each erupted tooth, the partial image of that erupted tooth in the starting two-dimensional tomographic image is located in the middle section of the tooth.
- The computer-implemented method of segmenting computed tomography images of teeth of claim 10, wherein, for each erupted tooth, the partial image of that erupted tooth in the starting two-dimensional tomographic image is located at the tooth neck.
- The computer-implemented method of segmenting computed tomography images of teeth of claim 1, further comprising: using a global image segmentation model, based on the sequence of two-dimensional tomographic images of the first jaw, extracting a sequence of global tooth mask images of the first jaw, where the global image segmentation model is a trained deep neural network for segmenting two-dimensional tomographic images to extract global tooth mask images; using the partial-image classification model, deleting the masks of the root portions of all erupted teeth from the sequence of global tooth mask images of the first jaw, to obtain a second mask image sequence; generating a mask brightness curve based on the second mask image sequence, and determining, based on the brightness curve, the range within the sequence of two-dimensional tomographic images of the first jaw in which impacted teeth are located; for each impacted tooth, determining a starting two-dimensional tomographic image for segmentation within the range in which the impacted teeth are located; and, for each impacted tooth, using the partial-image segmentation model, performing final segmentation of the partial images of that impacted tooth from the corresponding starting two-dimensional tomographic image toward the crown and the root, to obtain its sequence of binary mask images.
- The computer-implemented method of segmenting computed tomography images of teeth of claim 12, further comprising: for each impacted tooth, using the partial-image segmentation model, pre-segmenting its partial images within the range of two-dimensional tomographic images, and determining its starting two-dimensional tomographic image based on the mask areas obtained by the pre-segmentation.
- The computer-implemented method of segmenting computed tomography images of teeth of claim 13, wherein, for each impacted tooth, the two-dimensional tomographic image corresponding to the largest-area mask obtained by the pre-segmentation is taken as the starting two-dimensional tomographic image.
- The computer-implemented method of segmenting computed tomography images of teeth of claim 14, further comprising: if the distance between the center points of the largest-area masks of two impacted teeth is smaller than a first threshold, then performing no final segmentation with the partial-image segmentation model for the impacted tooth corresponding to the smaller-area mask.
- The computer-implemented method of segmenting computed tomography images of teeth of claim 14, further comprising: if the distance between the center point of the largest-area mask of an impacted tooth and the center point of the mask of the nearest erupted tooth corresponding to its starting two-dimensional tomographic image is greater than a second distance threshold, then performing no final segmentation with the partial-image segmentation model for that impacted tooth.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22755520.8A EP4296944A1 (en) | 2021-02-18 | 2022-02-08 | Method for segmenting computed tomography image of teeth |
AU2022222097A AU2022222097A1 (en) | 2021-02-18 | 2022-02-08 | Method for segmenting computed tomography image of teeth |
US18/546,872 US20240127445A1 (en) | 2021-02-18 | 2022-02-08 | Method of segmenting computed tomography images of teeth |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110190754.8 | 2021-02-18 | ||
CN202110190754.8A CN114972360A (zh) | Method for segmenting computed tomography images of teeth |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022174747A1 true WO2022174747A1 (zh) | 2022-08-25 |
Family
ID=82931199
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/075513 WO2022174747A1 (zh) | Method for segmenting computed tomography images of teeth |
Country Status (5)
Country | Link |
---|---|
US (1) | US20240127445A1 (zh) |
EP (1) | EP4296944A1 (zh) |
CN (1) | CN114972360A (zh) |
AU (1) | AU2022222097A1 (zh) |
WO (1) | WO2022174747A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117437250B (zh) * | 2023-12-21 | 2024-04-02 | Stomatological Hospital of Tianjin Medical University | Deep-learning-based three-dimensional jaw image segmentation method and system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106806030A (zh) * | 2015-11-30 | 2017-06-09 | Peking University School of Stomatology | Crown-root three-dimensional model fusion method |
CN108932716A (zh) * | 2017-05-26 | 2018-12-04 | Wuxi EA Medical Instruments Technologies Co., Ltd. | Image segmentation method for tooth images |
US20200175678A1 (en) * | 2018-11-28 | 2020-06-04 | Orca Dental AI Ltd. | Dental image segmentation and registration with machine learning |
EP3673864A1 (en) * | 2018-12-28 | 2020-07-01 | Trophy | Tooth segmentation using tooth registration |
2021
- 2021-02-18 CN CN202110190754.8A patent/CN114972360A/zh active Pending
2022
- 2022-02-08 WO PCT/CN2022/075513 patent/WO2022174747A1/zh active Application Filing
- 2022-02-08 US US18/546,872 patent/US20240127445A1/en active Pending
- 2022-02-08 EP EP22755520.8A patent/EP4296944A1/en active Pending
- 2022-02-08 AU AU2022222097A patent/AU2022222097A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
CN114972360A (zh) | 2022-08-30 |
US20240127445A1 (en) | 2024-04-18 |
AU2022222097A1 (en) | 2023-10-05 |
EP4296944A1 (en) | 2023-12-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20200402647A1 | Dental image processing protocol for dental aligners | |
KR102273438B1 | Apparatus and method for automatic registration of oral scan data and computed tomography images using crown segmentation of oral scan data | |
CN110782974A | Method for predicting anatomical landmarks and device for predicting anatomical landmarks using the method | |
US20060127854A1 | Image based dentition record digitization | |
CN112120810A | Three-dimensional data generation method for an invisible orthodontic aligner | |
GB2440267A | Combining generic and patient tooth models to produce complete tooth model | |
JP2010524529A | Computer-aided creation of a custom tooth setup using facial analysis | |
KR102373500B1 | Method and device for automatically detecting feature points in three-dimensional medical image data using deep learning | |
KR102302587B1 | Method for determining the registration accuracy of a three-dimensional dental CT image and a three-dimensional digital impression model, and computer-readable recording medium storing a program for executing the method on a computer | |
CN113052902B | Tooth treatment monitoring method | |
US20230206451A1 | Method for automatic segmentation of a dental arch | |
WO2022174747A1 | Method for segmenting computed tomography images of teeth | |
CN101950430A | Three-dimensional tooth reconstruction method based on curved-surface tomographic slices | |
WO2024046400A1 | Tooth model generation method and apparatus, electronic device, and storage medium | |
WO2021147333A1 | Method for generating images of orthodontic treatment outcomes using an artificial neural network | |
WO2020181973A1 | Method and computer system for determining the occlusal relationship between upper and lower jaw teeth | |
KR102215068B1 | Apparatus and method for image registration for implant diagnosis | |
CN112807108B | Method for detecting tooth alignment status during orthodontic treatment | |
KR102302249B1 | Automatic three-dimensional cephalometry apparatus and method using image processing and CNN | |
US20220358740A1 | System and Method for Alignment of Volumetric and Surface Scan Images | |
CN113139908B | Three-dimensional dentition segmentation and annotation method | |
CN107564094A | Automatic tooth model feature point recognition algorithm based on local coordinates | |
WO2024088359A1 | Method for detecting morphological differences between three-dimensional digital tooth models | |
EP4307229A1 | Method and system for tooth pose estimation | |
KR102502588B1 | Occlusion alignment method and occlusion alignment apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22755520 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18546872 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2022222097 Country of ref document: AU Ref document number: AU2022222097 Country of ref document: AU Ref document number: 2022755520 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2022222097 Country of ref document: AU Date of ref document: 20220208 Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2022755520 Country of ref document: EP Effective date: 20230918 |