WO2022174747A1 - Method for segmenting computed tomography images of teeth - Google Patents

Method for segmenting computed tomography images of teeth

Info

Publication number
WO2022174747A1
WO2022174747A1 PCT/CN2022/075513 CN2022075513W WO2022174747A1 WO 2022174747 A1 WO2022174747 A1 WO 2022174747A1 CN 2022075513 W CN2022075513 W CN 2022075513W WO 2022174747 A1 WO2022174747 A1 WO 2022174747A1
Authority
WO
WIPO (PCT)
Prior art keywords
tooth
image
mask
dimensional
sequence
Prior art date
Application number
PCT/CN2022/075513
Other languages
English (en)
French (fr)
Inventor
田媛
冯洋
汪葛
Original Assignee
无锡时代天使医疗器械科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 无锡时代天使医疗器械科技有限公司
Priority to EP22755520.8A (published as EP4296944A1)
Priority to AU2022222097A (published as AU2022222097A1)
Priority to US18/546,872 (published as US20240127445A1)
Publication of WO2022174747A1


Classifications

    • G06T 7/10 — Image analysis: Segmentation; Edge detection
    • G06V 10/26 — Image preprocessing: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06T 7/0012 — Image analysis: Biomedical image inspection
    • G06F 18/241 — Pattern recognition: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 3/08 — Neural networks: Learning methods
    • G06T 7/11 — Segmentation: Region-based segmentation
    • G06V 10/34 — Image preprocessing: Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
    • G06V 10/82 — Image or video recognition or understanding using pattern recognition or machine learning: using neural networks
    • G06V 20/64 — Scenes; Scene-specific elements: Three-dimensional objects
    • G06N 3/0464 — Neural network architectures: Convolutional networks [CNN, ConvNet]
    • G06T 2207/10081 — Image acquisition modality: Computed x-ray tomography [CT]
    • G06T 2207/20081 — Special algorithmic details: Training; Learning
    • G06T 2207/20084 — Special algorithmic details: Artificial neural networks [ANN]
    • G06T 2207/30036 — Subject of image: Dental; Teeth
    • G06V 2201/03 — Recognition of patterns in medical or anatomical images

Definitions

  • the present application generally relates to methods for segmenting computed tomography images of teeth.
  • a high-precision three-dimensional digital model of the crowns can be obtained by intraoral scanning, or by scanning an impression or a solid model of the teeth, but these methods cannot obtain information about the roots.
  • computed tomography can be used to obtain a two-dimensional tomographic image sequence of whole teeth (crowns and roots), from which a three-dimensional digital model of the whole teeth can be generated. Since the two-dimensional tomographic images obtained by computed tomography include not only the teeth but also the jawbone, the sequence needs to be segmented to remove the jawbone and retain the teeth before a three-dimensional digital model of the whole teeth is built.
  • One aspect of the present application provides a computer-implemented method for segmenting computed tomography images of teeth, comprising: acquiring a first three-dimensional digital model representing the crowns of the erupted teeth of a first jaw and a two-dimensional tomographic image sequence of the first jaw; using a local image classification model, based on the two-dimensional tomographic image sequence of the first jaw, selecting for each erupted tooth an initial two-dimensional tomographic image at which segmentation starts, the local image classification model being a trained deep neural network for classifying local two-dimensional tomographic images into categories including crown and root; obtaining position and range information for each erupted tooth based on the first three-dimensional digital model; and for each erupted tooth, using the position and range information and a local image segmentation model, segmenting the local images of the erupted tooth from the corresponding initial two-dimensional tomographic image toward the crown and the root, to obtain its sequence of binary mask images.
  • the first three-dimensional digital model is obtained by one of the following methods: intraoral scanning or scanning a dental impression or solid model.
  • the two-dimensional tomographic image of the first jaw is obtained by cone beam computed tomography.
  • the computer-implemented method for segmenting computed tomography images of teeth further comprises: for each erupted tooth, using the position and range information and the local image segmentation model, segmenting the local image of the erupted tooth in its initial two-dimensional tomographic image, to obtain the binary mask image of the erupted tooth corresponding to the initial two-dimensional tomographic image; and for each erupted tooth, taking the binary mask image corresponding to the previous two-dimensional tomographic image as range information and using this range information, with the local image segmentation model, to segment the local image of the erupted tooth in the next two-dimensional tomographic image.
  • the computer-implemented method for segmenting computed tomography images of teeth further comprises: using a global image segmentation model and the local image classification model, based on the two-dimensional tomographic image sequence of the first jaw, extracting a mask image sequence of the crown portions of the erupted teeth of the first jaw, wherein the global image segmentation model is a trained deep neural network for segmenting two-dimensional tomographic images to extract global tooth mask images; and registering the first three-dimensional digital model with the mask image sequence of the crown portions of the erupted teeth of the first jaw, to obtain the position and range information.
  • the registering includes projecting the first three-dimensional digital model and the mask image sequence of the crown portions of the erupted teeth of the first jaw onto a first plane and registering the two projections.
  • the first plane is parallel to the sequence of two-dimensional tomographic images of the first jaw.
  • the registering further comprises projecting the first three-dimensional digital model and the mask image sequence of the crown portions of the erupted teeth of the first jaw onto the sagittal plane and registering the two projections, the result of which is used to guide the local image segmentation of each erupted tooth.
  • the registering registers, in three-dimensional space, the first three-dimensional digital model with the mask image sequence of the crown portions of the erupted teeth of the first jaw, to obtain the position and range information and to guide the local image segmentation of each erupted tooth.
  • the local image of the erupted tooth in its initial two-dimensional tomographic image lies in the middle section of the tooth.
  • the local image of the erupted tooth in its initial two-dimensional tomographic image lies at the tooth neck.
  • the computer-implemented method for segmenting computed tomography images of teeth further comprises: using a global image segmentation model, based on the two-dimensional tomographic image sequence of the first jaw, extracting a global tooth mask image sequence of the first jaw, wherein the global image segmentation model is a trained deep neural network for segmenting two-dimensional tomographic images to extract global tooth mask images; using the local image classification model, deleting the masks of the root portions of all erupted teeth from the global tooth mask image sequence of the first jaw, to obtain a second mask image sequence; generating a mask brightness curve based on the second mask image sequence, and determining, based on the brightness curve, the range of the two-dimensional tomographic image sequence of the first jaw in which the impacted teeth are located; for each impacted tooth, determining an initial two-dimensional tomographic image within that range; and for each impacted tooth, using the local image segmentation model, finally segmenting the local images of the impacted tooth from the corresponding initial two-dimensional tomographic image toward the crown and the root, to obtain its sequence of binary mask images.
  • the computer-implemented method for segmenting computed tomography images of teeth further comprises: for each impacted tooth, using the local image segmentation model, pre-segmenting its local images within the range of two-dimensional tomographic images, and determining its initial two-dimensional tomographic image based on the mask areas obtained by the pre-segmentation.
  • the two-dimensional tomographic image corresponding to the largest-area mask obtained by the pre-segmentation is used as the initial two-dimensional tomographic image.
  • the computer-implemented method for segmenting computed tomography images of teeth further comprises: if the distance between the center points of the largest-area masks of two impacted teeth is less than a first threshold, the impacted tooth corresponding to the smaller mask is not finally segmented with the local image segmentation model.
  • the computer-implemented method for segmenting computed tomography images of teeth further comprises: if the distance between the center point of the largest-area mask of an impacted tooth and the center point of the mask of the nearest erupted tooth in its initial segmentation two-dimensional tomographic image is greater than a second distance threshold, the impacted tooth is not finally segmented with the local image segmentation model.
  • FIG. 1 is a schematic flowchart of a computer-implemented method for segmenting computed tomography images of teeth in an embodiment of the application;
  • FIG. 2A shows a two-dimensional tomographic image classified as mandible in an example of the present application;
  • FIG. 2B shows a two-dimensional tomographic image classified as maxilla in an example of the present application;
  • FIG. 2C shows a two-dimensional tomographic image classified as teeth in an example of the present application;
  • FIG. 2D shows a two-dimensional tomographic image classified as occluded teeth in an example of the present application;
  • FIG. 2E shows a two-dimensional tomographic image classified as open-jaw teeth in an example of the present application.
  • FIG. 3 shows the global tooth mask extracted by the global image segmentation model in an example of the present application.
  • An aspect of the present application provides a method for segmenting a computed tomography image of a tooth, for extracting a tooth portion in a two-dimensional tomographic image sequence.
  • cone beam computed tomography (CBCT) can be used to obtain a two-dimensional tomographic image sequence of the teeth; it can be understood that other computed tomography techniques can also be used. The two-dimensional tomographic images of the teeth are grayscale images.
  • FIG. 1 is a schematic flowchart of a method 100 for segmenting a computed tomography image of a tooth according to an embodiment of the present application.
  • the method 100 for segmenting computed tomography images of teeth is described below taking a single jaw (i.e., the upper or the lower jaw) as an example.
  • a first three-dimensional digital model representing the crowns of a first jaw and a sequence of two-dimensional tomographic images of the first jaw are acquired.
  • the first three-dimensional digital model may be obtained by intraoral scanning, or by scanning a dental impression or a solid model (e.g., a plaster model of the teeth); the first three-dimensional digital model may be segmented, that is, each of the crowns in it is independent.
  • a two-dimensional tomographic image sequence of the upper and lower jaws together is usually obtained by computed tomography.
  • the global image classification model can be used to divide it into two parts: the upper jaw and the lower jaw.
  • the global image classification model is based on a deep convolutional neural network, which can be a VGG, Inception, or ResNet network, among others.
  • the global image classification model can classify the two-dimensional tomographic images into five categories: mandible, teeth, occluded teeth, open-jaw teeth, and maxilla, i.e., assign one category to each two-dimensional tomographic image.
  • two-dimensional tomographic images containing only the mandible or the maxilla are classified as mandible or maxilla; images containing both jawbone and teeth are classified as teeth; images containing both maxillary and mandibular teeth are classified as occluded teeth; and images containing molars of only a single jaw (e.g., the upper or the lower) are classified as open-jaw teeth (two-dimensional tomographic image sequences obtained by scanning a patient with the jaws open usually include such images).
  • FIG. 2A shows an example of a two-dimensional tomographic image classified as mandible.
  • FIG. 2B shows an example of a two-dimensional tomographic image classified as maxilla.
  • FIG. 2C shows an example of a two-dimensional tomographic image classified as teeth.
  • FIG. 2D shows an example of a two-dimensional tomographic image classified as occluded teeth.
  • FIG. 2E shows an example of a two-dimensional tomographic image classified as open-jaw teeth.
  • a mask of the crown portion is extracted based on the sequence of two-dimensional tomographic images of the first jaw.
  • a global image segmentation model may be used to extract the masks of the tooth portions based on the two-dimensional tomographic image sequence of the first jaw, to obtain a global tooth mask image sequence of the first jaw; each global tooth mask image contains the masks corresponding to all teeth in its two-dimensional tomographic image, and these tooth masks are taken as a whole, i.e., they are not separated into individual teeth, the mask images being binary images.
  • the global tooth segmentation model may be a trained deep convolutional neural network, e.g., an FCN, UNet, or VNet network.
  • the global tooth segmentation model classifies each pixel of the two-dimensional tomographic image to extract the pixels belonging to teeth.
  • Figure 3 shows the global tooth mask extracted by the global image segmentation model in an example.
  • a local tooth classification model can be used to classify the local tooth image corresponding to each tooth-mask connected component in the binary tooth mask image sequence of the first jaw, removing root masks and retaining crown masks, to obtain the crown mask image sequence of the first jaw.
  • the local tooth classification model may be a trained deep convolutional neural network, e.g., a VGG, Inception, or ResNet network, among others.
  • the local tooth classification model can classify the local tooth image corresponding to each tooth-mask connected component into three categories: crown, root, and background.
  • since root masks are generally free of adhesion between neighboring teeth, the mask of each tooth's root portion can be deleted from the global tooth mask image sequence of the first jaw according to the results of the local classification.
  • the first three-dimensional digital model and the sequence of crown mask images of the first jaw are projected and registered on a first plane.
  • the first plane may be parallel to the two-dimensional tomographic image of the first jaw. In yet another embodiment, the first plane may be parallel to the xy plane of the coordinate system of the first three-dimensional digital model.
  • the two-dimensional projection image of the first three-dimensional digital model on the first plane is denoted T1, and the two-dimensional projection image of the crown mask image sequence of the first jaw on the first plane is denoted I1.
  • the following method may be used to register T1 and I1 by translation along the first plane and rotation about a first axis perpendicular to the first plane.
  • first, T1 is rotated by R degrees to obtain TR1; TR1 is used as the template image and I1 as the source image to perform template matching.
  • the matching coefficient matrix is calculated by the normalized sum-of-squared-differences method, and the pixel position with the smallest value in the matching coefficient matrix is the best matching center point.
  • R can take discrete values within a set range, for example 11 rotation angles from -10 degrees to 10 degrees with a step size of 2 degrees.
  • a matching coefficient value (that is, the minimum of the corresponding coefficient matrix) is calculated for each of the resulting matching coefficient matrices, and the smallest matching coefficient value is selected; the rotation angle of the corresponding matching coefficient matrix is the best matching rotation angle, denoted Rtm1.
  • the offset of the template image is given by the coordinates min_loc_x1 and min_loc_y1 corresponding to the minimum matching coefficient value.
  • Rtm1, min_loc_x1, and min_loc_y1 are the best matching transformation parameters, which completes the registration on the first plane.
  • in addition to registering the first three-dimensional digital model and the crown mask image sequence of the first jaw in the first plane, the two can also be projected onto the sagittal plane and their projections registered, to align the two in the z-axis direction (i.e., the tooth height direction).
  • the sagittal registration results can be used to guide the subsequent classification of local tooth mask images by the local tooth classification model (i.e., into crown, neck, and root).
  • the sagittal registration may be performed before or after the registration in the first plane.
  • a second three-dimensional digital model may be generated based on the crown mask image sequence of the first jaw, and the first three-dimensional digital model and the second three-dimensional digital model may then be registered in three-dimensional space.
  • the registration result includes both the alignment in the first plane and the alignment in the z-axis direction.
  • a second sequence of crown mask images may be generated based on the first three-dimensional digital model and registered in three-dimensional space with the sequence of crown mask images of the first jaw. Similarly, the registration result includes both the alignment in the first plane and the alignment in the z-axis direction.
  • the position and range information of the erupted teeth is determined based on the result of the registration and the first three-dimensional digital model.
  • the center point of each crown in the first three-dimensional digital model may be calculated (for example, as the average of the coordinates of the crown's vertices), and these center points may then be projected onto the first plane to obtain a projection image C1.
  • based on the position and range information, a local tooth classification model and a local tooth segmentation model are used to re-segment the two-dimensional tomographic image sequence of the first jaw, to obtain the mask sequences of the erupted teeth.
  • local tooth images are cropped centered on the center point of each erupted tooth, with a preset range (for example, a square with 15 mm sides; it can be understood that this preset range can be adjusted for different pixel spacings and physical sizes). Then, the local tooth classification model classifies these local tooth images as crown, root, or background, which yields the position of the tomographic image where root and crown adjoin, that is, the position of the tomographic image containing the tooth neck.
  • segmentation can proceed in the directions of the crown and the root respectively: the tomographic image containing the tooth neck is segmented first to extract the mask of the erupted tooth, and the next tomographic image is then segmented using the tooth's current mask as a reference.
  • segmentation can also start from a tomographic image near the one containing the tooth neck (that is, a tomographic image of the middle section of the tooth), because the shape of the same tooth's mask changes little between adjacent tomographic images.
  • a tomographic image is selected as the starting point of segmentation; the selected tomographic image is hereinafter referred to as the initial segmentation tomographic image of the erupted tooth.
  • the local tomographic image in the initial tomographic image is segmented using the local tooth segmentation model to extract the tooth's mask in the current tomographic image (i.e., the initial tomographic image).
  • the local tooth segmentation model may be a trained deep convolutional neural network, e.g., an FCN, UNet, or VNet network, for extracting tooth masks from local tooth tomographic images.
  • the local tooth segmentation model classifies each pixel of the local tomographic image to extract the pixels belonging to teeth.
  • a local tooth segmentation model may be applied to the tooth's local image in the next tomographic image (obtained via the local tooth classification model) to extract its tooth mask in that image, and so on, until the mask of the erupted tooth has been extracted in all the tomographic images.
  • the accurate segmentation of the tomographic image sequence of the first jaw includes two steps: first, based on the position information, the local tooth classification model is used to classify local tomographic images, to find the tomographic image containing each erupted tooth's neck; second, based on the position and range information, the local tooth segmentation model is used to segment the local tomographic images. Because the position and range information is referenced, the accuracy and efficiency of the local tooth classification model and the local tooth segmentation model are effectively improved.
  • the first jaw may also include impacted teeth.
  • these impacted teeth also need to be segmented.
  • the mask sequences of the impacted teeth are also obtained, but the accuracy of these segmentation results for the impacted teeth may not be high, so the following operations can be performed to obtain more accurate segmentation results for the impacted teeth.
  • the impacted teeth are detected using the tooth mask brightness curve.
  • the abscissa of the brightness curve may be the position of the tomographic image, and the ordinate may be the total brightness of the tooth masks in each tomographic image, that is, the total pixel count or total pixel value of the tooth masks.
  • the mandibular impacted teeth are located below the necks of the mandibular erupted teeth, and the maxillary impacted teeth are located above the necks of the maxillary erupted teeth.
  • the global tooth mask sequence obtained by the earlier segmentation is called the first mask sequence; since accurate local segmentation results for the erupted teeth have already been obtained, the root masks of all erupted teeth can be removed from the first mask sequence, keeping only the crown masks of the erupted teeth, to obtain the second mask sequence.
  • the tooth mask brightness curve generated based on the second mask sequence will include a main peak and a small peak; the main peak corresponds to the tomographic image index range of the erupted crowns, and the small peak corresponds to the tomographic image index range of the impacted teeth.
  • the impacted teeth can be detected within the tomographic image index range corresponding to the small peak; since the root masks of the erupted teeth have been deleted from the second tooth mask sequence, only impacted-tooth masks remain within that range of the second tooth mask sequence.
  • there may be pseudo seed points among the seed points obtained at this point.
  • these pseudo seed points can be screened out by the following method based on the distance discrimination criterion.
  • the first distance threshold may be set to 3 mm; if the distance (in three-dimensional space) between the seed points of two impacted teeth is less than the first distance threshold, the two seed points most likely belong to the same impacted tooth, and therefore the seed point with the smaller mask area is removed.
  • the second distance threshold may be set to 15 mm; if the distance between an impacted tooth's seed point and the nearest erupted tooth's seed point is greater than the second distance threshold, the seed point most likely belongs to the jawbone, and therefore that impacted-tooth seed point is removed.
  • after the mask sequences of all teeth, including erupted and impacted teeth, are obtained, they can be optimized using methods such as adjacent-layer constraints, Gaussian smoothing, and watershed segmentation.
  • the tooth-mask adjacent-layer constraint algorithm can remove over-segmented regions from the tooth segmentation results.
  • the specific operations are as follows:
  • let S be the slice index where the seed point lies; the local tooth image I_S of slice S is segmented to obtain the tooth mask image M_S of slice S;
  • M_RS+1 is used as the mask image of slice S+1, and the above operations are repeated for slice S+2 to compute its mask image, and so on, until all tooth masks have been processed.
  • Gaussian smoothing may be used to smooth the tooth mask image sequence processed by the adjacent layer constraint algorithm.
  • the watershed segmentation algorithm can detect the boundaries of connected crowns, and remove the connected adjacent teeth from the segmentation results.
  • the specific operations are as follows:
  • the marker image Marker is generated from the tooth mask image M_i-1 of layer i-1 and M_io of layer i according to the following Equation (1): Marker = Erode((M_io − Dilate(M_i-1)) | M_io),
  • where Erode and Dilate are the morphological erosion and dilation operations, respectively, and "|" is the OR operation;
  • using Marker as the marker image, the watershed segmentation algorithm is applied to the local crown tomographic image of layer i to obtain the tooth boundary image B;
  • after the mask sequences of all teeth of the first jaw are obtained, an overall three-dimensional digital model of these teeth (including crowns and roots) can be generated based on them.
  • for some dental treatments, the overall three-dimensional digital model of the teeth is very useful, because not only the relationships between the crowns but also those between the roots can be obtained.
  • the various diagrams may illustrate exemplary architectures or other configurations of the disclosed methods and systems, which may be helpful in understanding the features and functionality that may be included in them. What is claimed is not limited to the exemplary architectures or configurations shown; the desired features may be implemented in various alternative architectures and configurations. Additionally, with respect to the flowcharts, functional descriptions, and method claims, the order of blocks presented herein should not be taken to limit the various embodiments to implementations that perform the functions in the same order, unless the context clearly dictates otherwise.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Quality & Reliability (AREA)
  • Evolutionary Biology (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

One aspect of the present application provides a computer-implemented method for segmenting computed tomography images of teeth, comprising: acquiring a first three-dimensional digital model representing the crowns of the erupted teeth of a first jaw and a two-dimensional tomographic image sequence of the first jaw; using a local image classification model, based on the two-dimensional tomographic image sequence of the first jaw, selecting for each erupted tooth an initial two-dimensional tomographic image at which segmentation starts, the local image classification model being a trained deep neural network for classifying local two-dimensional tomographic images into categories including crown and root; obtaining position and range information for each erupted tooth based on the first three-dimensional digital model; and for each erupted tooth, using the position and range information and a local image segmentation model, segmenting the local images of the erupted tooth from the corresponding initial two-dimensional tomographic image toward the crown and the root, to obtain its sequence of binary mask images.

Description

Method for segmenting computed tomography images of teeth
Technical Field
The present application generally relates to methods for segmenting computed tomography images of teeth.
Background
With the continuous development of computer science, dental professionals increasingly rely on computer technology to improve the efficiency of dental diagnosis and treatment, and three-dimensional digital models of teeth are frequently used in computer-aided dentistry.
At present, a high-precision three-dimensional digital model of the crowns can be obtained by intraoral scanning, or by scanning an impression or a solid model of the teeth, but these methods cannot capture information about the roots.
For treatments that require root information (e.g., orthodontic treatment), computed tomography (CT) can be used to obtain a two-dimensional tomographic image sequence of whole teeth (crowns and roots), from which a three-dimensional digital model of the whole teeth can be generated. Since the two-dimensional tomographic images obtained by computed tomography contain not only the teeth but also the jawbone, the image sequence must be segmented to remove the jawbone and retain the teeth before the three-dimensional digital model of the whole teeth is built.
In view of the above, it is necessary to provide a method for segmenting computed tomography images of teeth.
Summary
One aspect of the present application provides a computer-implemented method for segmenting computed tomography images of teeth, comprising: acquiring a first three-dimensional digital model representing the crowns of the erupted teeth of a first jaw and a two-dimensional tomographic image sequence of the first jaw; using a local image classification model, based on the two-dimensional tomographic image sequence of the first jaw, selecting for each erupted tooth an initial two-dimensional tomographic image at which segmentation starts, the local image classification model being a trained deep neural network for classifying local two-dimensional tomographic images into categories including crown and root; obtaining position and range information for each erupted tooth based on the first three-dimensional digital model; and for each erupted tooth, using the position and range information and a local image segmentation model, segmenting the local images of the erupted tooth from the corresponding initial two-dimensional tomographic image toward the crown and the root, to obtain its sequence of binary mask images.
In some embodiments, the first three-dimensional digital model is obtained by one of the following: intraoral scanning, or scanning a dental impression or a solid model.
In some embodiments, the two-dimensional tomographic images of the first jaw are obtained by cone beam computed tomography.
In some embodiments, the method further comprises: for each erupted tooth, using the position and range information and the local image segmentation model, segmenting the local image of the erupted tooth in its initial two-dimensional tomographic image, to obtain the binary mask image of the erupted tooth corresponding to the initial two-dimensional tomographic image; and for each erupted tooth, taking the binary mask image corresponding to the previous two-dimensional tomographic image as range information and using this range information, with the local image segmentation model, to segment the local image of the erupted tooth in the next two-dimensional tomographic image.
In some embodiments, the method further comprises: using a global image segmentation model together with the local image classification model, based on the two-dimensional tomographic image sequence of the first jaw, extracting a mask image sequence of the crown portions of the erupted teeth of the first jaw, wherein the global image segmentation model is a trained deep neural network for segmenting two-dimensional tomographic images to extract global tooth mask images; and registering the first three-dimensional digital model with the mask image sequence of the crown portions of the erupted teeth of the first jaw, to obtain the position and range information.
In some embodiments, the registration comprises projecting the first three-dimensional digital model and the mask image sequence of the crown portions of the erupted teeth of the first jaw onto a first plane and registering the two projections.
In some embodiments, the first plane is parallel to the two-dimensional tomographic image sequence of the first jaw.
In some embodiments, the registration further comprises projecting the first three-dimensional digital model and the mask image sequence of the crown portions of the erupted teeth of the first jaw onto the sagittal plane and registering the two projections; the result of this registration is used to guide the local image segmentation of each erupted tooth.
In some embodiments, the registration registers, in three-dimensional space, the first three-dimensional digital model with the mask image sequence of the crown portions of the erupted teeth of the first jaw, to obtain the position and range information and to guide the local image segmentation of each erupted tooth.
In some embodiments, for each erupted tooth, the local image of the erupted tooth in the initial two-dimensional tomographic image lies in the middle section of the tooth.
In some embodiments, for each erupted tooth, the local image of the erupted tooth in the initial two-dimensional tomographic image lies at the tooth neck.
In some embodiments, the method further comprises: using a global image segmentation model, based on the two-dimensional tomographic image sequence of the first jaw, extracting a global tooth mask image sequence of the first jaw, wherein the global image segmentation model is a trained deep neural network for segmenting two-dimensional tomographic images to extract global tooth mask images; using the local image classification model, deleting the masks of the root portions of all erupted teeth from the global tooth mask image sequence of the first jaw, to obtain a second mask image sequence; generating a mask brightness curve based on the second mask image sequence, and determining, based on the brightness curve, the range of the two-dimensional tomographic image sequence of the first jaw in which the impacted teeth are located; for each impacted tooth, determining an initial two-dimensional tomographic image within that range; and for each impacted tooth, using the local image segmentation model, finally segmenting the local images of the impacted tooth from the corresponding initial two-dimensional tomographic image toward the crown and the root, to obtain its sequence of binary mask images.
In some embodiments, the method further comprises: for each impacted tooth, using the local image segmentation model, pre-segmenting its local images within the range of two-dimensional tomographic images, and determining its initial two-dimensional tomographic image based on the mask areas obtained by the pre-segmentation.
In some embodiments, for each impacted tooth, the two-dimensional tomographic image corresponding to the largest-area mask obtained by the pre-segmentation is taken as the initial two-dimensional tomographic image.
In some embodiments, the method further comprises: if the distance between the center points of the largest-area masks of two impacted teeth is less than a first threshold, the impacted tooth corresponding to the smaller mask is not finally segmented with the local image segmentation model.
In some embodiments, the method further comprises: if the distance between the center point of the largest-area mask of an impacted tooth and the center point of the mask of the nearest erupted tooth in that tooth's initial segmentation two-dimensional tomographic image is greater than a second distance threshold, the impacted tooth is not finally segmented with the local image segmentation model.
Brief Description of the Drawings
The above and other features of the present application are further described below in combination with the accompanying drawings and their detailed description. It should be understood that these drawings show only several exemplary embodiments of the present application and therefore should not be considered limiting of its scope. Unless otherwise indicated, the drawings are not necessarily to scale, and like reference numerals denote like parts.
FIG. 1 is a schematic flowchart of a computer-implemented method for segmenting computed tomography images of teeth in one embodiment of the present application;
FIG. 2A shows a two-dimensional tomographic image classified as mandible in one example of the present application;
FIG. 2B shows a two-dimensional tomographic image classified as maxilla in one example of the present application;
FIG. 2C shows a two-dimensional tomographic image classified as teeth in one example of the present application;
FIG. 2D shows a two-dimensional tomographic image classified as occluded teeth in one example of the present application;
FIG. 2E shows a two-dimensional tomographic image classified as open-jaw teeth in one example of the present application; and
FIG. 3 shows a global tooth mask extracted by the global image segmentation model in one example of the present application.
Detailed Description
The following detailed description refers to the accompanying drawings, which form a part of this specification. The illustrative embodiments mentioned in the specification and drawings are for illustration only and are not intended to limit the scope of the present application. Guided by this disclosure, those skilled in the art will understand that many other embodiments may be adopted, and that various changes may be made to the described embodiments without departing from the spirit and scope of the present application. It should be understood that the aspects of the present application described and illustrated herein can be arranged, substituted, combined, separated, and designed in many different configurations, all of which fall within the scope of the present application.
One aspect of the present application provides a method for segmenting computed tomography images of teeth, used to extract the tooth portions of a two-dimensional tomographic image sequence.
In one embodiment, cone beam computed tomography (CBCT) may be used to acquire the two-dimensional tomographic image sequence of the teeth; it is understood that other computed tomography techniques may also be used. The two-dimensional tomographic images of the teeth are grayscale images.
Referring to FIG. 1, a schematic flowchart of a method 100 for segmenting computed tomography images of teeth in one embodiment of the present application is shown.
For ease of description, the method 100 is described below using a single jaw (i.e., the upper or the lower jaw) as an example.
In 101, a first three-dimensional digital model representing the crowns of a first jaw and a two-dimensional tomographic image sequence of the first jaw are acquired.
In one embodiment, the first three-dimensional digital model may be obtained by intraoral scanning, or by scanning a dental impression or a solid model (e.g., a plaster model of the teeth). The first three-dimensional digital model may be segmented, i.e., each crown in it is independent. The techniques for acquiring the first three-dimensional digital model are well known in the industry and are not described here.
Typically, computed tomography yields a two-dimensional tomographic image sequence of the upper and lower jaws together. To obtain a sequence for the upper or the lower jaw alone, a global image classification model may be used to divide the sequence into an upper-jaw part and a lower-jaw part. The global image classification model is based on a deep convolutional neural network, which may be a VGG, Inception, or ResNet network, among others. In one embodiment, the global image classification model may classify the two-dimensional tomographic images into five categories, i.e., assign one category to each image: mandible, teeth, occluded teeth, open-jaw teeth, and maxilla. An image containing only the mandible or the maxilla is classified as mandible or maxilla; an image containing both jawbone and teeth is classified as teeth; an image containing both maxillary and mandibular teeth is classified as occluded teeth; and an image containing molars of only a single jaw (e.g., the upper or the lower) is classified as open-jaw teeth (image sequences scanned with the patient's jaws open usually include such images).
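The classifier sketched below illustrates, under stated assumptions, the kind of five-category slice classifier described above. The choice of ResNet-18, the single input channel, and the helper name classify_slice are illustrative assumptions; the patent only specifies a deep convolutional network (e.g., VGG, Inception, or ResNet) that assigns one of the five categories to each slice, and such a model would of course need to be trained on labeled slices first.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

CLASSES = ["mandible", "teeth", "occluded_teeth", "open_jaw_teeth", "maxilla"]

model = resnet18(num_classes=len(CLASSES))
# CT slices are grayscale, so accept one input channel instead of three.
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.eval()  # inference mode; weights are assumed to have been trained

def classify_slice(slice_2d: torch.Tensor) -> str:
    """Assign one of the five categories to a 2D tomographic slice (H, W)."""
    with torch.no_grad():
        logits = model(slice_2d[None, None].float())  # shape (1, 5)
    return CLASSES[int(logits.argmax(dim=1))]
```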
Referring to FIG. 2A, a two-dimensional tomographic image classified as mandible in one example is shown.
Referring to FIG. 2B, a two-dimensional tomographic image classified as maxilla in one example is shown.
Referring to FIG. 2C, a two-dimensional tomographic image classified as teeth in one example is shown.
Referring to FIG. 2D, a two-dimensional tomographic image classified as occluded teeth in one example is shown.
Referring to FIG. 2E, a two-dimensional tomographic image classified as open-jaw teeth in one example is shown.
In 103, masks of the crown portions are extracted based on the two-dimensional tomographic image sequence of the first jaw.
In one embodiment, a global image segmentation model may be used to extract masks of the tooth portions based on the two-dimensional tomographic image sequence of the first jaw, yielding a global tooth mask image sequence of the first jaw. Each global tooth mask image contains the masks of all teeth in the corresponding two-dimensional tomographic image, and these tooth masks are treated as a whole, i.e., they are not separated into individual teeth; the mask images are binary images. The global tooth segmentation model may be a trained deep convolutional neural network, e.g., an FCN, UNet, or VNet network. In one embodiment, when segmenting a two-dimensional tomographic image, the global tooth segmentation model classifies every pixel of the image to extract the pixels belonging to teeth.
Referring to FIG. 3, a global tooth mask extracted by the global image segmentation model in one example is shown.
Next, a local tooth classification model may be used to classify the local tooth image corresponding to each tooth-mask connected component in the binary tooth mask image sequence of the first jaw, removing root masks and retaining crown masks, to obtain the crown mask image sequence of the first jaw. The local tooth classification model may be a trained deep convolutional neural network, e.g., a VGG, Inception, or ResNet network. In one embodiment, the local tooth classification model may classify the local tooth image corresponding to each tooth-mask connected component into three categories: crown, root, and background.
Since the masks of root portions are generally free of adhesion between neighboring teeth, the masks of the root portions of the teeth can be deleted from the global tooth mask image sequence of the first jaw according to the results of the local classification.
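As a rough illustration of this step, the sketch below crops a local image around each connected component of a global tooth mask and keeps only components classified as crown. It assumes OpenCV/NumPy plus a classify_local callable standing in for the trained local tooth classification model; none of these names come from the patent itself.

```python
import cv2
import numpy as np

def remove_root_masks(global_mask, ct_slice, classify_local, pad=16):
    """Keep only mask components whose local image is classified as crown."""
    n, labels = cv2.connectedComponents(global_mask.astype(np.uint8))
    kept = np.zeros_like(global_mask)
    for lbl in range(1, n):  # label 0 is the background
        ys, xs = np.nonzero(labels == lbl)
        y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad, ct_slice.shape[0])
        x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad, ct_slice.shape[1])
        if classify_local(ct_slice[y0:y1, x0:x1]) == "crown":
            kept[labels == lbl] = 1
    return kept
```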
In 105, the first three-dimensional digital model and the crown mask image sequence of the first jaw are projected onto a first plane and registered.
In one embodiment, the first plane may be parallel to the two-dimensional tomographic images of the first jaw. In yet another embodiment, the first plane may be parallel to the xy plane of the coordinate system of the first three-dimensional digital model.
In the following, the two-dimensional projection image of the first three-dimensional digital model on the first plane is denoted T1, and the two-dimensional projection image of the crown mask image sequence of the first jaw on the first plane is denoted I1.
In one embodiment, the following method may be used to register T1 and I1 by translation within the first plane and rotation about a first axis perpendicular to the first plane.
First, T1 is rotated by R degrees to obtain TR1; TR1 is used as the template image and I1 as the source image for template matching. The matching coefficient matrix is computed with the normalized sum-of-squared-differences method, and the pixel position with the smallest value in the matrix is the best matching center point. R may take discrete values within a set range, for example 11 rotation angles from -10 to 10 degrees in steps of 2 degrees.
Next, one matching coefficient value (i.e., the minimum of the corresponding coefficient matrix) is computed for each of the resulting matching coefficient matrices, and the smallest of these values is selected; the rotation angle of the corresponding matrix is the best matching rotation angle, denoted Rtm1. The offset of the template image is given by the coordinates min_loc_x1 and min_loc_y1 of the minimum matching coefficient value. Rtm1, min_loc_x1, and min_loc_y1 are the best matching transformation parameters, which completes the registration on the first plane.
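A minimal sketch of this rotation-plus-translation search, assuming OpenCV is used for the template matching, is shown below. TM_SQDIFF_NORMED corresponds to the normalized squared-difference matching described above (smaller is better); the function name and the default angle range are illustrative, and the source image I1 must be at least as large as the template.

```python
import cv2
import numpy as np

def register_projections(t1, i1, angles=range(-10, 11, 2)):
    """Return (Rtm1, min_loc_x1, min_loc_y1) aligning template t1 to source i1."""
    best = (np.inf, 0, (0, 0))  # (matching coefficient, angle, offset)
    h, w = t1.shape
    for angle in angles:
        rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        tr1 = cv2.warpAffine(t1, rot, (w, h))  # rotated template TR1
        coeffs = cv2.matchTemplate(i1, tr1, cv2.TM_SQDIFF_NORMED)
        min_val, _, min_loc, _ = cv2.minMaxLoc(coeffs)
        if min_val < best[0]:
            best = (min_val, angle, min_loc)
    _, r_tm1, (min_loc_x1, min_loc_y1) = best
    return r_tm1, min_loc_x1, min_loc_y1
```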
In one embodiment, in addition to registering the first three-dimensional digital model with the crown mask image sequence of the first jaw in the first plane, the two may also be projected onto the sagittal plane and their projections registered, so as to align them along the z axis (i.e., the tooth height direction). The sagittal registration result can be used to guide the subsequent classification of local tooth mask images by the local tooth classification model (i.e., into crown, neck, and root). The sagittal registration may be performed before or after the registration in the first plane.
In yet another embodiment, a second three-dimensional digital model may be generated based on the crown mask image sequence of the first jaw, and the first and the second three-dimensional digital models may then be registered in three-dimensional space. This registration result includes both the alignment in the first plane and the alignment along the z axis.
In yet another embodiment, a second crown mask image sequence may be generated based on the first three-dimensional digital model and registered in three-dimensional space with the crown mask image sequence of the first jaw. Similarly, this registration result includes both the alignment in the first plane and the alignment along the z axis.
Guided by this disclosure, it can be understood that, besides the registration methods above, any other applicable registration method may also be used; they are not enumerated here.
In 107, the position and range information of the erupted teeth is determined based on the registration result and the first three-dimensional digital model.
In one embodiment, the center point of each crown in the first three-dimensional digital model may be computed (for example, as the average of the coordinates of the crown's vertices), and these center points may then be projected onto the first plane to obtain a projection image C1.
C1 is rotated by Rtm1 and offset by min_loc_x1 along the x axis and min_loc_y1 along the y axis, according to the transformation parameters obtained by the registration, yielding a transformed image Ct1, which serves as the position information of the erupted teeth of the first jaw.
The projections of the crowns of the first three-dimensional digital model on the first plane are likewise rotated by Rtm1 and offset by min_loc_x1 along the x axis and min_loc_y1 along the y axis, yielding a transformed image that serves as the range information of the erupted teeth of the first jaw.
In 109, based on the tooth position and range information, the two-dimensional tomographic image sequence of the first jaw is re-segmented with a local tooth classification model and a local tooth segmentation model, obtaining the mask sequences of the erupted teeth.
In one embodiment, local tooth images are cropped from the two-dimensional tomographic images of the first jaw, centered on the center point of each erupted tooth, with a preset range (e.g., a square with 15 mm sides; it is understood that this preset range can be adjusted for different pixel spacings and physical sizes). The local tooth classification model then classifies these local tooth images as crown, root, or background, which yields the position of the tomographic image where root and crown adjoin, i.e., the position of the tomographic image containing the tooth neck.
In one embodiment, for each erupted tooth, segmentation may start from the tomographic image containing the tooth neck and proceed in the two directions of the crown and the root: the neck image is segmented first to extract the mask of the erupted tooth, and the next tomographic image is then segmented using the tooth's current mask as a reference. Guided by this disclosure, it can be understood that segmentation may also start from a tomographic image near the one containing the tooth neck (i.e., an image of the middle section of the tooth), since the shape of a tooth's mask changes little between adjacent tomographic images and this does not reduce segmentation accuracy, again proceeding toward the crown and the root. For each erupted tooth, one tomographic image is selected as the starting point of segmentation; this selected image is referred to below as the initial segmentation tomographic image of the erupted tooth.
In one embodiment, for each erupted tooth, based on the position and range information (i.e., the projection of the tooth's crown in the first three-dimensional digital model onto the first plane, transformed by the transformation parameters), the local tooth segmentation model may be used to segment its local tomographic image in the initial tomographic image, extracting its tooth mask in the current (i.e., initial) tomographic image. The local tooth segmentation model may be a trained deep convolutional neural network, e.g., an FCN, UNet, or VNet network, for extracting tooth masks from local tooth tomographic images. When segmenting a local tomographic image, the local tooth segmentation model classifies every pixel of the image to extract the pixels belonging to teeth.
In one embodiment, to segment the same erupted tooth in the next tomographic image, the local tooth segmentation model may be applied, based on the tooth's mask in the current tomographic image, to its local image in the next tomographic image (obtained via the local tooth classification model), extracting its tooth mask there; and so on, until the masks of the erupted tooth in all tomographic images have been extracted.
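The propagation just described can be summarized by the following sketch, in which segment_local stands in for the local tooth segmentation model (taking a slice and a range reference, returning a binary mask); the function is an assumption used only to show the control flow.

```python
def segment_tooth(slices, start_idx, start_range, segment_local):
    """Return {slice index: binary mask} for one tooth across the sequence."""
    masks = {start_idx: segment_local(slices[start_idx], start_range)}
    for step in (1, -1):  # toward the root, then toward the crown (or vice versa)
        prev = masks[start_idx]
        i = start_idx + step
        # Stop at the sequence ends or when the tooth disappears from the slice.
        while 0 <= i < len(slices) and prev.any():
            prev = segment_local(slices[i], prev)  # previous mask = range info
            if prev.any():
                masks[i] = prev
            i += step
    return masks
```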
Briefly, the precise segmentation of the tomographic image sequence of the first jaw involves two steps: first, based on the position information, the local tooth classification model classifies local tomographic images to find the tomographic image containing each erupted tooth's neck; second, based on the position and range information, the local tooth segmentation model segments the local tomographic images. Because the position and range information is used as a reference, the accuracy and efficiency of the local tooth classification model and the local tooth segmentation model are effectively improved.
In some cases, besides erupted teeth, the first jaw may also contain impacted teeth, which then need to be segmented as well. The local tomographic image segmentation yields mask sequences for the impacted teeth in addition to those of the erupted teeth, but the accuracy of these impacted-tooth results may be low, so the following operations can be performed to obtain more accurate segmentation results for the impacted teeth.
In 111, impacted teeth are detected using a tooth mask brightness curve.
In one embodiment, the abscissa of the brightness curve may be the tomographic image position, and the ordinate the total brightness of the tooth masks in each tomographic image, i.e., the total pixel count or total pixel value of the tooth masks.
Usually, along the z axis, mandibular impacted teeth lie below the necks of the mandibular erupted teeth, and maxillary impacted teeth lie above the necks of the maxillary erupted teeth. The global tooth mask sequence obtained by the earlier segmentation is called the first mask sequence. Since accurate local segmentation results for the erupted teeth have already been obtained, the root masks of all erupted teeth can be removed from the first mask sequence, keeping only the crown masks of the erupted teeth, to obtain the second mask sequence. The tooth mask brightness curve generated from the second mask sequence will contain one main peak and one small peak: the main peak corresponds to the tomographic index range of the erupted crowns, and the small peak corresponds to the tomographic index range of the impacted teeth. Impacted teeth can then be detected within the tomographic index range of the small peak. Because the root masks of the erupted teeth have been deleted from the second tooth mask sequence, only impacted-tooth masks remain within that range of the second tooth mask sequence.
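A minimal sketch of the brightness curve and the small-peak search, assuming SciPy's peak utilities and using the mask pixel count per slice as the brightness measure, is given below; the prominence and width parameters are illustrative choices, not values from the patent.

```python
import numpy as np
from scipy.signal import find_peaks, peak_widths

def impacted_range(second_mask_seq):
    """Return (start, end) slice indices around the small brightness peak."""
    curve = np.array([int(m.sum()) for m in second_mask_seq], dtype=float)
    peaks, _ = find_peaks(curve, prominence=0.05 * curve.max())
    if len(peaks) < 2:
        return None  # no small peak found: no impacted teeth detected
    order = np.argsort(curve[peaks])[::-1]
    small = peaks[order[1]]  # main peak = erupted crowns; next = impacted teeth
    _, _, left, right = peak_widths(curve, [small], rel_height=0.9)
    return int(left[0]), int(right[0])
```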
Then, for each impacted tooth, the local tooth segmentation model is used to pre-segment its local tomographic images, obtaining its mask sequence; the largest-area mask is found in this sequence, and the center point of that mask is used as the seed point for the subsequent local segmentation.
The seed points obtained at this point may include pseudo seed points. In one embodiment, they can be screened out, based on distance criteria, by the following method.
In one embodiment, a first distance threshold may be set to 3 mm. If the distance (in three-dimensional space) between the seed points of two impacted teeth is less than the first distance threshold, the two seed points most likely belong to the same impacted tooth; therefore, the seed point with the smaller mask area is removed.
In one embodiment, a second distance threshold may be set to 15 mm. If the distance between an impacted tooth's seed point and the nearest erupted tooth's seed point is greater than the second distance threshold, the seed point most likely belongs to the jawbone; therefore, that impacted-tooth seed point is removed.
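The two screening rules can be sketched as follows; the seed points are assumed to be 3D coordinates in millimeters and the data layout is illustrative, while the 3 mm and 15 mm thresholds are the ones given above.

```python
import numpy as np

def filter_seeds(imp_seeds, imp_areas, erupted_seeds, t_same=3.0, t_jaw=15.0):
    """Drop duplicate impacted-tooth seeds and seeds that lie in the jawbone."""
    kept = []
    for i, (seed, area) in enumerate(zip(imp_seeds, imp_areas)):
        # Rule 1: within t_same of a larger-area seed -> same tooth, drop it.
        dup = any(np.linalg.norm(seed - s) < t_same and a > area
                  for j, (s, a) in enumerate(zip(imp_seeds, imp_areas)) if j != i)
        # Rule 2: farther than t_jaw from every erupted seed -> jawbone, drop it.
        lonely = len(erupted_seeds) > 0 and all(
            np.linalg.norm(seed - e) > t_jaw for e in erupted_seeds)
        if not dup and not lonely:
            kept.append(seed)
    return kept
```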
Finally, for each remaining seed point, starting from the slice where it lies and proceeding toward the crown and the root, local segmentation is performed with the local tooth segmentation model, obtaining the complete mask sequence of the corresponding impacted tooth.
After the mask sequences of all teeth (erupted and impacted) have been obtained, they can be optimized using methods such as adjacent-layer constraints, Gaussian smoothing, and watershed segmentation.
The tooth-mask adjacent-layer constraint algorithm can remove over-segmented regions from the tooth segmentation results. It operates as follows:
let S be the slice index where the seed point lies; segment the local tooth image I_S of slice S to obtain the tooth mask image M_S of slice S;
use I_S as the template image and the slice S+1 image as the source image for template matching, obtaining the tooth's center point in slice S+1;
segment the local tooth image I_S+1 of slice S+1 to obtain the tooth mask image M_S+1 of slice S+1;
translate M_S so that the center point of M_S coincides with the center point of M_S+1;
apply a morphological dilation to M_S to obtain the dilated mask image M_SD; a structuring element of size 3 may be used;
AND M_SD with M_S+1 to obtain M_RS+1, the mask result constrained by the adjacent-layer crown mask;
use M_RS+1 as the mask image of slice S+1, and repeat the above operations for slice S+2 to compute its mask image, and so on, until all tooth masks have been processed.
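A minimal sketch of one iteration of this constraint, assuming OpenCV and uint8 (0/1) mask images, is shown below; the centroid-based shift is an illustrative reading of "make the center points coincide".

```python
import cv2
import numpy as np

def constrain_next_slice(mask_s, mask_s1):
    """Constrain slice S+1's mask by the shifted, dilated mask of slice S."""
    def centroid(m):
        ys, xs = np.nonzero(m)
        return xs.mean(), ys.mean()
    (cx_s, cy_s), (cx_1, cy_1) = centroid(mask_s), centroid(mask_s1)
    shift = np.float32([[1, 0, cx_1 - cx_s], [0, 1, cy_1 - cy_s]])
    shifted = cv2.warpAffine(mask_s, shift, (mask_s.shape[1], mask_s.shape[0]))
    # Dilate with a size-3 structuring element, then AND with slice S+1's mask.
    dilated = cv2.dilate(shifted, np.ones((3, 3), np.uint8))  # M_SD
    return cv2.bitwise_and(dilated, mask_s1)  # M_RS+1
```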
In one embodiment, Gaussian smoothing may be used to smooth the tooth mask image sequence processed by the adjacent-layer constraint algorithm.
The watershed segmentation algorithm can detect the boundaries of connected crowns and remove connected neighboring-tooth regions from the segmentation results. It operates as follows:
segment the local crown tomographic image of layer i with the local tooth segmentation model to obtain the crown mask image M_io;
determine whether the crown is connected to a neighboring tooth: if a crown connected component in M_io touches the image border, the crown is judged to be connected to a neighboring tooth, i.e., the crown mask image M_io contains a neighboring-tooth region;
generate the marker image Marker from the tooth mask image M_i-1 of layer i-1 and M_io according to the following Equation (1):
Marker = Erode((M_io − Dilate(M_i-1)) | M_io)        Equation (1)
where Erode and Dilate are the morphological erosion and dilation operations, respectively, and "|" is the OR operation;
using Marker as the marker image, apply the watershed segmentation algorithm to the local crown tomographic image of layer i to obtain the tooth boundary image B;
subtract the boundary image B from M_io to obtain a separated crown mask image, and extract from it the connected component containing the image center point to obtain the crown mask image M_i of layer i, where M_i is a crown mask that contains no neighboring-tooth region.
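The sketch below illustrates Equation (1) and the subsequent separation with scikit-image and SciPy. Treating the (negated) local crown image as the watershed landscape, labeling the marker's connected components as seeds, and assuming the target crown covers the image center are all assumptions; the structuring elements are left at the library defaults rather than the sizes a production implementation might use.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import binary_dilation, binary_erosion
from skimage.segmentation import watershed

def separate_crown(local_img, m_io, m_prev):
    """Remove connected neighboring-tooth regions from crown mask m_io (layer i)."""
    m_io, m_prev = m_io.astype(bool), m_prev.astype(bool)
    # Equation (1): Marker = Erode((M_io - Dilate(M_{i-1})) | M_io)
    marker = binary_erosion((m_io & ~binary_dilation(m_prev)) | m_io)
    seeds, _ = ndimage.label(marker)
    ws = watershed(-local_img.astype(np.int32), seeds, mask=m_io,
                   watershed_line=True)
    boundary = (ws == 0) & m_io            # tooth boundary image B
    separated, _ = ndimage.label(m_io & ~boundary)
    h, w = m_io.shape
    return separated == separated[h // 2, w // 2]  # component at image center
```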
After the mask sequences of all teeth of the first jaw have been obtained, an overall three-dimensional digital model of these teeth (including crowns and roots) can be generated from them. For some dental treatments, the overall three-dimensional digital model of the teeth is very useful, because it captures not only the relationships between the crowns but also the relationships between the roots.
Although a number of aspects and embodiments of the present application are disclosed herein, other aspects and embodiments will be apparent to those skilled in the art in light of this disclosure. The aspects and embodiments disclosed herein are for illustrative purposes only and are not intended to be limiting; the scope and spirit of the present application are defined solely by the appended claims.
Likewise, the various diagrams may illustrate exemplary architectures or other configurations of the disclosed methods and systems, which help in understanding the features and functions that can be included in them. What is claimed is not limited to the illustrated exemplary architectures or configurations; the desired features can be implemented with various alternative architectures and configurations. In addition, with respect to the flowcharts, functional descriptions, and method claims, the order of blocks presented herein should not be taken to limit the various embodiments to implementations that perform the described functions in the same order, unless the context clearly indicates otherwise.
Unless expressly stated otherwise, the terms and phrases used herein, and variants thereof, should be construed as open-ended rather than restrictive. In some instances, the presence of expansive words and phrases such as "one or more", "at least", or "but not limited to" should not be read to mean that the narrower case is intended or required in instances where such expansive terms may be absent.

Claims (16)

  1. A computer-implemented method for segmenting computed tomography images of teeth, comprising:
    acquiring a first three-dimensional digital model representing the crowns of the erupted teeth of a first jaw, and a two-dimensional tomographic image sequence of the first jaw;
    using a local image classification model, based on the two-dimensional tomographic image sequence of the first jaw, selecting for each erupted tooth an initial two-dimensional tomographic image at which segmentation starts, from the two-dimensional tomographic image sequence of the first jaw, the local image classification model being a trained deep neural network for classifying local two-dimensional tomographic images into categories including crown and root;
    obtaining position and range information for each erupted tooth based on the first three-dimensional digital model; and
    for each erupted tooth, using the position and range information and a local image segmentation model, segmenting the local images of the erupted tooth from the corresponding initial two-dimensional tomographic image toward the crown and the root, to obtain its sequence of binary mask images.
  2. The computer-implemented method for segmenting computed tomography images of teeth according to claim 1, wherein the first three-dimensional digital model is obtained by one of the following methods: intraoral scanning, or scanning a dental impression or a solid model.
  3. The computer-implemented method for segmenting computed tomography images of teeth according to claim 1, wherein the two-dimensional tomographic images of the first jaw are obtained by cone beam computed tomography.
  4. The computer-implemented method for segmenting computed tomography images of teeth according to claim 1, further comprising:
    for each erupted tooth, using the position and range information and the local image segmentation model, segmenting the local image of the erupted tooth in its initial two-dimensional tomographic image, to obtain the binary mask image of the erupted tooth corresponding to the initial two-dimensional tomographic image; and
    for each erupted tooth, taking the binary mask image corresponding to the previous two-dimensional tomographic image as range information, and using this range information and the local image segmentation model to segment the local image of the erupted tooth in the next two-dimensional tomographic image.
  5. The computer-implemented method for segmenting computed tomography images of teeth according to claim 1, further comprising:
    using a global image segmentation model and the local image classification model, based on the two-dimensional tomographic image sequence of the first jaw, extracting a mask image sequence of the crown portions of the erupted teeth of the first jaw, wherein the global image segmentation model is a trained deep neural network for segmenting two-dimensional tomographic images to extract global tooth mask images; and
    registering the first three-dimensional digital model with the mask image sequence of the crown portions of the erupted teeth of the first jaw, to obtain the position and range information.
  6. The computer-implemented method for segmenting computed tomography images of teeth according to claim 5, wherein the registration comprises projecting the first three-dimensional digital model and the mask image sequence of the crown portions of the erupted teeth of the first jaw onto a first plane and registering the two projections.
  7. The computer-implemented method for segmenting computed tomography images of teeth according to claim 6, wherein the first plane is parallel to the two-dimensional tomographic image sequence of the first jaw.
  8. The computer-implemented method for segmenting computed tomography images of teeth according to claim 7, wherein the registration further comprises projecting the first three-dimensional digital model and the mask image sequence of the crown portions of the erupted teeth of the first jaw onto the sagittal plane and registering the two projections, the result of which is used to guide the local image segmentation of each erupted tooth.
  9. The computer-implemented method for segmenting computed tomography images of teeth according to claim 5, wherein the registration registers, in three-dimensional space, the first three-dimensional digital model with the mask image sequence of the crown portions of the erupted teeth of the first jaw, to obtain the position and range information and to guide the local image segmentation of each erupted tooth.
  10. The computer-implemented method for segmenting computed tomography images of teeth according to claim 1, wherein, for each erupted tooth, the local image of the erupted tooth in the initial two-dimensional tomographic image lies in the middle section of the tooth.
  11. The computer-implemented method for segmenting computed tomography images of teeth according to claim 10, wherein, for each erupted tooth, the local image of the erupted tooth in the initial two-dimensional tomographic image lies at the tooth neck.
  12. The computer-implemented method for segmenting computed tomography images of teeth according to claim 1, further comprising:
    using a global image segmentation model, based on the two-dimensional tomographic image sequence of the first jaw, extracting a global tooth mask image sequence of the first jaw, wherein the global image segmentation model is a trained deep neural network for segmenting two-dimensional tomographic images to extract global tooth mask images;
    using the local image classification model, deleting the masks of the root portions of all erupted teeth from the global tooth mask image sequence of the first jaw, to obtain a second mask image sequence;
    generating a mask brightness curve based on the second mask image sequence, and determining, based on the brightness curve, the range of the two-dimensional tomographic image sequence of the first jaw in which the impacted teeth are located;
    for each impacted tooth, determining an initial two-dimensional tomographic image within the range where the impacted teeth are located; and
    for each impacted tooth, using the local image segmentation model, finally segmenting the local images of the impacted tooth from the corresponding initial two-dimensional tomographic image toward the crown and the root, to obtain its sequence of binary mask images.
  13. The computer-implemented method for segmenting computed tomography images of teeth according to claim 12, further comprising: for each impacted tooth, using the local image segmentation model, pre-segmenting its local images within the range of two-dimensional tomographic images, and determining its initial two-dimensional tomographic image based on the mask areas obtained by the pre-segmentation.
  14. The computer-implemented method for segmenting computed tomography images of teeth according to claim 13, wherein, for each impacted tooth, the two-dimensional tomographic image corresponding to the largest-area mask obtained by the pre-segmentation is taken as the initial two-dimensional tomographic image.
  15. The computer-implemented method for segmenting computed tomography images of teeth according to claim 14, further comprising: if the distance between the center points of the largest-area masks of two impacted teeth is less than a first threshold, not performing the final segmentation with the local image segmentation model on the impacted tooth corresponding to the smaller mask.
  16. The computer-implemented method for segmenting computed tomography images of teeth according to claim 14, further comprising: if the distance between the center point of the largest-area mask of an impacted tooth and the center point of the mask of the nearest erupted tooth in that tooth's initial segmentation two-dimensional tomographic image is greater than a second distance threshold, not performing the final segmentation with the local image segmentation model on that impacted tooth.
PCT/CN2022/075513 2021-02-18 2022-02-08 Method for segmenting computed tomography images of teeth WO2022174747A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP22755520.8A EP4296944A1 (en) 2021-02-18 2022-02-08 Method for segmenting computed tomography image of teeth
AU2022222097A AU2022222097A1 (en) 2021-02-18 2022-02-08 Method for segmenting computed tomography image of teeth
US18/546,872 US20240127445A1 (en) 2021-02-18 2022-02-08 Method of segmenting computed tomography images of teeth

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110190754.8 2021-02-18
CN202110190754.8A CN114972360A (zh) 2021-02-18 Method for segmenting computed tomography images of teeth

Publications (1)

Publication Number Publication Date
WO2022174747A1 (zh)

Family

ID=82931199

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/075513 WO2022174747A1 (zh) 2021-02-18 2022-02-08 牙齿的计算机断层扫描图像的分割方法

Country Status (5)

Country Link
US (1) US20240127445A1 (zh)
EP (1) EP4296944A1 (zh)
CN (1) CN114972360A (zh)
AU (1) AU2022222097A1 (zh)
WO (1) WO2022174747A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117437250B (zh) * 2023-12-21 2024-04-02 天津医科大学口腔医院 Three-dimensional dental jaw image segmentation method and system based on deep learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106806030A (zh) * 2015-11-30 2017-06-09 北京大学口腔医学院 Crown-and-root three-dimensional model fusion method
CN108932716A (zh) * 2017-05-26 2018-12-04 无锡时代天使医疗器械科技有限公司 Image segmentation method for tooth images
US20200175678A1 (en) * 2018-11-28 2020-06-04 Orca Dental AI Ltd. Dental image segmentation and registration with machine learning
EP3673864A1 (en) * 2018-12-28 2020-07-01 Trophy Tooth segmentation using tooth registration


Also Published As

Publication number Publication date
CN114972360A (zh) 2022-08-30
US20240127445A1 (en) 2024-04-18
AU2022222097A1 (en) 2023-10-05
EP4296944A1 (en) 2023-12-27

Similar Documents

Publication Publication Date Title
US20200402647A1 (en) Dental image processing protocol for dental aligners
KR102273438B1 (ko) Apparatus and method for automatic registration of intraoral scan data and computed tomography images using crown segmentation of intraoral scan data
CN110782974A (zh) Method for predicting anatomical landmarks and device for predicting anatomical landmarks using the method
US20060127854A1 (en) Image based dentition record digitization
CN112120810A (zh) Method for generating three-dimensional data for an invisible orthodontic aligner
GB2440267A (en) Combining generic and patient tooth models to produce complete tooth model
JP2010524529A (ja) Computer-assisted creation of a custom tooth setup using facial analysis
KR102373500B1 (ko) Method and apparatus for automatically detecting feature points in three-dimensional medical image data using deep learning
KR102302587B1 (ko) Method for judging the registration accuracy of a three-dimensional dental CT image and a three-dimensional digital impression model, and computer-readable recording medium storing a program for executing the method on a computer
CN113052902B (zh) Tooth treatment monitoring method
US20230206451A1 (en) Method for automatic segmentation of a dental arch
WO2022174747A1 (zh) Method for segmenting computed tomography images of teeth
CN101950430A (zh) Three-dimensional tooth reconstruction method based on panoramic tomographic images
WO2024046400A1 (zh) Tooth model generation method and apparatus, electronic device, and storage medium
WO2021147333A1 (zh) Method for generating an image of an orthodontic treatment outcome using an artificial neural network
WO2020181973A1 (zh) Method and computer system for determining the occlusal relationship of upper and lower teeth
KR102215068B1 (ko) Apparatus and method for image registration for implant diagnosis
CN112807108B (zh) Method for detecting the tooth alignment state during orthodontic treatment
KR102302249B1 (ko) Apparatus and method for automatic three-dimensional cephalometry using image processing and a CNN
US20220358740A1 (en) System and Method for Alignment of Volumetric and Surface Scan Images
CN113139908B (zh) Three-dimensional dentition segmentation and labeling method
CN107564094A (zh) Automatic recognition algorithm for tooth model feature points based on local coordinates
WO2024088359A1 (zh) Method for detecting morphological differences in three-dimensional digital tooth models
EP4307229A1 (en) Method and system for tooth pose estimation
KR102502588B1 (ko) Occlusion alignment method and occlusion alignment apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22755520

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18546872

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 2022222097

Country of ref document: AU

Ref document number: AU2022222097

Country of ref document: AU

Ref document number: 2022755520

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2022222097

Country of ref document: AU

Date of ref document: 20220208

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2022755520

Country of ref document: EP

Effective date: 20230918