US20200402647A1 - Dental image processing protocol for dental aligners - Google Patents

Dental image processing protocol for dental aligners

Info

Publication number
US20200402647A1
US20200402647A1 (application US17/010,079)
Authority
US
United States
Prior art keywords
training
medical images
training set
images
subset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US17/010,079
Inventor
Marina Evgenievna DOMRACHEVA
Fedor Alexandrovich Aptekarev
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
3d Smile Usa Inc
Original Assignee
Dommar LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from EA201700561 (EA043155B1)
Application filed by Dommar LLC filed Critical Dommar LLC
Priority to US17/010,079
Assigned to 3D SMILE USA, INC. reassignment 3D SMILE USA, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: Dommar LLC
Publication of US20200402647A1

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • A61B6/51
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C7/00 Orthodontics, i.e. obtaining or maintaining the desired position of teeth, e.g. by straightening, evening, regulating, separating, or by correcting malocclusions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/243 Classification techniques relating to the number of classes
    • G06F18/2431 Multiple classes
    • G06K9/6256
    • G06K9/628
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/40 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/50 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B29 WORKING OF PLASTICS; WORKING OF SUBSTANCES IN A PLASTIC STATE IN GENERAL
    • B29C SHAPING OR JOINING OF PLASTICS; SHAPING OF MATERIAL IN A PLASTIC STATE, NOT OTHERWISE PROVIDED FOR; AFTER-TREATMENT OF THE SHAPED PRODUCTS, e.g. REPAIRING
    • B29C64/00 Additive manufacturing, i.e. manufacturing of three-dimensional [3D] objects by additive deposition, additive agglomeration or additive layering, e.g. by 3D printing, stereolithography or selective laser sintering
    • G06K2209/05
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/08 Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/41 Medical
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2021 Shape modification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 Recognition of patterns in medical or anatomical images

Definitions

  • the present disclosure relates to a dental image processing protocol for the design of dental aligners.
  • Orthodontics generally, and dental alignment, in particular, is a well-developed area of dental care.
  • traditional braces or, more recently, clear aligners offer a strategy for improved dental function and aesthetics through gradual teeth movements. These gradual teeth movements slowly move a crown of a tooth until a desired final position is reached.
  • These approaches fail to appropriately consider the impact of corresponding root movements, in the context of surrounding soft and hard tissues, on the final position of the crown, focusing instead on an aesthetically and functionally ideal crown position.
  • An approach for determining crown position that adequately incorporates the impact of root movement and the root environment has yet to be developed.
  • the present disclosure relates to a method, apparatus, and computer-readable medium comprising a processing circuitry configured to classify pixels of one or more medical images into classes corresponding to biological structure types, segment the classified pixels of the one or more medical images into biological structures, render a three-dimensional model of the biological structures based on the segmented classified pixels, determine one or more metrics, based upon the three-dimensional model, describing a bone of the biological structures, acquire a final position of each of the one or more teeth of the dental arch based upon the three-dimensional model, and generate the intermediate position of each of the one or more teeth of the patient based upon the one or more metrics and the acquired final position.
  • the present disclosure further relates to a method of pixel-based classification of medical images, comprising training a neural network to perform the pixel-based classification of the medical images, the training comprising performing a first training on a classifier using a first training set of medical images, pixels of the first training set of medical images being manually labeled, applying the classifier after the first training to a second training set of medical images to classify pixels of the second training set of medical images, identifying and correcting incorrectly classified pixels of the second training set of medical images, and performing a second training on the classifier after the first training using the manually labeled first training set of medical images and the correctly classified second training set of medical images; and applying the classifier after the second training to the medical images to perform the pixel-based classification, the pixel-based classification of the medical images including assigning pixels of each medical image to a biological structure type.
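  • By way of illustration only, the disclosed flow can be sketched in a few lines of Python; every helper named below (classify_pixels, segment_structures, reconstruct_3d, distance_to_root, bone_density, intermediate_positions) is a hypothetical stand-in for a step elaborated later in this disclosure, not an actual implementation.

```python
def plan_tooth_movements(slices, final_position):
    """Hypothetical end-to-end sketch of the disclosed pipeline.

    slices: iterable of 2D medical images (e.g., CBCT slices).
    final_position: prescribed final position of each tooth.
    """
    labels = [classify_pixels(s) for s in slices]      # per-pixel tissue classes
    structures = segment_structures(labels)            # group pixels into biological structures
    model = reconstruct_3d(structures)                 # e.g., marching cubes surface
    metrics = {
        "distance": distance_to_root(model),           # alveolar-process thickness
        "density": bone_density(model),                # local periodontal bone density
    }
    # intermediate positions constrained by the bone metrics
    return intermediate_positions(model, metrics, final_position)
```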
  • FIG. 1 is an illustration of dental aligners, according to an embodiment of the present disclosure
  • FIG. 2 is an illustration of a dental image of a tooth, according to an embodiment of the present disclosure
  • FIG. 3 is a flowchart of a dental image processing protocol, according to an embodiment of the present disclosure
  • FIG. 4 is an illustration of a complex three-dimensional model generated from a plurality of processed dental images and annotated with a surface heat map, according to an embodiment of the present disclosure
  • FIG. 5 is a flowchart of an aspect of a training protocol of a dental image processing protocol, according to an embodiment of the present disclosure
  • FIG. 6A is an illustration of an aspect of a manually-segmented dental image of a tooth, according to an embodiment of the present disclosure
  • FIG. 6B is an illustration of an aspect of a manually-segmented dental image of a tooth, according to an embodiment of the present disclosure
  • FIG. 7A is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure.
  • FIG. 7B is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure.
  • FIG. 7C is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure.
  • FIG. 7D is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure.
  • FIG. 7E is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure.
  • FIG. 7F is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure.
  • FIG. 7G is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure.
  • FIG. 7H is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure.
  • FIG. 7I is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure.
  • FIG. 7J is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure.
  • FIG. 7K is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure.
  • FIG. 7L is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure.
  • FIG. 7M is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure.
  • FIG. 7N is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure.
  • FIG. 7O is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure.
  • FIG. 7P is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure.
  • FIG. 7Q is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure.
  • FIG. 7R is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure.
  • FIG. 7S is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure.
  • FIG. 7T is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure.
  • FIG. 7U is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure.
  • FIG. 8A is an illustration of an aspect of a snakes segmentation of a dental image processing protocol, according to an embodiment of the present disclosure
  • FIG. 8B is an illustration of an aspect of a snakes segmentation of a dental image processing protocol, according to an embodiment of the present disclosure
  • FIG. 8C is an illustration of an aspect of a snakes segmentation of a dental image processing protocol, according to an embodiment of the present disclosure
  • FIG. 8D is an illustration of an aspect of a snakes segmentation of a dental image processing protocol, according to an embodiment of the present disclosure
  • FIG. 9 is a flowchart of an aspect of a training data generation protocol, according to an embodiment of the present disclosure.
  • FIG. 10A is an illustration of a segmentation of teeth of a dental image, according to an embodiment of the present disclosure.
  • FIG. 10B is an illustration of a segmentation of teeth of a dental image, according to an embodiment of the present disclosure.
  • FIG. 10C is an illustration of a segmentation of teeth of a dental image, according to an embodiment of the present disclosure.
  • FIG. 11A is an illustration of a source image of a segmentation of bone of a dental image, according to an embodiment of the present disclosure
  • FIG. 11B is an illustration of a segmentation of bone of a dental image, according to an embodiment of the present disclosure.
  • FIG. 11C is an illustration of a segmentation of bone of a dental image, according to an embodiment of the present disclosure.
  • FIG. 12A is a flowchart of a determination of a distance metric of a three-dimensional model, according to an embodiment of the present disclosure
  • FIG. 12B is a flowchart of a determination of a density metric of a three-dimensional model, according to an embodiment of the present disclosure
  • FIG. 13A is an illustration of a three-dimensional model of dentition, according to an embodiment of the present disclosure.
  • FIG. 13B is an illustration of a three-dimensional model of dentition, according to an embodiment of the present disclosure.
  • FIG. 14A is an illustration of a three-dimensional model of an initial tooth position, according to an embodiment of the present disclosure.
  • FIG. 14B is an illustration of a three-dimensional model of an intermediary tooth position, according to an embodiment of the present disclosure.
  • FIG. 14C is an illustration of a three-dimensional model of a final tooth position, according to an embodiment of the present disclosure.
  • FIG. 15 is a schematic of exemplary hardware for implementation of a dental image processing protocol, according to an embodiment of the present disclosure.
  • conventional treatment planning develops teeth movement plans based upon initial and ideal final crown positions.
  • an intermediary tooth position may result in collision of the roots of adjacent teeth.
  • a thickness of the alveolar process, the bone tissue surrounding a tooth root, can limit the ability of the tooth root to move to an intermediary position.
  • a realizable movement of a tooth may be less than an ideal movement of the tooth, resulting in impaired treatment and sub-optimal teeth function and aesthetics.
  • varying densities of periodontal bone can impact potential root movements and realignment thereof.
  • the present disclosure describes an orthodontic treatment approach that considers an evaluation of the condition of the tissues surrounding the tooth and the alveolar process, in particular. Moreover, the evaluation of the condition of the tissues surrounding the tooth is patient-specific, reflecting the unique density and thickness of an individual patient's periodontal bone.
  • FIG. 1 is an illustration of dental aligners, according to an embodiment of the present disclosure. Following a determination of an initial tooth position and a realizable final tooth position, intermediary tooth movements can be determined and dental aligners 100 can be fabricated, accordingly, to slowly move and realign a patient's teeth. Oftentimes, however, as described above, these determinations are based only upon an ideal final tooth position and the position of the crown of the tooth relative to adjacent teeth, which can lead to root, periodontal ligament, or periodontal bone damage upon movement. In order to incorporate information related to the environment of the tooth and surrounding tissues during dental aligner 100 fabrication, an approach for identifying tissue-types, generating three-dimensional (3D) models, and determining periodontal tissue characteristics, thereof, is required.
  • FIG. 2 is an illustration of a dental image of a tooth, according to an embodiment of the present disclosure.
  • a dental image of a tooth may be but is not limited to an image acquired via intraoral optical imaging, impressions, dental models, ultrasound, or radiography.
  • a plurality of images, or slices may be acquired via radiography and reconstructed to render a 3D model.
  • a tooth 204 comprises a crown 206 and one or more roots 205 .
  • the one or more roots 205 are resident within an alveolar process, a thickened ridge of bone containing dental alveoli, or tooth sockets.
  • the alveolar process is comprised of cortical bone 215 , a compact, relatively dense bone, and cancellous bone 210 , a spongy, relatively porous bone.
  • cortical bone 215 and cancellous bone 210 provide a strong foundation from which the one or more roots 205 of the tooth 204 are anchored.
  • cortical bone 215 and cancellous bone 210 as periodontal tissues, contribute to the determination of possible movements of a tooth.
  • FIG. 3 is a flowchart of a dental image processing protocol implemented via a dental image processing device comprising processing circuitry.
  • the dental image processing protocol described herein can be appreciated in context of a full dental arch or an individual tooth.
  • digital representations of an initial position of a patient's teeth must be acquired.
  • a variety of techniques including but not limited to impressions, scanning of impressions or dental models, intraoral scanners for digital impressions, intraoral X-ray, ultrasound, and computed tomography can be used individually or in combination to acquire digital representations of the initial position of the patient's teeth.
  • an intraoral scanner S 341 may be employed to acquire topographical characteristics of crowns of the teeth.
  • the intraoral scanner may employ a modality selected from a group including but not limited to lasers, infrared light, and structured light. So that tooth movements can be determined in the context of crowns and roots, a radiographic imaging modality may be employed in order to acquire spatial information relating to the roots and periodontal tissues, including soft tissues and hard tissues (e.g. alveolar process).
  • the radiographic imaging modality may be selected from the group including but not limited to projection radiography, computed tomography (CT), dual energy X-ray absorptiometry, fluoroscopy, and contrast radiography.
  • the radiographic imaging modality may be cone beam computed tomography (CBCT) S 350 .
  • Radiographic images may comprise multi-planar radiographic images including but not limited to sagittal, transverse, and coronal. It should be appreciated that, apart from radiographic techniques, a variety of imaging modalities including but not limited to ultrasound may be used for acquisition of images describing spatial information of the roots and periodontal tissues.
  • the present disclosure employs a machine learning approach, a platform for rapid evaluation of the plurality of dental images, to classify the various biological structures of each dental image of a patient S 351 .
  • the machine learning approach may be a fully convolutional neural network (FCN).
  • an FCN classifier evaluates and predicts the classification of each pixel of an unknown image.
  • Per-pixel classification allows the resulting predictions to be segmented into multiple tissue classes S 352 , combining adjacent pixels of similar classification and density and defining the shape of each type of tissue, or biological structure, as building blocks for a 3D model.
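  • As one concrete, non-limiting way to realize this combining step, per-pixel class predictions can be grouped into connected regions; the sketch below uses scipy's connected-component labeling, with class id 1 assumed to denote ‘teeth’.

```python
import numpy as np
from scipy import ndimage

def segment_class(pred, class_id):
    """Group adjacent pixels sharing one predicted tissue class into
    connected regions (a simple stand-in for the segmentation step)."""
    mask = pred == class_id
    regions, n_regions = ndimage.label(mask)  # 4-connectivity by default in 2D
    return regions, n_regions

# pred: 2D array of per-pixel class predictions from the FCN classifier
pred = np.random.randint(0, 5, size=(256, 256))
tooth_regions, n = segment_class(pred, class_id=1)
```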
  • each of the plurality of classified and segmented dental images of a patient is combined and reconstructed, using a surface reconstruction technique such as, for instance, marching cubes, to form a 3D model of dentition, including a patient's dental arches and surrounding tissues S 353 .
  • the surface reconstruction technique may be selected from a group including but not limited to marching tetrahedrons and marching cubes, a sequential surface rendering model wherein a polygonal mesh is fitted to a surface interacting with pixels from adjacent slices.
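  • A minimal sketch of the marching-cubes reconstruction, assuming scikit-image is available and a stacked binary ‘tooth’ volume; the toy cube below stands in for real segmented slices.

```python
import numpy as np
from skimage import measure

# Toy volume standing in for stacked, segmented CBCT slices
volume = np.zeros((64, 64, 64), dtype=float)
volume[20:44, 20:44, 20:44] = 1.0

# Fit a polygonal (triangle) mesh to the iso-surface between classes
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(verts.shape, faces.shape)  # (N, 3) vertex coords and (M, 3) triangles
```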
  • the reconstructed surfaces can then be integrated into the digital impressions acquired via intraoral 3D imaging to create a simple 3D model of the mouth of a patient S 343 , including crowns, roots and periodontal tissues.
  • this data integration may be accomplished via point-based alignment of two surface models, an interactive method of registration of polygonal meshes. First, at least three corresponding points on each of the two surface models are selected. Next, a transformation matrix is computed and applied via translation and rotation or quaternion. If the resulting overlap between the two surface models is greater than a pre-determined threshold, a new transformation matrix must be determined and applied such that the resulting overlap between the two surface models is less than the pre-determined threshold.
  • the at least three corresponding points on each of the two surface models are selected manually.
  • corresponding points on each of the two surface models may be selected automatically via software, wherein the corresponding points are features selected from a group including but not limited to cusps, grooves, offsets and pits of molars, or central points of cutting edges on incisors.
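  • The transformation matrix described above can be computed in closed form with the Kabsch (SVD) algorithm; a minimal sketch, assuming the at least three corresponding points are supplied as (N, 3) numpy arrays.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch algorithm); src, dst are (N, 3) corresponding points, N >= 3."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Apply R, t to the full scan mesh, then compare the residual deviation
# against the pre-determined threshold and re-select points if exceeded.
```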
  • a density measurement S 354 and a distance measurement S 355 of the periodontal space may be computed from the surface reconstruction, or simple 3D model, of the classified and segmented predictions.
  • the density measurement comprises computing, from each point on a mesh describing the surface of a root, a metric of the density of the surrounding tissue. This metric may be determined on the basis of mean voxel intensities surrounding a vertex-point coordinate of the segmentation, reflecting the spatial qualities of bone and the ability of a tooth to move, therein.
  • the distance measurement comprises computing, for each point on the mesh describing the surface of a bone, such as the buccal surface of the alveolar bone, a distance to a nearest point on the mesh describing the root. This distance, therefore, reflects a volume of the alveolar process wherein a tooth may move.
  • the density measurement and distance measurement may be mapped to the simple 3D model, creating a complex 3D model S 356 .
  • varying distances are denoted via heat map, wherein regions of minimal thickness and regions of maximal thickness are represented with varying colors.
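  • A vertex-level sketch of the distance measurement and its heat-map rendering, assuming the bone and root surfaces are available as (N, 3) vertex arrays (e.g., from marching cubes); nearest-vertex distance is used here as a proxy for true point-to-surface distance.

```python
import numpy as np
from scipy.spatial import cKDTree
from matplotlib import cm

def bone_to_root_distance(bone_verts, root_verts):
    """For each vertex of the bone mesh, distance to the nearest vertex
    of the root mesh, approximating local alveolar-process thickness."""
    tree = cKDTree(root_verts)
    dist, _ = tree.query(bone_verts)
    return dist

def heat_map_colors(dist):
    """Map distances to RGBA vertex colors for overlay on the 3D model."""
    norm = (dist - dist.min()) / (np.ptp(dist) + 1e-9)
    return cm.jet(norm)  # contrasting hues for thin vs. thick regions
```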
  • FIG. 4 is an illustration of a complex 3D model generated from a plurality of processed dental images, annotated with a surface heat map, according to an embodiment of the present disclosure.
  • surface mesh data may be integrated to create a 3D model of an initial position of the dental arches of a patient.
  • a heat map, overlaid on the 3D model, indicates local thicknesses of the alveolar process, the periodontal environment therein varying across individual teeth of the dental arches.
  • a canine 408 and an adjacent premolar 407 have disparate local periodontal environments.
  • the canine 408 may be positioned closer to a buccal surface of an alveolar process 409 , as indicated by a darker shade, intense red, while the premolar 407 may be positioned posteriorly with respect to the buccal surface of the alveolar process 409 , proximate to a lingual surface of the alveolar process 409 , as indicated by light shades of the heat map.
  • This heat map feature allows a prescribing dental professional to visualize possible and impossible tooth movements and select appropriate intermediary movements within skeletal constraints.
  • FIG. 5 is a flowchart of an aspect of a training protocol of a classification approach of a dental image processing protocol, according to an embodiment of the present disclosure.
  • the training approach prepares an FCN classifier to be applied to the binary classification of ‘bone’ or the binary classification of ‘teeth’.
  • the training approach provides annotated training data, manually or automatically generated, directed to the above-described classes.
  • the FCN classifier is meant to predict, for each pixel of a slice, class 1 if a tooth is present and class 0 if a tooth is not present.
  • the annotated training data may be based, in part, on combinations of pixel intensities that are considered visual features.
  • a manual segmentation tool may be applied in order to label the ‘tooth’ pixels S 530 .
  • the manual segmentation tool can be a ‘brush tool’.
  • the first dataset may then be used to train a pretrained convolutional neural network (CNN) classifier to perform per-pixel predictions of ‘tooth’.
  • the pretrained CNN classifier can be based on AlexNet. In another embodiment, the pretrained CNN classifier can be further tuned according to a plurality of labeled pixel patches S 532 . To this end, the pixel-wise manually segmented dental images described above are converted to a plurality of pixel patches S 531 , wherein each of the plurality of generated pixel patches may comprise 120 pixels surrounding a central pixel. Compared with pixel-wise training, patch-wise training decreases training time without unnecessarily sacrificing resulting classification accuracy.
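  • A sketch of the patch conversion, assuming an 11 x 11 window (120 pixels surrounding a central pixel) and binary ‘tooth’ labels; the [1 0]/[0 1] targets match the one-hot convention used with FIG. 7A through FIG. 7U.

```python
import numpy as np

def extract_patches(image, labels, patch=11):
    """Cut a patch x patch window around every interior pixel; each
    patch inherits the label of its central pixel."""
    r = patch // 2
    patches, targets = [], []
    for y in range(r, image.shape[0] - r):
        for x in range(r, image.shape[1] - r):
            patches.append(image[y - r:y + r + 1, x - r:x + r + 1])
            targets.append([1, 0] if labels[y, x] else [0, 1])  # tooth / not tooth
    return np.asarray(patches), np.asarray(targets)
```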
  • the retrained CNN classifier may be applied to a second dataset comprising a plurality of CBCT dental images.
  • the retrained CNN classifier predictions can be converted to segmentations S 534 .
  • the segmented predictions may then be downsampled to obtain prepared images for snakes segmentation S 535 , a framework in computer vision for delineating an object outline from a 2D image.
  • pixels of the dental images may first be thresholded according to Hounsfield units (HU), wherein HU values reflect the radiodensity of a biological structure S 536 .
  • Table 1 describes exemplary HU thresholding values, according to an exemplary embodiment of the present disclosure. Every second pixel may be taken into a sample dataset to generate a Gaussian model estimator for given biological structure types, or classes S 537 , thus providing a speed-map for snakes segmentation.
  • the above-described snakes segmentation may be performed as a final segmentation of the modified output of the retrained CNN classifier.
  • the resulting plurality of CBCT dental images segmented via snakes segmentation form an initial FCN training database.
  • the initial FCN training database can then be used to retrain a pretrained Unet-FCN S 538 , referred to herein as “retrained FCN”, and an FCN classifier therein.
  • false pixels adjacent to two true pixels may have added weight.
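  • One plausible reading of this weighting rule is sketched below: background (‘false’) pixels with at least two foreground (‘true’) 4-neighbors lie on class boundaries and receive a boosted per-pixel loss weight; the boost factor is an assumed parameter.

```python
import numpy as np
from scipy import ndimage

def loss_weights(truth, boost=2.0):
    """Per-pixel loss weights for training: boost 'false' pixels that
    are adjacent to two or more 'true' pixels."""
    kernel = np.array([[0, 1, 0],
                       [1, 0, 1],
                       [0, 1, 0]])
    true_neighbors = ndimage.convolve(truth.astype(int), kernel, mode="constant")
    weights = np.ones(truth.shape, dtype=float)
    weights[(truth == 0) & (true_neighbors >= 2)] = boost
    return weights
```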
  • TABLE 1. Radiodensity Assignments

    Label  Tissue Type  Radiodensity (HU)
    #0     Clear        < -990 HU
    #1     Teeth        Segmented by CNN
    #2     Bone         not teeth, > 650 HU
    #3     Soft tissue  0 HU < not teeth < 650 HU
    #4     Liquids      -800 HU < not teeth < 0 HU
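  • A sketch of the Table 1 assignment as cascading numpy masks; the CNN ‘teeth’ prediction overrides the thresholds, and the band between -990 HU and -800 HU, which Table 1 leaves unassigned, is folded into ‘clear’ here as an assumption.

```python
import numpy as np

def assign_labels(hu, tooth_mask):
    """Label a slice of HU values per Table 1; tooth_mask is the CNN
    'teeth' prediction, which takes precedence over the thresholds."""
    labels = np.zeros(hu.shape, dtype=np.uint8)  # #0 clear (hu < -800)
    labels[hu >= -800] = 4                        # #4 liquids (-800 HU .. 0 HU)
    labels[hu >= 0] = 3                           # #3 soft tissue (0 HU .. 650 HU)
    labels[hu > 650] = 2                          # #2 bone (> 650 HU)
    labels[tooth_mask] = 1                        # #1 teeth, segmented by CNN
    return labels
```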
  • the initial FCN training database can be continuously updated through a process of 3D prediction enhancement S 539 .
  • the 3D prediction enhancement process follows a similar, run-time, process employed for unknown images during implementation of the retrained FCN classifier.
  • predictions of the initial FCN training database by the retrained FCN classifier may be segmented. These segmentations may then be converted to a 3D polygonal surface. This allows for enhancement of the 3D surface model at a holistic level and with focus on the result, eliminating the laborious task of enhancing individual slices of the 3D polygonal surface.
  • the 3D polygonal surface model can then be converted back into segmentation and, upon confirming the segmentation quality, returned to a subsequent FCN training database.
  • 3D prediction enhancement comprises surface reconstruction via marching cubes, for instance, followed by manual adjustments to apply filters and correct prediction errors in the reconstructed surface.
  • the surface may be re-segmented into 2D slices and returned to the initial FCN training database, thus forming the subsequent training database.
  • the retrained FCN classifier may be further retrained on the enhanced, subsequent FCN training database S 540 in order to further improve classification accuracy.
  • the above process may be iterative.
  • the initial FCN training database, and subsequent FCN training databases formed therefrom, comprise approximately 50,000 dental images, depending upon the quality of the produced data.
  • the enhancement and retraining process may be repeated when an FCN training database doubles in size, the dental images with lowest prediction quality being enhanced, as described above, and the FCN classifier being retrained in order to adjust prediction quality.
  • FIG. 6A and FIG. 6B are illustrations of an aspect of a manually-segmented dental image of a tooth.
  • a source image 601 from a first dataset comprising the plurality of CBCT dental images, shown in FIG. 6A , may be manually segmented.
  • the user is able to manually assign labels to each pixel, a process creating ground truth data for training semantic segmentation protocols.
  • a manual segmentation via ‘brush tool’ S 630 allows for exact identification and assignment of a ‘tooth’ label to appropriate pixels of the source image 601 .
  • the manual segmentation tool may be selected from a group including but not limited to flood fill tool, smart polygon tool, and polygon tool.
  • the first dataset may be used to train a pretrained CNN classifier, such as, for instance, AlexNet, to perform per pixel predictions of ‘tooth’.
  • the pretrained CNN classifier may be further tuned according to a plurality of labeled pixel patches.
  • the manually segmented dental images described above may be converted to a plurality of pixel patches S 531 , wherein each of the plurality of generated pixel patches comprises 120 pixels surrounding a central pixel. Using pixel patches instead of individual pixels decreases training time without unnecessarily sacrificing classification accuracy.
  • the pretrained CNN classifier may be retrained on pixel patches from the ‘tooth’ class S 731 , as illustrated in FIG. 7A through FIG. 7U and wherein [1 0] indicates ‘tooth’ and [0 1] indicates ‘not tooth’, in order to predict whether the central pixel of each pixel patch belongs to the ‘tooth’ class.
  • FIG. 8A , FIG. 8B , FIG. 8C , and FIG. 8D are illustrations of aspects of a snakes segmentation of a dental image processing protocol, according to an embodiment of the present disclosure.
  • the predictions, generated for a source image 801 for instance, shown in FIG. 8A , may first be converted to segmentations.
  • the segmented predictions S 834 shown in FIG. 8B , may then be downsampled to obtain images prepared for snakes segmentation S 835 .
  • pixels of the dental images may then be thresholded according to HU S 836 , shown in FIG. 8C , wherein HU values reflect the radiodensity of a tissue and similar hues indicate similar tissue types.
  • a Gaussian model estimator may provide a speed map for snakes segmentation. The above-described snakes segmentation may then be performed as a final segmentation of the modified output of the retrained CNN classifier S 833 , as shown in FIG. 8D .
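  • For illustration, scikit-image's classic snake can play the role of the final segmentation step; this is a stand-in, as the disclosure's Gaussian-model speed map suggests a speed-driven, level-set style variant rather than the plain energy-minimizing snake shown here.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

# Toy slice standing in for a prepared (downsampled, thresholded) image
image = np.zeros((100, 100))
yy, xx = np.mgrid[:100, :100]
image[(yy - 50) ** 2 + (xx - 50) ** 2 < 20 ** 2] = 1.0

# Initialize the snake as a circle around the expected tooth region
s = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([50 + 30 * np.sin(s), 50 + 30 * np.cos(s)])

snake = active_contour(gaussian(image, sigma=2), init,
                       alpha=0.015, beta=10, gamma=0.001)
```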
  • the resulting plurality of CBCT dental images segmented via snakes segmentation form an initial FCN training database.
  • the initial FCN training database can then be used to retrain a pretrained Unet-FCN.
  • FIG. 9 is a flowchart of an aspect of a training data generation protocol, according to an embodiment of the present disclosure, wherein the predictions from the retrained FCN classifier may be improved via enhancement of a 3D model.
  • predictions of the initial FCN training database from the retrained FCN classifier may be segmented S 968 . These segmentations may then be converted to a 3D polygonal surface S 969 . This allows for enhancement of the 3D surface model S 970 at a holistic level and with focus on the result, eliminating the need to enhance individual slices of the model.
  • this reduces computational burden from five-hundred 2D slices to one 3D polygonal surface.
  • extraneous anatomical data may then be removed from the 3D surface model S 971 in order to isolate anatomical features of interest.
  • the 3D surface model can be reverted to segmentation S 972 . Segmentations may then be confirmed for quality S 973 , relative to a pre-determined error threshold, and added to a subsequent FCN training database if of sufficient quality.
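  • The S 968 to S 973 loop, sketched at a high level; every helper here (to_polygonal_surface, manual_enhance, remove_extraneous_anatomy, to_segmentation, segmentation_error) is a hypothetical placeholder for a step the flowchart names but does not implement.

```python
def enhance_training_database(volumes, classifier, error_threshold):
    """Hypothetical sketch of 3D prediction enhancement (S968-S973)."""
    accepted = []
    for volume in volumes:
        pred = classifier.predict(volume)              # segment predictions (S968)
        mesh = to_polygonal_surface(pred)              # e.g., marching cubes (S969)
        mesh = manual_enhance(mesh)                    # filters and corrections (S970)
        mesh = remove_extraneous_anatomy(mesh)         # isolate features of interest (S971)
        seg = to_segmentation(mesh, volume.shape)      # revert to 2D slices (S972)
        if segmentation_error(seg) < error_threshold:  # quality gate (S973)
            accepted.append(seg)                       # add to subsequent database
    return accepted
```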
  • FIG. 10A , FIG. 10B , and FIG. 10C are illustrations of a segmentation of teeth of a dental image after prediction by the retrained FCN classifier.
  • in FIG. 10A , an illustration of a segmentation of an anterior aspect of a dental arch from a CBCT dental image, according to an embodiment of the present disclosure, a transverse plane segmentation 1062 with overlaid ‘tooth’ predictions is observed.
  • in a full dental arch segmentation in a transverse plane 1063 , FIG. 10B illustrates a segmented FCN classifier prediction across a cross-section of the dental arch, highlighting the crowns of each tooth.
  • in a sagittal plane view of an aspect of a superior, or upper, and an inferior, or lower, dental arch 1064 in FIG. 10C , a complete view of a cross-section of an aspect of each dental arch is visible, including crowns and roots.
  • FIG. 11A , FIG. 11B , and FIG. 11C are illustrations of a segmentation of bone of a dental image after prediction by the retrained FCN classifier, according to an embodiment of the present disclosure.
  • for an unknown source image 1101 , for instance, shown in FIG. 11A , a sagittal view of the mouth of a patient wherein the skull is on the left side of the image, a ‘bone’ classification is predicted.
  • continuous regions of ‘bone’ may be identified and relative centers of mass may be compared to determine anatomic identity, in the context of the dental image plane.
  • a contiguous region of classified ‘bone’ proximate to the skull, or anatomically superior, is identified as the maxilla 1165
  • an inferior contiguous region is identified as the mandible 1166 , as shown in FIG. 11C .
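  • A sketch of the center-of-mass comparison with scipy, assuming at least two contiguous ‘bone’ regions and an image orientation in which the anatomically superior region has the smaller row coordinate.

```python
import numpy as np
from scipy import ndimage

def split_maxilla_mandible(bone_mask):
    """Identify the two largest contiguous 'bone' regions and name them
    by relative center of mass: superior = maxilla, inferior = mandible."""
    regions, n = ndimage.label(bone_mask)
    sizes = ndimage.sum(bone_mask, regions, index=range(1, n + 1))
    largest = np.argsort(sizes)[-2:] + 1                   # two largest labels
    centers = ndimage.center_of_mass(bone_mask, regions, list(largest))
    order = np.argsort([c[0] for c in centers])            # smaller row = superior
    maxilla, mandible = largest[order[0]], largest[order[1]]
    return regions == maxilla, regions == mandible
```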
  • a plurality of processed CBCT dental images generated via prediction by the retrained FCN classifier can be segmented and reconstructed to render a complex 3D model of dentition in the context of periodontal tissues, as reflected in FIG. 4 .
  • the resulting simple 3D model may be further enhanced to provide additional information related to the periodontal tissue environment. From the simple 3D model generated via marching cubes, for instance, a density measurement and distance measurement can be performed.
  • the density measurement 1254 and the distance measurement 1255 of the periodontal space may be computed from the surface reconstruction of the segmented predictions of the retrained FCN classifier S 1253 .
  • the distance measurement 1255 comprises locating and computing, for each point on the surface reconstruction describing the surface of a bone S 1220 , a distance to a nearest point on the surface reconstruction describing the root S 1221 . This distance, therefore, reflects a volume of the alveolar process wherein a tooth may move.
  • the density measurement comprises locating and computing, from each point on a surface reconstruction describing the surface of a root S 1224 , a metric of the density of the surrounding tissue S 1225 .
  • This metric may be determined on the basis of mean voxel intensities surrounding a vertex-point coordinate of the surface reconstruction, reflecting the spatial arrangement of bone and the ability of a tooth to move, therein.
  • the density measurement S 1226 and distance measurement S 1222 may be mapped to the simple 3D model, rendering it a complex 3D model.
  • varying distances are denoted via heat map, wherein regions of minimal thickness and maximal thickness are of varying hues.
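  • Complementing the distance sketch above, the density metric can be sketched at the vertex level as the mean voxel intensity in a small cube around each mesh vertex; the neighborhood radius is an assumed, tunable parameter, and vertex coordinates are assumed to be in voxel units.

```python
import numpy as np

def vertex_density(volume, verts, radius=2):
    """Mean intensity of voxels surrounding each vertex of the root
    surface reconstruction, as a proxy for local bone density."""
    densities = np.empty(len(verts))
    hi = np.array(volume.shape) - radius - 1
    for i, v in enumerate(np.rint(verts).astype(int)):
        z, y, x = np.clip(v, radius, hi)  # keep the cube inside the volume
        block = volume[z - radius:z + radius + 1,
                       y - radius:y + radius + 1,
                       x - radius:x + radius + 1]
        densities[i] = block.mean()
    return densities
```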
  • FIG. 13A and FIG. 13B are illustrations of a three-dimensional model of dentition, according to an embodiment of the present disclosure.
  • From an inferior dental arch 1302 , shown in FIG. 13A , and a superior dental arch 1303 , shown in FIG. 13B , a tooth movement plan can be developed.
  • FIG. 14A , FIG. 14B , and FIG. 14C are illustrations of a positioning of a three-dimensional model generated from a processed dental image
  • intermediary stages of tooth movement can be determined based upon a prescribed final tooth position and an initial tooth position.
  • the initial tooth position 1475 in FIG. 14A reflects a misaligned dental arch and the varying thicknesses of alveolar process surrounding the root of the tooth.
  • the roots of a lateral incisor are deep to the buccal surface of the alveolar process, whereas an adjacent tooth, or a central incisor, may be relatively superficial with respect to the buccal surface of the alveolar process.
  • a prescribing dental professional determines that a lateral incisor, indicated by the left arrow of the initial tooth alignment 1475 , need be rotated about a transverse axis, the roots of the lateral incisor being moved anteriorly and proximate to the buccal surface of the alveolar process.
  • an intermediary stage 1476 shown in FIG. 14B , with the left arrow still indicating the lateral incisor, the required movement has been initiated.
  • the changing hue of the model proximate the roots of the lateral incisor reflects this movement.
  • a final tooth position 1477 shown in FIG. 14C , may be achieved. Consequently, the determined thickness of the alveolar process between the root surface and the buccal surface of the alveolar process is decreased, as indicated by the shifting hue at the left arrow of the complex 3D model of FIG. 14C .
  • intermediary tooth positions may be determined manually according to a final tooth position, an initial tooth position, and the movements of adjacent teeth.
  • intermediary tooth positions may be determined automatically, via a path determining protocol executed by the processing circuitry.
  • determined tooth movements may be informed by a quantitative model of expected bone growth and resorption at the root, lingual, and buccal surfaces of the alveolar process. For example, as a tooth movement results in anterior rotation of a root of a tooth, bone deposition, and thus thickening, may occur on the buccal surface of the alveolar process. Concurrently, bone resorption may occur on the lingual surface of the alveolar process. Expected bone growth or bone resorption can be added to the complex 3D model of the teeth and surrounding bone during rendering of intermediary tooth movements.
  • a prescribed final tooth position may not be a realizable final tooth position due to constraints of the facial skeleton, as informed by the above-described density measurement and distance measurement.
  • a realizable final tooth position is determined, with the input of the prescribing dental professional, and in the context of function and aesthetic.
  • dental aligners may be fabricated, accordingly.
  • the dental image processing device includes a CPU 1580 which performs the processes described above.
  • the processing device may be a GPU, GPGPU, or TPU.
  • the process data and instructions may be stored in memory 1581 .
  • These processes and instructions may also be stored on a storage medium disk 1582 such as a hard drive (HDD) or portable storage medium or may be stored remotely.
  • the claimed advancements are not limited by the form of the computer-readable media on which the instructions of the inventive process are stored.
  • the instructions may be stored on CDs, DVDs, in FLASH memory, RAM, ROM, PROM, EPROM, EEPROM, hard disk or any other information processing device with which the dental image processing device communicates, such as a server or computer.
  • claimed advancements may be provided as a utility application, background daemon, or component of an operating system, or combination thereof, executing in conjunction with CPU 1580 and an operating system such as Microsoft Windows 7, UNIX, Solaris, LINUX, Apple MAC-OS and other systems known to those skilled in the art.
  • CPU 1580 may be a Xeon or Core processor from Intel of America or an Opteron processor from AMD of America, or may be other processor types that would be recognized by one of ordinary skill in the art.
  • the CPU 1580 may be implemented on an FPGA, ASIC, PLD or using discrete logic circuits, as one of ordinary skill in the art would recognize.
  • CPU 1580 may be implemented as multiple processors cooperatively working in parallel to perform the instructions of the inventive processes described above.
  • the dental image processing device in FIG. 15 also includes a network controller 1583 , such as an Intel Ethernet PRO network interface card from Intel Corporation of America, for interfacing with network 1595 .
  • the network 1595 can be a public network, such as the Internet, or a private network such as a LAN or WAN, or any combination thereof, and can also include PSTN or ISDN sub-networks.
  • the network 1595 can also be wired, such as an Ethernet network, or can be wireless such as a cellular network including EDGE, 3G and 4G wireless cellular systems.
  • the wireless network can also be WiFi, Bluetooth®, or any other wireless form of communication that is known.
  • the dental image processing device further includes a display controller 1584 , such as an NVIDIA GeForce GTX® or Quadro® graphics adaptor from NVIDIA Corporation of America for interfacing with display 1585 , such as a Hewlett Packard HPL2445w® LCD monitor.
  • a general purpose I/O interface 1586 interfaces with a keyboard and/or mouse 1587 as well as a touch screen panel 1588 on or separate from display 1585 .
  • General purpose I/O interface also connects to a variety of peripherals 1589 including printers and scanners, such as an OfficeJet® or DeskJet® from Hewlett Packard.
  • a sound controller 1590 is also provided in the dental image processing device, such as Sound Blaster X-Fi Titanium from Creative, to interface with speakers/microphone 1591 thereby providing sounds and/or music.
  • the general purpose storage controller 1592 connects the storage medium disk 1582 with communication bus 1593 , which may be an ISA, EISA, VESA, PCI, or similar, for interconnecting all of the components of the dental image processing device.
  • a description of the general features and functionality of the display 1585 , keyboard and/or mouse 1587 , as well as the display controller 1584 , storage controller 1592 , network controller 1583 , sound controller 1590 , and general purpose I/O interface 1586 is omitted herein for brevity as these features are known.
  • Embodiments of the present disclosure may also be as set forth in the following parentheticals.
  • a method for generating an intermediate position of one or more teeth of a dental arch of a patient comprising classifying, via processing circuitry, pixels of one or more medical images into classes corresponding to biological structure types, segmenting, via the processing circuitry, the classified pixels of the one or more medical images into biological structures, rendering, via the processing circuitry, a three-dimensional model of the biological structures based on the segmented classified pixels, determining, via the processing circuitry, one or more metrics, based upon the three-dimensional model, describing characteristics of a bone of the biological structures, acquiring, via the processing circuitry, a final position of each of the one or more teeth of the dental arch based upon the three-dimensional model, and generating, via the processing circuitry, the intermediate position of each of the one or more teeth of the patient based upon the one or more metrics and the acquired final position.
  • the classifying further comprises training, via the processing circuitry, a classifier on a training database, and classifying, via the processing circuitry, the pixels of the one or more medical images based upon the classifier, wherein the training database includes a corpus of reference medical images, each reference medical image comprising at least one identifiable reference biological structure associated in the training database with at least one corresponding description of the biological structure type.
  • the training further comprises training, via the processing circuitry, a first neural network according to a first dataset, training, via the processing circuitry, a second neural network according to a second dataset, the second dataset comprising a plurality of classification predictions of the first neural network, and generating, via the processing circuitry, the training database based upon a plurality of classification predictions of the second neural network.
  • one of the one or more metrics is a distance metric, the distance metric being defined as a distance between a surface of a root of one of the one or more teeth of the dental arch and a surface of an alveolar process.
  • one of the one or more metrics is a density metric, the density metric being defined as a measure of mean intensity of voxels adjacent to a central voxel.
  • a non-transitory computer-readable storage medium storing computer-readable instructions that, when executed by a computer having a processing circuitry, cause the computer to perform a method, the method comprising classifying pixels of one or more medical images into classes corresponding to biological structure types, segmenting the classified pixels of the one or more medical images into biological structures, rendering a three-dimensional model of the biological structures based on the segmented classified pixels, determining one or more metrics, based upon the three-dimensional model, describing characteristics of a bone of the biological structures, acquiring a final position of each of the one or more teeth of the dental arch based upon the three-dimensional model, and generating the intermediate position of each of the one or more teeth of the patient based upon the one or more metrics and the acquired final position.
  • the classifying further comprises training a classifier on a training database, and classifying the pixels of the one or more medical images based upon the classifier, wherein the training database includes a corpus of reference medical images, each reference medical image comprising at least one identifiable reference biological structure associated in the training database with at least one corresponding description of the biological structure type.
  • training further comprises training a first neural network according to a first dataset, training a second neural network according to a second dataset, the second dataset comprising a plurality of classification predictions of the first neural network, and generating the training database based upon a plurality of classification predictions of the second neural network.
  • one of the one or more metrics is a distance metric, the distance metric being defined as a distance between a surface of a root of one of the one or more teeth of the dental arch and a surface of an alveolar process.
  • one of the one or more metrics is a density metric, the density metric being defined as a measure of mean intensity of voxels adjacent to a central voxel.
  • An apparatus for processing of dental images comprising a processing circuitry configured to classify pixels of one or more medical images into classes corresponding to biological structure types, segment the classified pixels of the one or more medical images into biological structures, render a three-dimensional model of the biological structures based on the segmented classified pixels, determine one or more metrics, based upon the three-dimensional model, describing a bone of the biological structures, acquire a final position of each of the one or more teeth of the dental arch based upon the three-dimensional model, and generate the intermediate position of each of the one or more teeth of the patient based upon the one or more metrics and the acquired final position.
  • processing circuitry is further configured to train a classifier on a training database, and classify the pixels of the one or more medical images based upon the classifier, wherein the training database includes a corpus of reference medical images, each reference medical image comprising at least one identifiable reference biological structure associated in the training database with at least one corresponding description of the biological structure type.
  • training further comprises training a first neural network according to a first dataset, training a second neural network according to a second dataset, the second dataset comprising a plurality of classification predictions of the first neural network, and generating the training database based upon a plurality of classification predictions of the second neural network.
  • one of the one or more metrics is a distance metric, the distance metric being defined as a distance between a surface of a root of one of the one or more teeth of the dental arch and a surface of an alveolar process.
  • one of the one or more metrics is a density metric, the density metric being defined as a measure of mean intensity of voxels adjacent to a central voxel.
  • a method of pixel-based classification of medical images comprising training a neural network to perform the pixel-based classification of the medical images, the training comprising performing a first training on a classifier using a first training set of medical images, pixels of the first training set of medical images being manually labeled, applying the classifier after the first training to a second training set of medical images to classify pixels of the second training set of medical images, identifying and correcting incorrectly classified pixels of the second training set of medical images, and performing a second training on the classifier after the first training using the manually labeled first training set of medical images and the correctly classified second training set of medical images, and applying the classifier after the second training to the medical images to perform the pixel-based classification, the pixel-based classification of the medical images including assigning pixels of each medical image to a biological structure type.
  • training the neural network further comprises generating a third training set of medical images that includes the manually labeled first training set of medical images and the correctly classified second training set of medical images, allocating each medical image of the third training set of medical images to one of a first subset of the third training set of medical images or a second subset of the third training set of medical images, and performing a third training on the classifier after the second training using the first subset of the third training set of medical images.
  • training the neural network further comprises applying the classifier after the third training to the second subset of the third training set of medical images, identifying images of the second subset of the third training set of medical images that are incorrectly classified by the classifier, the identified images having a pixel classification error rate above a pixel classification threshold, reallocating a number of the identified incorrectly classified images of the second subset of the third training set of medical images to the first subset of the third training set of medical images, and reallocating a number of images, corresponding to the number of images reallocated to the first subset of the third training set of medical images, from the first subset of the third training set of medical images to the second subset of the third training set of medical images, and training the classifier after the third training using the first subset of the third training set of medical images including the reallocated images of the second subset of the third training set of medical images.
  • training the neural network further comprises applying the classifier after the third training to the second subset of the third training set of medical images, identifying images of the second subset of the third training set of medical images that are incorrectly classified, the identified images having a pixel classification error rate above a pixel classification threshold, reallocating images between the first subset of the third training set of medical images and the second subset of the third training set of medical images based on a comparison of a quantity of the identified incorrectly classified images and an identification threshold, and training the classifier after the third training using the first subset of the third training set of medical images including the reallocated images of the third training set of medical images.
  • An apparatus for pixel-based classification of medical images comprising processing circuitry configured to train a neural network to perform the pixel-based classification of the medical images, the training comprising performing a first training on a classifier using a first training set of medical images, pixels of the first training set of medical images being manually labeled, applying the classifier after the first training to a second training set of medical images to classify pixels of the second training set of medical images, identifying and correcting incorrectly classified pixels of the second training set of medical images, and performing a second training on the classifier after the first training using the manually labeled first training set of medical images and the correctly classified second training set of medical images, and apply the classifier after the second training to the medical images to perform the pixel-based classification, the pixel-based classification of the medical images including assigning pixels of each medical image to a biological structure type.
  • processing circuitry is further configured to train the neural network by generating a third training set of medical images that includes the manually labeled first training set of medical images and the correctly classified second training set of medical images, allocating each medical image of the third training set of medical images to one of a first subset of the third training set of medical images or a second subset of the third training set of medical images, and performing a third training on the classifier after the second training using the first subset of the third training set of medical images.
  • processing circuitry is further configured to train the neural network by applying the classifier after the third training to the second subset of the third training set of medical images, identifying images of the second subset of the third training set of medical images that are incorrectly classified by the classifier, the identified images having a pixel classification error rate above a pixel classification threshold, reallocating a number of the identified incorrectly classified images of the second subset of the third training set of medical images to the first subset of the third training set of medical images, and reallocating a number of images, corresponding to the number of images reallocated to the first subset of the third training set of medical images, from the first subset of the third training set of medical images to the second subset of the third training set of medical images, and training the classifier after the third training using the first subset of the third training set of medical images including the reallocated images of the second subset of the third training set of medical images.
  • processing circuitry is further configured to train the neural network by applying the classifier after the third training to the second subset of the third training set of medical images, identifying images of the second subset of the third training set of medical images that are incorrectly classified, the identified images having a pixel classification error rate above a pixel classification threshold, reallocating images between the first subset of the third training set of medical images and the second subset of the third training set of medical images based on a comparison of a quantity of the identified incorrectly classified images and an identification threshold, and training the classifier after the third training using the first subset of the third training set of medical images including the reallocated images of the third training set of medical images.
  • a non-transitory computer-readable storage medium storing computer-readable instructions that, when executed by a computer, cause the computer to perform a method of pixel-based classification of medical images, the method comprising training a neural network to perform the pixel-based classification of the medical images, the training comprising performing a first training on a classifier using a first training set of medical images, pixels of the first training set of medical images being manually labeled, applying the classifier after the first training to a second training set of medical images to classify pixels of the second training set of medical images, identifying and correcting incorrectly classified pixels of the second training set of medical images, and performing a second training on the classifier after the first training using the manually labeled first training set of medical images and the correctly classified second training set of medical images; and applying the classifier after the second training to the medical images to perform the pixel-based classification, the pixel-based classification of the medical images including assigning pixels of each medical image to a biological structure type.
  • training the neural network further comprises generating a third training set of medical images that includes the manually labeled first training set of medical images and the correctly classified second training set of medical images, allocating each medical image of the third training set of medical images to one of a first subset of the third training set of medical images or a second subset of the third training set of medical images, and performing a third training on the classifier after the second training using the first subset of the third training set of medical images.
  • the training the neural network further comprises applying the classifier after the third training to the second subset of the third training set of medical images, identifying images of the second subset of the third training set of medical images that are incorrectly classified by the classifier, the identified images having a pixel classification error rate above a pixel classification threshold, reallocating a number of the identified incorrectly classified images of the second subset of the third training set of medical images to the first subset of the third training set of medical images, and reallocating a number of images, corresponding to the number of images reallocated to the first subset of the third training set of medical images, from the first subset of the third training set of medical images to the second subset of the third training set of medical images, and training the classifier after the third training using the first subset of the third training set of medical images including the reallocated images of the second subset of the third training set of medical images.
  • the training the neural network further comprises applying the classifier after the third training to the second subset of the third training set of medical images, identifying images of the second subset of the third training set of medical images that are incorrectly classified, the identified images having a pixel classification error rate above a pixel classification threshold, reallocating images between the first subset of the third training set of medical images and the second subset of the third training set of medical images based on a comparison of a quantity of the identified incorrectly classified images and an identification threshold, and training the classifier after the third training using the first subset of the third training set of medical images including the reallocated images of the third training set of medical images.

Abstract

The present disclosure relates to a method of pixel-based classification of medical images. The method includes training a neural network to perform the pixel-based classification, the training comprising performing a first training on a classifier using a first training set of medical images, pixels of the first training set of medical images being manually labeled as a biological structure type, applying the classifier after the first training to a second training set of medical images, identifying and correcting incorrectly classified pixels of the second training set of medical images, and performing a second training on the classifier after the first training using the manually labeled first training set and the correctly classified second training set, and applying the classifier after the second training to the medical images to perform the pixel-based classification.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a divisional of U.S. application Ser. No. 16/929,588, filed Jul. 15, 2020, which is a continuation of U.S. application Ser. No. 16/017,687, filed Jun. 25, 2018, (now U.S. Pat. No. 10,748,651), which claims priority to Eurasian Patent Application No. 201700561, filed Nov. 16, 2017, the content of which is hereby incorporated herein by reference in its entirety.
  • BACKGROUND Field of the Disclosure
  • The present disclosure relates to a dental image processing protocol for the design of dental aligners.
  • Description of the Related Art
  • Orthodontics, generally, and dental alignment, in particular, is a well-developed area of dental care. For patients with misaligned teeth, traditional braces or, more recently, clear aligners, offer a strategy for improved dental function and aesthetics through gradual teeth movements. These gradual teeth movements slowly move a crown of a tooth until a desired final position is reached. These approaches, however, fail to appropriately consider the impact of corresponding root movements, in the context of surrounding soft and hard tissues, on the final position of the crown, focusing instead on an aesthetically and functionally ideal crown position. An approach for determining crown position that adequately incorporates the impact of root movement and the root environment has yet to be developed.
  • The foregoing “Background” description is for the purpose of generally presenting the context of the disclosure. Work of the inventors, to the extent it is described in this background section, as well as aspects of the description which may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present invention.
  • SUMMARY
  • The present disclosure relates to a method, apparatus, and computer-readable medium comprising processing circuitry configured to classify pixels of one or more medical images into classes corresponding to biological structure types, segment the classified pixels of the one or more medical images into biological structures, render a three-dimensional model of the biological structures based on the segmented classified pixels, determine one or more metrics, based upon the three-dimensional model, describing a bone of the biological structures, acquire a final position of each of one or more teeth of a dental arch based upon the three-dimensional model, and generate an intermediate position of each of the one or more teeth of the patient based upon the one or more metrics and the acquired final position.
  • According to an embodiment, the present disclosure further relates to a method of pixel-based classification of medical images, comprising training a neural network to perform the pixel-based classification of the medical images, the training comprising performing a first training on a classifier using a first training set of medical images, pixels of the first training set of medical images being manually labeled, applying the classifier after the first training to a second training set of medical images to classify pixels of the second training set of medical images, identifying and correcting incorrectly classified pixels of the second training set of medical images, and performing a second training on the classifier after the first training using the manually labeled first training set of medical images and the correctly classified second training set of medical images; and applying the classifier after the second training to the medical images to perform the pixel-based classification, the pixel-based classification of the medical images including assigning pixels of each medical image to a biological structure type.
  • The foregoing paragraphs have been provided by way of general introduction, and are not intended to limit the scope of the following claims. The described embodiments, together with further advantages, will be best understood by reference to the following detailed description taken in conjunction with the accompanying drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the office upon request and payment of the necessary fee.
  • A more complete appreciation of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
  • FIG. 1 is an illustration of dental aligners, according to an embodiment of the present disclosure;
  • FIG. 2 is an illustration of a dental image of a tooth, according to an embodiment of the present disclosure;
  • FIG. 3 is a flowchart of a dental image processing protocol, according to an embodiment of the present disclosure;
  • FIG. 4 is an illustration of a complex three-dimensional model generated from a plurality of processed dental images and annotated with a surface heat map, according to an embodiment of the present disclosure;
  • FIG. 5 is a flowchart of an aspect of a training protocol of a dental image processing protocol, according to an embodiment of the present disclosure;
  • FIG. 6A is an illustration of an aspect of a manually-segmented dental image of a tooth, according to an embodiment of the present disclosure;
  • FIG. 6B is an illustration of an aspect of a manually-segmented dental image of a tooth, according to an embodiment of the present disclosure;
  • FIG. 7A is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure;
  • FIG. 7B is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure;
  • FIG. 7C is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure;
  • FIG. 7D is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure;
  • FIG. 7E is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure;
  • FIG. 7F is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure;
  • FIG. 7G is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure;
  • FIG. 7H is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure;
  • FIG. 7I is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure;
  • FIG. 7J is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure;
  • FIG. 7K is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure;
  • FIG. 7L is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure;
  • FIG. 7M is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure;
  • FIG. 7N is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure;
  • FIG. 7O is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure;
  • FIG. 7P is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure;
  • FIG. 7Q is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure;
  • FIG. 7R is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure;
  • FIG. 7S is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure;
  • FIG. 7T is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure;
  • FIG. 7U is an illustration of an exemplary pixel patch of a dental image processing protocol, according to an embodiment of the present disclosure;
  • FIG. 8A is an illustration of an aspect of a snakes segmentation of a dental image processing protocol, according to an embodiment of the present disclosure;
  • FIG. 8B is an illustration of an aspect of a snakes segmentation of a dental image processing protocol, according to an embodiment of the present disclosure;
  • FIG. 8C is an illustration of an aspect of a snakes segmentation of a dental image processing protocol, according to an embodiment of the present disclosure;
  • FIG. 8D is an illustration of an aspect of a snakes segmentation of a dental image processing protocol, according to an embodiment of the present disclosure;
  • FIG. 9 is a flowchart of an aspect of a training data generation protocol, according to an embodiment of the present disclosure;
  • FIG. 10A is an illustration of a segmentation of teeth of a dental image, according to an embodiment of the present disclosure;
  • FIG. 10B is an illustration of a segmentation of teeth of a dental image, according to an embodiment of the present disclosure;
  • FIG. 10C is an illustration of a segmentation of teeth of a dental image, according to an embodiment of the present disclosure;
  • FIG. 11A is an illustration of a source image of a segmentation of bone of a dental image, according to an embodiment of the present disclosure;
  • FIG. 11B is an illustration of a segmentation of bone of a dental image, according to an embodiment of the present disclosure;
  • FIG. 11C is an illustration of a segmentation of bone of a dental image, according to an embodiment of the present disclosure;
  • FIG. 12A is a flowchart of a determination of a distance metric of a three-dimensional model, according to an embodiment of the present disclosure;
  • FIG. 12B is a flowchart of a determination of a density metric of a three-dimensional model, according to an embodiment of the present disclosure;
  • FIG. 13A is an illustration of a three-dimensional model of dentition, according to an embodiment of the present disclosure;
  • FIG. 13B is an illustration of a three-dimensional model of dentition, according to an embodiment of the present disclosure;
  • FIG. 14A is an illustration of a three-dimensional model of an initial tooth position, according to an embodiment of the present disclosure;
  • FIG. 14B is an illustration of a three-dimensional model of an intermediary tooth position, according to an embodiment of the present disclosure;
  • FIG. 14C is an illustration of a three-dimensional model of a final tooth position, according to an embodiment of the present disclosure; and
  • FIG. 15 is a schematic of exemplary hardware for implementation of a dental image processing protocol, according to an embodiment of the present disclosure.
  • DETAILED DESCRIPTION
  • The terms “a” or “an”, as used herein, are defined as one or more than one. The term “plurality”, as used herein, is defined as two or more than two. The term “another”, as used herein, is defined as at least a second or more. The terms “including” and/or “having”, as used herein, are defined as comprising (i.e., open language). Reference throughout this document to “one embodiment”, “certain embodiments”, “an embodiment”, “an implementation”, “an example” or similar terms means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of such phrases in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments without limitation.
  • Currently, orthodontists and dental technicians develop tooth movement plans based upon initial and ideal final crown positions. Considering only the crowns of the teeth, however, while ignoring root movements and the root environment, makes it possible that root collisions or other damage to the tooth or surrounding bone tissue may occur at intermediary tooth positions. In an example, an intermediary tooth position may result in collision of the roots of adjacent teeth. In another example, a thickness of the alveolar process, the bone tissue surrounding a tooth root, can limit the ability of the tooth root to move to an intermediary position. When the surrounding bone tissue is of insufficient thickness, a realizable movement of a tooth may be less than an ideal movement of the tooth, resulting in impaired treatment and sub-optimal tooth function and aesthetics. In yet another example, varying densities of periodontal bone can impact potential root movements and realignment thereof.
  • Based upon the insufficiencies of current methodologies, the present disclosure describes an orthodontic treatment approach that considers an evaluation of the condition of the tissues surrounding the tooth and the alveolar process, in particular. Moreover, the evaluation of the condition of the tissues surrounding the tooth is patient-specific, reflecting the unique density and thickness of an individual patient's periodontal bone.
  • FIG. 1 is an illustration of dental aligners, according to an embodiment of the present disclosure. Following a determination of an initial tooth position and a realizable final tooth position, intermediary tooth movements can be determined and dental aligners 100 can be fabricated, accordingly, to slowly move and realign a patient's teeth. Oftentimes, however, as described above, these determinations are based only upon an ideal final tooth position and the position of the crown of the tooth relative to adjacent teeth, which can lead to root, periodontal ligament, or periodontal bone damage upon movement. In order to incorporate information related to the environment of the tooth and surrounding tissues during dental aligner 100 fabrication, an approach for identifying tissue types, generating three-dimensional (3D) models, and determining periodontal tissue characteristics thereof is required. To this end, it becomes necessary to develop a strategy for discerning soft tissues from hard tissues and tooth roots from surrounding bone of varying densities. FIG. 2 is an illustration of a dental image of a tooth, according to an embodiment of the present disclosure. In an embodiment, a dental image of a tooth may be but is not limited to an image acquired via intraoral optical imaging, impressions, dental models, ultrasound, or radiography. In an example, a plurality of images, or slices, may be acquired via radiography and reconstructed to render a 3D model. With reference again to FIG. 2, a tooth 204 comprises a crown 206 and one or more roots 205. The one or more roots 205 are resident within an alveolar process, a thickened ridge of bone containing dental alveoli, or tooth sockets. The alveolar process comprises cortical bone 215, a compact, relatively dense bone, and cancellous bone 210, a spongy, relatively porous bone. Together, cortical bone 215 and cancellous bone 210 provide a strong foundation in which the one or more roots 205 of the tooth 204 are anchored. As related to the present disclosure, cortical bone 215 and cancellous bone 210, as periodontal tissues, contribute to the determination of possible movements of a tooth.
  • In planning a tooth movement such that the tooth and periodontal environment are considered concurrently, a variety of structures, including the above-described features, must be identified. Moreover, once these features have been identified for a single two-dimensional (2D) dental image, the same can be performed for additional 2D dental images, or slices, until a 3D model can be rendered therefrom. In addition to providing for aesthetic evaluation, a 3D model synthesizes information regarding periodontal tissue density and thickness, thereby bounding possible tooth movements and providing a prescribing dental professional a tool from which to determine a tooth movement. The process alluded to above is described in FIG. 3, a flowchart of a dental image processing protocol implemented via a dental image processing device comprising processing circuitry.
  • According to an embodiment of the present disclosure, the dental image processing protocol described herein can be appreciated in the context of a full dental arch or an individual tooth. Initially, digital representations of an initial position of a patient's teeth must be acquired. A variety of techniques including but not limited to impressions, dental scanning of impressions or dental models, intraoral scanners for digital impressions, intraoral X-ray, ultrasound, and computed tomography can be used individually or in combination to acquire digital representations of the initial position of the patient's teeth. In an embodiment of the present disclosure, in order to create a digital impression, an intraoral scanner S341 may be employed to acquire topographical characteristics of crowns of the teeth. The intraoral scanner may employ a modality selected from a group including but not limited to lasers, infrared light, and structured light. So that tooth movements can be determined in the context of crowns and roots, a radiographic imaging modality may be employed in order to acquire spatial information relating to the roots and periodontal tissues, including soft tissues and hard tissues (e.g. the alveolar process). In an embodiment, the radiographic imaging modality may be selected from the group including but not limited to projection radiography, computed tomography (CT), dual energy X-ray absorptiometry, fluoroscopy, and contrast radiography. In an example, the radiographic imaging modality may be cone beam computed tomography (CBCT) S350. Radiographic images may comprise multi-planar radiographic images including but not limited to sagittal, transverse, and coronal planes. It should be appreciated that, apart from radiographic techniques, a variety of imaging modalities including but not limited to ultrasound may be used for acquisition of images describing spatial information of the roots and periodontal tissues.
  • Following acquisition of a plurality of dental images of a patient via CBCT, various biological structures, including the teeth and the jaw, must be digitally identified so that they can be later incorporated into a holistic 3D model of the dental environment. As an alternative to manual classification of individual biological structures of a plurality of dental images, the present disclosure employs a machine learning approach, a platform for rapid evaluation of the plurality of dental images, to classify the various biological structures of each dental image of a patient S351. In an exemplary embodiment, the machine learning approach may be a fully convolutional neural network (FCN). Unlike similar approaches that perform patch-wise predictions, an FCN classifier evaluates and predicts the classification of each pixel of an unknown image. Per-pixel classification allows the resulting predictions to be segmented into multiple tissue classes S352, combining adjacent pixels of similar classification and density and defining the shape of each type of tissue, or biological structure, as building blocks for a 3D model. In turn, via a surface reconstruction technique such as, for instance, marching cubes, the plurality of classified and segmented dental images of a patient is combined and reconstructed to form a 3D model of dentition, including a patient's dental arches and surrounding tissues S353. In an exemplary embodiment, the surface reconstruction technique may be selected from a group including but not limited to marching tetrahedrons and marching cubes, a sequential surface rendering model wherein a polygonal mesh is fitted to a surface interacting with pixels from adjacent slices.
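  • By way of illustration, this per-class surface reconstruction step may be sketched as follows using the marching cubes implementation of scikit-image. The function name, default voxel spacing, and class encoding are illustrative assumptions, not part of the disclosed protocol.

```python
import numpy as np
from skimage import measure

def reconstruct_surface(label_slices, class_id, voxel_spacing=(0.3, 0.3, 0.3)):
    """Stack per-slice class predictions and extract a polygonal mesh.

    label_slices:  sequence of 2D integer arrays of per-pixel classes.
    class_id:      the tissue class to reconstruct (e.g. 'tooth').
    voxel_spacing: CBCT voxel size in mm (illustrative value).
    """
    volume = np.stack(label_slices, axis=0)           # (slices, rows, cols)
    binary = (volume == class_id).astype(np.float32)  # isolate one class
    # Marching cubes fits a triangle mesh to the 0.5 iso-surface.
    verts, faces, normals, _ = measure.marching_cubes(
        binary, level=0.5, spacing=voxel_spacing)
    return verts, faces, normals
```

  • Applied class by class, the returned vertex and face arrays provide the polygonal building blocks that are subsequently merged into the simple 3D model.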
  • The reconstructed surfaces can then be integrated into the digital impressions acquired via intraoral 3D imaging to create a simple 3D model of the mouth of a patient S343, including crowns, roots and periodontal tissues. In an embodiment, this data integration may be accomplished via point-based alignment of two surface models, an interactive method of registration of polygonal meshes. First, at least three corresponding points on each of the two surface models are selected. Next, a transformation matrix is computed and applied via translation and rotation or quaternion. If the resulting registration error between the two surface models is greater than a pre-determined threshold, a new transformation matrix must be determined and applied such that the resulting error is less than the pre-determined threshold. In an exemplary embodiment, the at least three corresponding points on each of the two surface models are selected manually. In another embodiment, corresponding points on each of the two surface models may be selected automatically via software, wherein the corresponding points are features selected from a group including but not limited to cusps, grooves, offsets and pits of molars, or central points of cutting edges on incisors.
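  • The disclosure leaves the computation of the transformation matrix open ("translation and rotation or quaternion"). A common closed-form choice, assumed for the sketch below, is the Kabsch (SVD-based) solution over the at least three corresponding points; all names are illustrative.

```python
import numpy as np

def rigid_transform(points_a, points_b):
    """Best-fit rotation R and translation t mapping points_a onto points_b.

    points_a, points_b: (N, 3) arrays of N >= 3 corresponding points picked
    on the two surface models (e.g. molar cusps or incisor edge midpoints).
    """
    ca, cb = points_a.mean(axis=0), points_b.mean(axis=0)
    H = (points_a - ca).T @ (points_b - cb)     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

def residual(points_a, points_b, R, t):
    """Mean registration error after applying (R, t) to points_a."""
    return np.linalg.norm((R @ points_a.T).T + t - points_b, axis=1).mean()
```

  • If the residual exceeds the pre-determined threshold, new corresponding points may be selected and the transformation recomputed, as described above.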
  • Concurrently, in order to better define the periodontal environment of the simple 3D model and provide a realistic model of potential tooth and root movement, characteristics of the alveolar process must be determined. To this end, a density measurement S354 and a distance measurement S355 of the periodontal space may be computed from the surface reconstruction, or simple 3D model, of the classified and segmented predictions. The density measurement comprises computing, from each point on a mesh describing the surface of a root, a metric of the density of the surrounding tissue. This metric may be determined on the basis of mean voxel intensities surrounding a vertex-point coordinate of the segmentation, reflecting the spatial qualities of bone and the ability of a tooth to move, therein. The distance measurement comprises computing, for each point on the mesh describing the surface of a bone, such as the buccal surface of the alveolar bone, a distance to a nearest point on the mesh describing the root. This distance, therefore, reflects a volume of the alveolar process wherein a tooth may move.
  • Once calculated for each point within the mesh, or model, the density measurement and distance measurement may be mapped to the simple 3D model, creating a complex 3D model S356. To allow for rapid visualization of the distance measurement, varying distances are denoted via heat map, wherein regions of minimal thickness and regions of maximal thickness are represented with varying colors.
  • Having now generated a complex 3D model of a patient's mouth, including dental arches and alveolar process, annotated to denote tissue thicknesses and densities, the complex 3D model may be manipulated to determine realizable final tooth positions S360. In doing so, intermediary tooth positions and movements necessary to achieve such positions may be determined. FIG. 4 is an illustration of a complex 3D model generated from a plurality of processed dental images, annotated with a surface heat map, according to an embodiment of the present disclosure.
  • As described above, following acquisition and processing of intraoral 3D scans and radiographic images, surface mesh data may be integrated to create a 3D model of an initial position of the dental arches of a patient. A heat map, overlaid on the 3D model, indicates local thicknesses of the alveolar process, the periodontal environment therein varying across individual teeth of the dental arches. For example, with reference to FIG. 4, a canine 408 and an adjacent premolar 407 have disparate local periodontal environments. The canine 408 may be positioned closer to a buccal surface of an alveolar process 409, as indicated by a darker shade, intense red, while the premolar 407 may be positioned posteriorly with respect to the buccal surface of the alveolar process 409, proximate to a lingual surface of the alveolar process 409, as indicated by light shades of the heat map. This heat map feature allows a prescribing dental professional to visualize possible and impossible tooth movements and select appropriate intermediary movements within skeletal constraints.
  • Critical to the success of the dental image processing protocol is the ability to accurately classify biological structures, or tissues, of dental images, and of radiographic slices in particular. To this end, FIG. 5 is a flowchart of an aspect of a training protocol of a classification approach of a dental image processing protocol, according to an embodiment of the present disclosure. Generally, the training approach prepares an FCN classifier to be applied to the binary classification of ‘bone’ or the binary classification of ‘teeth’. Specifically, the training approach provides annotated training data, manually or automatically generated, directed to the above-described classes. In an example, when applied, the FCN classifier is meant to predict, for each pixel of a slice, class 1 if a tooth is present and class 0 if a tooth is not present.
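  • As a minimal sketch of such per-pixel binary prediction, assuming a PyTorch environment, a generic two-class segmentation network may be applied as follows. The torchvision FCN backbone used here is only a stand-in for the retrained Unet-FCN described below, and the absence of input normalization is a simplification.

```python
import torch
from torchvision.models.segmentation import fcn_resnet50

# Stand-in two-class network; the disclosed protocol retrains a Unet-FCN.
model = fcn_resnet50(num_classes=2)
model.eval()

def predict_tooth_mask(slice_hu):
    """Per-pixel 'tooth'/'not tooth' prediction for one CBCT slice.

    slice_hu: (H, W) float tensor of pixel intensities (raw HU values
    would, in practice, be rescaled before inference).
    Returns an (H, W) tensor: class 1 where a tooth is predicted, else 0.
    """
    x = slice_hu.unsqueeze(0).repeat(3, 1, 1).unsqueeze(0)  # 1 x 3 x H x W
    with torch.no_grad():
        logits = model(x)["out"]            # 1 x 2 x H x W class scores
    return logits.argmax(dim=1)[0]          # per-pixel class: 0 or 1
```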
  • The process of generating the annotated training data is described in FIG. 5. Generally, the annotated training data may be based, in part, on combinations of pixel intensities that are considered visual features. Initially, from a first dataset comprising a plurality of CBCT dental images, a manual segmentation tool may be applied in order to label the ‘tooth’ pixels S530. In an embodiment, the manual segmentation tool can be a ‘brush tool’. Following manual labeling of the ‘tooth’ pixels of each dental image of the dataset, a pretrained convolutional neural network (CNN), and the CNN classifier therein, may be trained to perform per-pixel predictions of ‘tooth’ based upon the labeled images of the first dataset; the resulting network is referred to herein as the “retrained CNN”. In an embodiment, the pretrained CNN classifier can be based on AlexNet. In another embodiment, the pretrained CNN classifier can be further tuned according to a plurality of labeled pixel patches S532. To this end, the pixel-wise manually segmented dental images described above are converted to a plurality of pixel patches S531, wherein each of the plurality of generated pixel patches may comprise 120 pixels surrounding a central pixel. Compared with pixel-wise training, patch-wise training decreases training time without unnecessarily sacrificing resulting classification accuracy.
  • Following training, the retrained CNN classifier may be applied to a second dataset comprising a plurality of CBCT dental images. In order to prepare the resulting retrained CNN classifier predictions for active contour modeling, or snakes segmentation S533, the retrained CNN classifier predictions can be converted to segmentations S534. The segmented predictions may then be downsampled to obtain prepared images for snakes segmentation S535, a framework in computer vision for delineating an object outline from a 2D image. To this end, pixels of the dental images may first be thresholded according to Hounsfield units (HU), wherein HU values reflect the radiodensity of a biological structure S536. Table 1 describes exemplary HU thresholding values, according to an exemplary embodiment of the present disclosure; a sketch applying these thresholds follows Table 1 below. Every second pixel may be taken into a sample dataset to generate a Gaussian model estimator for given biological structure types, or classes S537, thus providing a speed map for snakes segmentation. The above-described snakes segmentation may be performed as a final segmentation of the modified output of the retrained CNN classifier. The resulting plurality of CBCT dental images segmented via snakes segmentation forms an initial FCN training database. The initial FCN training database can then be used to retrain a pretrained Unet-FCN S538, referred to herein as “retrained FCN”, and an FCN classifier therein. In an embodiment, false pixels adjacent to two true pixels may have added weight.
  • TABLE 1
    Radiodensity Assignments
    Label Tissue Type Radiodensity (HU)
    #0 Clear <−990 HU
    #1 Teeth Segmented by CNN
    #2 Bone Not teeth > 650 HU
    #3 Soft tissue 0 HU < not teeth < 650 HU
    #4 Liquids −800 HU < not teeth < 0 HU
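  • A minimal sketch of the thresholding of Table 1 follows, assuming NumPy arrays and treating the CNN ‘tooth’ mask as taking precedence over the HU rules. Boundary handling and names are illustrative, and HU values falling in the gaps of Table 1 (e.g. between −990 HU and −800 HU) retain the default ‘clear’ label.

```python
import numpy as np

# Labels follow Table 1: 0 clear, 1 teeth, 2 bone, 3 soft tissue, 4 liquids.
def threshold_slice(hu_slice, tooth_mask):
    """Assign Table 1 radiodensity labels to one CBCT slice.

    hu_slice:   2D float array of Hounsfield units.
    tooth_mask: 2D bool array, the retrained CNN 'tooth' prediction,
                which overrides the HU-based rules.
    """
    labels = np.zeros_like(hu_slice, dtype=np.uint8)            # 0: clear
    not_teeth = ~tooth_mask
    labels[not_teeth & (hu_slice > 650)] = 2                    # bone
    labels[not_teeth & (hu_slice > 0) & (hu_slice < 650)] = 3   # soft tissue
    labels[not_teeth & (hu_slice > -800) & (hu_slice < 0)] = 4  # liquids
    labels[tooth_mask] = 1                                      # teeth
    return labels
```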
  • In order to evaluate the predictive value of the retrained FCN classifier, and improve its future predictive power, the initial FCN training database can be continuously updated through a process of 3D prediction enhancement S539. Broadly, the 3D prediction enhancement process mirrors the run-time process employed for unknown images during implementation of the retrained FCN classifier. First, predictions of the initial FCN training database by the retrained FCN classifier may be segmented. These segmentations may then be converted to a 3D polygonal surface. This allows for enhancement of the 3D surface model at a holistic level and with focus on the result, eliminating the laborious task of enhancing individual slices of the 3D polygonal surface. Once enhanced, the 3D polygonal surface model can then be converted back into segmentation and, upon confirming the segmentation quality, returned to a subsequent FCN training database.
  • Specifically, 3D prediction enhancement comprises surface reconstruction via marching cubes, for instance, followed by manual adjustments to apply filters and correct prediction errors in the reconstructed surface. With manual adjustments completed at the level of the 3D model, the surface may be re-segmented into 2D slices and returned to the initial FCN training database, thus forming the subsequent training database. When the subsequent FCN training database has doubled in size, the retrained FCN classifier may be further retrained on the enhanced, subsequent FCN training database S540 in order to further improve classification accuracy. As described, the above process may be iterative.
  • According to an embodiment, the initial FCN training database and subsequent FCN training databases, therefrom, comprise approximately 50,000 dental images, based upon the quality of the produced data. The enhancement and retraining process may be repeated when an FCN training database doubles in size, the dental images with lowest prediction quality being enhanced, as described above, and the FCN classifier being retrained in order to adjust prediction quality.
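  • At the level of control flow only, the iterative enhancement and retraining protocol may be sketched as follows. The callables retrain, score, and enhance are hypothetical placeholders for the FCN training step, the prediction-quality scoring, and the 3D-level correction (surface reconstruction, manual adjustment, and re-segmentation into 2D slices), respectively; the enhanced fraction is likewise illustrative.

```python
def enhancement_round(classifier, database, retrain, score, enhance):
    """One round of 3D prediction enhancement and FCN retraining.

    database: list of (image_stack, segmentation) training pairs.
    retrain / score / enhance: hypothetical callables (see lead-in).
    """
    # Enhance the entries with the lowest prediction quality first.
    database.sort(key=lambda pair: score(classifier, *pair))
    n_worst = max(1, len(database) // 10)           # illustrative fraction
    for i in range(n_worst):
        image, seg = database[i]
        database[i] = (image, enhance(image, seg))  # enhance in 3D, re-segment
    # Retrain the FCN classifier on the enhanced training database.
    return retrain(classifier, database)
```

  • As described above, such a round may be triggered whenever the FCN training database has doubled in size.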
  • Each of the steps of the above-described training protocol will be further discussed below.
  • FIG. 6A and FIG. 6B are illustrations of an aspect of a manually-segmented dental image of a tooth. According to an embodiment of the present disclosure, a source image 601 from a first dataset comprising the plurality of CBCT dental images, shown in FIG. 6A, may be manually segmented. Through manual segmentation, the user is able to manually assign labels to each pixel, a process creating ground truth data for training semantic segmentation protocols. In FIG. 6B, a manual segmentation via ‘brush tool’ S630, for instance, allows for exact identification and assignment of a ‘tooth’ label to appropriate pixels of the source image 601. In another embodiment, the manual segmentation tool may be selected from a group including but not limited to flood fill tool, smart polygon tool, and polygon tool.
  • Following manual labeling of the ‘tooth’ pixels of each dental image of the first dataset, the first dataset may be used to train a pretrained CNN classifier, such as, for instance, AlexNet, to perform per-pixel predictions of ‘tooth’. In an embodiment, the pretrained CNN classifier may be further tuned according to a plurality of labeled pixel patches. To this end, the manually segmented dental images described above may be converted to a plurality of pixel patches S531, wherein each of the plurality of generated pixel patches comprises 120 pixels surrounding a central pixel. Using pixel patches instead of individual pixels decreases training time without unnecessarily sacrificing classification accuracy. FIG. 7A through FIG. 7U are illustrations of exemplary pixel patches of a dental image processing protocol, according to an embodiment of the present disclosure. The pretrained CNN classifier may be retrained on pixel patches from the ‘tooth’ class S731, as illustrated in FIG. 7A through FIG. 7U, wherein [1 0] indicates ‘tooth’ and [0 1] indicates ‘not tooth’, in order to predict whether the central pixel of each pixel patch belongs to the ‘tooth’ class.
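  • The conversion of a manually segmented slice into labeled pixel patches may be sketched as follows. The stated 120 pixels surrounding a central pixel are assumed here to form an 11×11 window (121 pixels including the center); the window geometry and stride are illustrative assumptions.

```python
import numpy as np

PATCH = 11  # assumed 11 x 11 window: 120 neighbors around the central pixel

def extract_patches(image, tooth_mask, stride=4):
    """Cut one-hot-labeled pixel patches from a manually segmented slice.

    image:      2D array of pixel intensities.
    tooth_mask: 2D bool array from the manual 'brush tool' segmentation.
    Labels follow FIG. 7: [1, 0] for 'tooth', [0, 1] for 'not tooth',
    keyed to the class of each patch's central pixel.
    """
    half = PATCH // 2
    patches, labels = [], []
    for r in range(half, image.shape[0] - half, stride):
        for c in range(half, image.shape[1] - half, stride):
            patches.append(image[r - half:r + half + 1,
                                 c - half:c + half + 1])
            labels.append([1, 0] if tooth_mask[r, c] else [0, 1])
    return np.stack(patches), np.array(labels)
```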
  • Following training, the retrained CNN classifier may be applied to a second dataset comprising a plurality of CBCT dental images. FIG. 8A, FIG. 8B, FIG. 8C, and FIG. 8D are illustrations of aspects of a snakes segmentation of a dental image processing protocol, according to an embodiment of the present disclosure. In order to prepare the retrained CNN classifier predictions for snakes segmentation S833, the predictions, generated for a source image 801, for instance, shown in FIG. 8A, may first be converted to segmentations. The segmented predictions S834, shown in FIG. 8B, may then be downsampled to obtain images prepared for snakes segmentation S835. To this end, pixels of the dental images may then be thresholded according to HU S836, shown in FIG. 8C, wherein HU values reflect the radiodensity of a tissue and similar hues indicate similar tissue types. A Gaussian model estimator may provide a speed map for snakes segmentation. The above-described snakes segmentation may then be performed as a final segmentation of the modified output of the retrained CNN classifier S833, as shown in FIG. 8D.
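  • The snakes refinement step may be sketched with the morphological geodesic active contours of scikit-image. The inverse-Gaussian gradient below stands in for the Gaussian-model speed map described above, both serving to slow the contour near class boundaries; all names and iteration counts are illustrative.

```python
import numpy as np
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

def snakes_refine(slice_image, coarse_mask, iterations=100):
    """Refine a coarse CNN prediction with an active-contour (snakes) step.

    slice_image: 2D float array (the downsampled, HU-thresholded slice).
    coarse_mask: 2D bool array, the converted CNN prediction, used as the
                 initial level set.
    """
    speed = inverse_gaussian_gradient(slice_image.astype(float))
    refined = morphological_geodesic_active_contour(
        speed, iterations,
        init_level_set=coarse_mask.astype(np.int8),
        smoothing=1, balloon=0)
    return refined.astype(bool)
```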
  • The resulting plurality of CBCT dental images segmented via snakes segmentation form an initial FCN training database. The initial FCN training database can then be used to retrain a pretrained Unet-FCN.
  • As described above, and with reference to FIG. 9, predictions from the retrained FCN classifier may be evaluated and improved via a 3D model enhancement. FIG. 9 is a flowchart of an aspect of a training data generation protocol, according to an embodiment of the present disclosure, wherein the predictions from the retrained FCN classifier may be improved via enhancement of a 3D model. First, predictions of the initial FCN training database from the retrained FCN classifier may be segmented S968. These segmentations may then be converted to a 3D polygonal surface S969. This allows for enhancement of the 3D surface model S970 at a holistic level and with focus on the result, eliminating the need to enhance individual slices of the model. In an example, this reduces computational burden from five-hundred 2D slices to one 3D polygonal surface. Having improved the 3D surface model by applying filters and correcting prediction errors, extraneous anatomical data may then be removed from the 3D surface model S971 in order to isolate anatomical features of interest. Once ‘enhanced’, the 3D surface model can be reverted to segmentation S972. Segmentations may then be confirmed for quality S973, relative to a pre-determined error threshold, and added to a subsequent FCN training database if of sufficient quality.
  • Following enhancement of CBCT dental images of the subsequent FCN training database, as described above, and retraining of the retrained FCN classifier, the run-time protocol of FIG. 3 may be implemented with the retrained FCN classifier. FIG. 10A, FIG. 10B, and FIG. 10C are illustrations of a segmentation of teeth of a dental image after prediction by the retrained FCN classifier. In FIG. 10A, an illustration of a segmentation of an anterior aspect of a dental arch from a CBCT dental image, according to an embodiment of the present disclosure, a transverse plane segmentation 1062 with overlaid ‘tooth’ predictions is observed. A full dental arch segmentation in a transverse plane 1063, in FIG. 10B, illustrates a segmented FCN classifier prediction across a cross-section of the dental arch, highlighting the crowns of each tooth. In a sagittal plane view of an aspect of a superior, or upper, and an inferior, or lower, dental arch 1064, in FIG. 10C, a complete view of a cross-section of an aspect of each dental arch is visible, including crowns and roots.
  • Conversely, FIG. 11A, FIG. 11B, and FIG. 11C are illustrations of a segmentation of bone of a dental image after prediction by the retrained FCN classifier, according to an embodiment of the present disclosure. After applying the retrained FCN classifier to an unknown source image 1101, for instance, shown in FIG. 11A, a sagittal view of the mouth of a patient, wherein the skull is on the left side of the image, a ‘bone’ classification is predicted. Following classification of ‘bone’, contiguous regions of ‘bone’ may be identified and relative centers of mass may be compared to determine anatomic identity, in the context of the dental image plane. As shown in FIG. 11B, a contiguous region of classified ‘bone’ proximate to the skull, or anatomically superior, is identified as the maxilla 1165, while an inferior contiguous region is identified as the mandible 1166, as shown in FIG. 11C.
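  • The maxilla/mandible assignment may be sketched as follows, assuming a sagittal slice oriented with the anatomically superior direction at row zero; for the orientation of FIG. 11A, with the skull at the left, the column coordinate would be compared instead. The use of the two largest contiguous regions is an illustrative safeguard against small fragments.

```python
import numpy as np
from scipy import ndimage

def label_jaws(bone_mask):
    """Split an FCN 'bone' mask into maxilla and mandible.

    bone_mask: 2D bool array of pixels classified as 'bone' in a sagittal
    slice, with the superior direction at row 0 (assumed orientation).
    """
    regions, count = ndimage.label(bone_mask)   # contiguous 'bone' regions
    if count < 2:
        raise ValueError("expected at least two contiguous bone regions")
    # Keep the two largest regions, ignoring small fragments.
    sizes = ndimage.sum(bone_mask, regions, index=range(1, count + 1))
    a, b = np.argsort(sizes)[-2:] + 1            # label ids of the two jaws
    (ra, _), (rb, _) = ndimage.center_of_mass(bone_mask, regions, index=[a, b])
    # The superior region (smaller row coordinate) is the maxilla.
    maxilla, mandible = (a, b) if ra < rb else (b, a)
    return regions == maxilla, regions == mandible
```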
  • A plurality of processed CBCT dental images generated via prediction by the retrained FCN classifier, similar to those observed in FIG. 10A through FIG. 11C, can be segmented and reconstructed to render a complex 3D model of dentition in the context of periodontal tissues, as reflected in FIG. 4. To this end, following rendering of the segmentation of the above-described predictions via the retrained FCN classifier, the resulting simple 3D model may be further enhanced to provide additional information related to the periodontal tissue environment. From the simple 3D model generated via marching cubes, for instance, a density measurement and distance measurement can be performed. FIG. 12A and FIG. 12B are flowcharts of a determination of a distance measurement and a density measurement of a three-dimensional model, respectively, according to an embodiment of the present disclosure. To this end, the density measurement 1254 and the distance measurement 1255 of the periodontal space may be computed from the surface reconstruction of the segmented predictions of the retrained FCN classifier S1253.
  • The distance measurement 1255 comprises locating and computing, for each point on the surface reconstruction describing the surface of a bone S1220, a distance to a nearest point on the surface reconstruction describing the root S1221. This distance, therefore, reflects a volume of alveolar process wherein a tooth may move.
  • The density measurement comprises locating and computing, from each point on a surface reconstruction describing the surface of a root S1224, a metric of the density of the surrounding tissue S1225. This metric may be determined on the basis of mean voxel intensities surrounding a vertex-point coordinate of the surface reconstruction, reflecting the spatial arrangement of bone and the ability of a tooth to move, therein.
  • Once calculated for each point within the surface reconstruction, or simple 3D model, the density measurement S1226 and distance measurement S1222 may be mapped to the simple 3D model, rendering it a complex 3D model. To allow for rapid visualization of the distance measurement, with reference again to FIG. 4, varying distances are denoted via heat map, wherein regions of minimal thickness and maximal thickness are of varying hues.
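  • Both metrics may be sketched as follows, with a k-d tree supplying the nearest root-surface point for the distance measurement and a cubic voxel window supplying the mean intensity for the density measurement; the window radius and names are illustrative. The returned distances may then be normalized and passed through any color map to produce the heat map of FIG. 4.

```python
import numpy as np
from scipy.spatial import cKDTree

def distance_metric(bone_verts, root_verts):
    """Distance from each bone-surface vertex to the nearest root vertex.

    bone_verts, root_verts: (N, 3) vertex arrays from the surface
    reconstruction (e.g. the buccal alveolar surface and a root surface).
    """
    distances, _ = cKDTree(root_verts).query(bone_verts)
    return distances         # local alveolar thickness, drives the heat map

def density_metric(root_verts, volume, spacing, radius=3):
    """Mean voxel intensity around each root-surface vertex.

    volume:  3D CBCT intensity array; spacing: voxel size per axis (mm),
             as a length-3 array-like.
    radius:  half-width of the sampling window in voxels (illustrative).
    """
    densities = np.empty(len(root_verts))
    upper = np.array(volume.shape) - radius - 1
    for i, vert in enumerate(root_verts):
        z, y, x = np.clip(np.round(vert / spacing).astype(int), radius, upper)
        window = volume[z - radius:z + radius + 1,
                        y - radius:y + radius + 1,
                        x - radius:x + radius + 1]
        densities[i] = window.mean()
    return densities
```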
  • Having generated a ‘color’ mapped complex 3D model of an initial tooth position of dentition of a patient, virtual setup and visualization of intermediary tooth movements during realignment can be envisioned. In an embodiment, a dental professional can prescribe a final tooth position for each tooth of each dental arch. Further, this final tooth position can be determined in context of a control arrangement, or dentition, shown in FIG. 13A and FIG. 13B. FIG. 13A and FIG. 13B are illustrations of a three-dimensional model of dentition, according to an embodiment of the present disclosure. An inferior dental arch 1302, shown in FIG. 13A, and a superior dental arch 1303, shown in FIG. 13B, represent an optimal crown position. Using the control arrangement and optimal crown positions of FIG. 13A and FIG. 13B, a tooth movement plan can be developed.
  • Specifically, as shown in FIG. 14A, FIG. 14B, and FIG. 14C, illustrations of a positioning of a three-dimensional model generated from a processed dental image, intermediary stages of tooth movement can be determined based upon a prescribed final tooth position and an initial tooth position. The initial tooth position 1475 in FIG. 14A reflects a misaligned dental arch and the varying thicknesses of alveolar process surrounding the root of the tooth. In an example, the roots of a lateral incisor are deep to the buccal surface of the alveolar process, whereas an adjacent tooth, such as a central incisor, may be relatively superficial with respect to the buccal surface of the alveolar process. In an embodiment, a prescribing dental professional determines that a lateral incisor, indicated by the left arrow of the initial tooth alignment 1475, need be rotated about a transverse axis, the roots of the lateral incisor being moved anteriorly and proximate to the buccal surface of the alveolar process. At an intermediary stage 1476, shown in FIG. 14B, with the left arrow still indicating the lateral incisor, the required movement has been initiated. The changing hue of the model proximate the roots of the lateral incisor reflects this movement. Following subsequent intermediary movements, a final tooth position 1477, shown in FIG. 14C, may be achieved. Consequently, the determined thickness of the alveolar process between the root surface and the buccal surface of the alveolar process is decreased, as indicated by the shifting hue at the left arrow of the complex 3D model of FIG. 14C.
  • According to an embodiment, intermediary tooth positions may be determined manually according to a final tooth position, an initial tooth position, and the movements of adjacent teeth. In another embodiment, intermediary tooth positions may be determined automatically, via a path determining protocol executed by the processing circuitry.
  • In another embodiment, determined tooth movements may be informed by a quantitative model of expected bone growth and resorption at the root, lingual, and buccal surfaces of the alveolar process. For example, as a tooth movement results in anterior rotation of a root of a tooth, bone deposition, and thus thickening, may occur on the buccal surface of the alveolar process. Concurrently, bone resorption may occur on the lingual surface of the alveolar process. Expected bone growth or bone resorption can be added to the complex 3D model of the teeth and surrounding bone during rendering of intermediary tooth movements.
  • In still another embodiment, upon evaluation of a complex 3D model, a prescribed final tooth position may not be a realizable final tooth position due to constraints of the facial skeleton, as informed by the above-described density measurement and distance measurement. In such a case, a realizable final tooth position is determined, with the input of the prescribing dental professional, and in the context of function and aesthetics.
  • Having determined intermediary and final tooth positions, and with reference again to FIG. 1, dental aligners may be fabricated, accordingly.
  • Next, a hardware description of the dental image processing device according to exemplary embodiments is described with reference to FIG. 15. In FIG. 15, the dental image processing device includes a CPU 1580 which performs the processes described above/below. In another embodiment, the processing device may be a GPU, GPGPU, or TPU. The process data and instructions may be stored in memory 1581. These processes and instructions may also be stored on a storage medium disk 1582 such as a hard drive (HDD) or portable storage medium or may be stored remotely. Further, the claimed advancements are not limited by the form of the computer-readable media on which the instructions of the inventive process are stored. For example, the instructions may be stored on CDs, DVDs, in FLASH memory, RAM, ROM, PROM, EPROM, EEPROM, hard disk or any other information processing device with which the dental image processing device communicates, such as a server or computer.
  • Further, the claimed advancements may be provided as a utility application, background daemon, or component of an operating system, or combination thereof, executing in conjunction with CPU 1580 and an operating system such as Microsoft Windows 7, UNIX, Solaris, LINUX, Apple MAC-OS and other systems known to those skilled in the art.
  • The hardware elements in order to achieve the dental image processing device may be realized by various circuitry elements, known to those skilled in the art. For example, CPU 1580 may be a Xeon or Core processor from Intel of America or an Opteron processor from AMD of America, or may be other processor types that would be recognized by one of ordinary skill in the art. Alternatively, the CPU 1580 may be implemented on an FPGA, ASIC, PLD or using discrete logic circuits, as one of ordinary skill in the art would recognize. Further, CPU 1580 may be implemented as multiple processors cooperatively working in parallel to perform the instructions of the inventive processes described above.
  • The dental image processing device in FIG. 15 also includes a network controller 1583, such as an Intel Ethernet PRO network interface card from Intel Corporation of America, for interfacing with network 1595. As can be appreciated, the network 1595 can be a public network, such as the Internet, or a private network such as a LAN or WAN, or any combination thereof and can also include PSTN or ISDN sub-networks. The network 1595 can also be wired, such as an Ethernet network, or can be wireless such as a cellular network including EDGE, 3G and 4G wireless cellular systems. The wireless network can also be WiFi, Bluetooth®, or any other wireless form of communication that is known.
  • The dental image processing device further includes a display controller 1584, such as a NVIDIA GeForce GTX® or Quadro® graphics adaptor from NVIDIA Corporation of America for interfacing with display 1585, such as a Hewlett Packard HPL2445w® LCD monitor. A general purpose I/O interface 1586 interfaces with a keyboard and/or mouse 1587 as well as a touch screen panel 1588 on or separate from display 1585. General purpose I/O interface also connects to a variety of peripherals 1589 including printers and scanners, such as an OfficeJet® or DeskJet® from Hewlett Packard.
  • A sound controller 1590 is also provided in the dental image processing device, such as a Sound Blaster X-Fi Titanium from Creative, to interface with speakers/microphone 1591, thereby providing sounds and/or music.
  • The general purpose storage controller 1592 connects the storage medium disk 1582 with communication bus 1593, which may be an ISA, EISA, VESA, or PCI bus, or similar, for interconnecting all of the components of the dental image processing device. A description of the general features and functionality of the display 1585, keyboard and/or mouse 1587, as well as the display controller 1584, storage controller 1592, network controller 1583, sound controller 1590, and general purpose I/O interface 1586 is omitted herein for brevity as these features are known.
  • Embodiments of the present disclosure may also be as set forth in the following parentheticals.
  • (1) A method for generating an intermediate position of one or more teeth of a dental arch of a patient, comprising classifying, via processing circuitry, pixels of one or more medical images into classes corresponding to biological structure types, segmenting, via the processing circuitry, the classified pixels of the one or more medical images into biological structures, rendering, via the processing circuitry, a three-dimensional model of the biological structures based on the segmented classified pixels, determining, via the processing circuitry, one or more metrics, based upon the three-dimensional model, describing characteristics of a bone of the biological structures, acquiring, via the processing circuitry, a final position of each of the one or more teeth of the dental arch based upon the three-dimensional model, and generating, via the processing circuitry, the intermediate position of each of the one or more teeth of the patient based upon the one or more metrics and the acquired final position.
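  • By way of non-limiting illustration of (1), the generation of intermediate positions can be pictured as interpolation between a current and an acquired final tooth position, with the stage count limited by how far a tooth may move per stage. In the minimal sketch below, the reduction of a tooth pose to a translation vector, the 0.2 mm per-stage cap, and the helper name are assumptions of the sketch; the disclosed metric-driven planning is richer than a straight line.

      import numpy as np

      def intermediate_positions(start, final, max_step_mm=0.2):
          """Linearly interpolated intermediate tooth positions (a simplification)."""
          start = np.asarray(start, dtype=float)
          final = np.asarray(final, dtype=float)
          # Choose the stage count so that no single stage exceeds the movement cap.
          n_stages = int(np.ceil(np.linalg.norm(final - start) / max_step_mm))
          return [start + (final - start) * k / n_stages for k in range(1, n_stages)]

      # Example: a 1.1 mm translation yields 6 stages, i.e., 5 intermediate positions.
      print(len(intermediate_positions([0.0, 0.0, 0.0], [1.1, 0.0, 0.0])))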
  • (2) The method according to (1), wherein the classifying further comprises training, via the processing circuitry, a classifier on a training database, and classifying, via the processing circuitry, the pixels of the one or more medical images based upon the classifier, wherein the training database includes a corpus of reference medical images, each reference medical image comprising at least one identifiable reference biological structure associated in the training database with at least one corresponding description of the biological structure type.
  • (3) The method according to either (1) or (2), wherein the intermediate position of each of the one or more teeth is determined based upon a position of an aspect of a proximate tooth of the one or more teeth of the dental arch.
  • (4) The method according to any of (1) to (3), wherein the training further comprises training, via the processing circuitry, a first neural network according to a first dataset, training, via the processing circuitry, a second neural network according to a second dataset, the second dataset comprising a plurality of classification predictions of the first neural network, and generating, via the processing circuitry, the training database based upon a plurality of classification predictions of the second neural network.
  • (5) The method according to any of (1) to (4), wherein the second neural network is a fully convolutional neural network.
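  • To illustrate (5): a fully convolutional network contains only convolutional (and pointwise) layers, so it maps an image of any size to an equally sized per-pixel map of class scores, which is what makes it suitable for pixel-based classification. The following minimal sketch assumes PyTorch and illustrative layer sizes; it is not the architecture of the disclosure.

      import torch
      import torch.nn as nn

      # Every layer is convolutional, so spatial dimensions are preserved and
      # the output holds one score per class per pixel.
      fcn = nn.Sequential(
          nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
          nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
          nn.Conv2d(16, 2, kernel_size=1),  # two classes, e.g., hard vs. soft tissue
      )

      ct_slice = torch.randn(1, 1, 128, 128)  # one grayscale slice; any size works
      scores = fcn(ct_slice)                  # shape: (1, 2, 128, 128)
      labels = scores.argmax(dim=1)           # per-pixel class assignment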
  • (6) The method according to any of (1) to (5), wherein one of the one or more metrics is a distance metric, the distance metric being defined as a distance between a surface of a root of one of the one or more teeth of the dental arch and a surface of an alveolar process.
  • (7) The method according to any of (1) to (6), wherein one of the one or more metrics is a density metric, the density metric being defined as a measure of mean intensity of voxels adjacent to a central voxel.
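  • The metrics of (6) and (7) are directly computable once the three-dimensional model and the underlying volume are available. The minimal sketch below assumes the root and alveolar surfaces are sampled as point clouds and uses a 26-voxel neighborhood for the density measure; the point clouds, neighborhood size, and synthetic data are assumptions of the sketch.

      import numpy as np
      from scipy.spatial import cKDTree

      def distance_metric(root_surface, alveolar_surface):
          """Shortest distance from the root surface to the alveolar process surface."""
          dists, _ = cKDTree(alveolar_surface).query(root_surface, k=1)
          return float(dists.min())

      def density_metric(volume, center):
          """Mean intensity of the voxels adjacent to a central voxel."""
          z, y, x = center
          block = volume[max(z - 1, 0):z + 2, max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
          return float(block.sum() - volume[z, y, x]) / (block.size - 1)

      # Examples on synthetic data:
      rng = np.random.default_rng(0)
      print(distance_metric(rng.normal(size=(200, 3)), rng.normal(size=(500, 3)) + 5.0))
      print(density_metric(rng.integers(0, 255, size=(64, 64, 64)), (32, 32, 32)))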
  • (8) A non-transitory computer-readable storage medium storing computer-readable instructions that, when executed by a computer having a processing circuitry, cause the computer to perform a method, the method comprising classifying pixels of one or more medical images into classes corresponding to biological structure types, segmenting the classified pixels of the one or more medical images into biological structures, rendering a three-dimensional model of the biological structures based on the segmented classified pixels, determining one or more metrics, based upon the three-dimensional model, describing characteristics of a bone of the biological structures, acquiring a final position of each of one or more teeth of a dental arch of a patient based upon the three-dimensional model, and generating an intermediate position of each of the one or more teeth of the patient based upon the one or more metrics and the acquired final position.
  • (9) The non-transitory computer-readable storage medium according to (8), wherein the classifying further comprises training a classifier on a training database, and classifying the pixels of the one or more medical images based upon the classifier, wherein the training database includes a corpus of reference medical images, each reference medical image comprising at least one identifiable reference biological structure associated in the training database with at least one corresponding description of the biological structure type.
  • (10) The non-transitory computer-readable storage medium according to either (8) or (9), wherein the intermediate position of each of the one or more teeth is determined based upon a position of an aspect of a proximate tooth of the one or more teeth of the dental arch.
  • (11) The non-transitory computer-readable storage medium according to any of (8) to (10), wherein the training further comprises training a first neural network according to a first dataset, training a second neural network according to a second dataset, the second dataset comprising a plurality of classification predictions of the first neural network, and generating the training database based upon a plurality of classification predictions of the second neural network.
  • (12) The non-transitory computer-readable storage medium according to any of (8) to (11), wherein the second neural network is a fully convolutional neural network.
  • (13) The non-transitory computer-readable storage medium according to any of (8) to (12), wherein one of the one or more metrics is a distance metric, the distance metric being defined as a distance between a surface of a root of one of the one or more teeth of the dental arch and a surface of an alveolar process.
  • (14) The non-transitory computer-readable storage medium according to any of (8) to (13), wherein one of the one or more metrics is a density metric, the density metric being defined as a measure of mean intensity of voxels adjacent to a central voxel.
  • (15) An apparatus for processing of dental images, comprising a processing circuitry configured to classify pixels of one or more medical images into classes corresponding to biological structure types, segment the classified pixels of the one or more medical images into biological structures, render a three-dimensional model of the biological structures based on the segmented classified pixels, determine one or more metrics, based upon the three-dimensional model, describing characteristics of a bone of the biological structures, acquire a final position of each of one or more teeth of a dental arch of a patient based upon the three-dimensional model, and generate an intermediate position of each of the one or more teeth of the patient based upon the one or more metrics and the acquired final position.
  • (16) The apparatus according to (15), wherein the processing circuitry is further configured to train a classifier on a training database, and classify the pixels of the one or more medical images based upon the classifier, wherein the training database includes a corpus of reference medical images, each reference medical image comprising at least one identifiable reference biological structure associated in the training database with at least one corresponding description of the biological structure type.
  • (17) The apparatus according to either (15) or (16), wherein the intermediate position of each of the one or more teeth is determined based upon a position of an aspect of a proximate tooth of the one or more teeth of the dental arch.
  • (18) The apparatus according to any of (15) to (17), wherein the training further comprises training a first neural network according to a first dataset, training a second neural network according to a second dataset, the second dataset comprising a plurality of classification predictions of the first neural network, and generating the training database based upon a plurality of classification predictions of the second neural network.
  • (19) The apparatus according to any of (15) to (18), wherein one of the one or more metrics is a distance metric, the distance metric being defined as a distance between a surface of a root of one of the one or more teeth of the dental arch and a surface of an alveolar process.
  • (20) The apparatus according to any of (15) to (19), wherein one of the one or more metrics is a density metric, the density metric being defined as a measure of mean intensity of voxels adjacent to a central voxel.
  • (21) A method of pixel-based classification of medical images, comprising training a neural network to perform the pixel-based classification of the medical images, the training comprising performing a first training on a classifier using a first training set of medical images, pixels of the first training set of medical images being manually labeled, applying the classifier after the first training to a second training set of medical images to classify pixels of the second training set of medical images, identifying and correcting incorrectly classified pixels of the second training set of medical images, and performing a second training on the classifier after the first training using the manually labeled first training set of medical images and the correctly classified second training set of medical images, and applying the classifier after the second training to the medical images to perform the pixel-based classification, the pixel-based classification of the medical images including assigning pixels of each medical image to a biological structure type.
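  • The two-stage protocol of (21) can be sketched end to end with a small stand-in classifier. In the sketch below, a scikit-learn multilayer perceptron over one synthetic intensity feature per pixel stands in for the fully convolutional network, and ground-truth labels stand in for the manual correction step; both substitutions, and all names and values, are assumptions of the sketch.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)

      def synthetic_pixels(n):
          """Pixels with one intensity feature; hard tissue (class 0) is brighter."""
          labels = rng.integers(0, 2, size=n)
          means = np.where(labels == 0, 0.8, 0.3)
          return rng.normal(loc=means, scale=0.15, size=n).reshape(-1, 1), labels

      # First training: a small manually labeled set.
      X1, y1 = synthetic_pixels(500)
      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
      clf.fit(X1, y1)

      # Apply the first-trained classifier to a second training set.
      X2, y2_true = synthetic_pixels(2000)
      y2_pred = clf.predict(X2)

      # Identify and correct incorrectly classified pixels (here the known labels
      # play the role of the human corrector).
      y2_corrected = np.where(y2_pred == y2_true, y2_pred, y2_true)

      # Second training on the union of the two correctly labeled sets.
      clf.fit(np.vstack([X1, X2]), np.concatenate([y1, y2_corrected]))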
  • (22) The method of (21), wherein the training the neural network further comprises generating a third training set of medical images that includes the manually labeled first training set of medical images and the correctly classified second training set of medical images, allocating each medical image of the third training set of medical images to one of a first subset of the third training set of medical images or a second subset of the third training set of medical images, and performing a third training on the classifier after the second training using the first subset of the third training set of medical images.
  • (23) The method of either (21) or (22), wherein the training the neural network further comprises applying the classifier after the third training to the second subset of the third training set of medical images, identifying images of the second subset of the third training set of medical images that are incorrectly classified by the classifier, the identified images having a pixel classification error rate above a pixel classification threshold, reallocating a number of the identified incorrectly classified images of the second subset of the third training set of medical images to the first subset of the third training set of medical images, and reallocating a number of images, corresponding to the number of images reallocated to the first subset of the third training set of medical images, from the first subset of the third training set of medical images to the second subset of the third training set of medical images, and training the classifier after the third training using the first subset of the third training set of medical images including the reallocated images of the second subset of the third training set of medical images.
  • (24) The method of any one of (21) to (23), wherein a cycle including the applying, the identifying, the reallocating, and the training is performed two or more times.
  • (25) The method of any one of (21) to (24), wherein the training the neural network further comprises applying the classifier after the third training to the second subset of the third training set of medical images, identifying images of the second subset of the third training set of medical images that are incorrectly classified, the identified images having a pixel classification error rate above a pixel classification threshold, reallocating images between the first subset of the third training set of medical images and the second subset of the third training set of medical images based on a comparison of a quantity of the identified incorrectly classified images and an identification threshold, and training the classifier after the third training using the first subset of the third training set of medical images including the reallocated images of the third training set of medical images.
  • (26) The method of any one of (21) to (25), wherein a cycle including the applying, the identifying, the reallocating, and the training is performed two or more times.
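  • The reallocation of (22) to (26) keeps the two subsets of the third training set at fixed sizes while steering the images the classifier currently gets wrong into the training subset; per (24) and (26), the cycle may run two or more times. A minimal sketch over image indices follows; the per-image error rates, the 5% threshold, and the random choice of images swapped back are assumptions of the sketch.

      import numpy as np

      def reallocate(first_idx, second_idx, error_rate, threshold=0.05, seed=0):
          """One reallocation cycle over image indices of the third training set."""
          rng = np.random.default_rng(seed)
          # Held-out images whose pixel classification error rate exceeds the threshold.
          bad = [i for i in second_idx if error_rate[i] > threshold]
          # Swap an equal number of training images the other way so the subset
          # sizes stay constant (assumes len(bad) <= len(first_idx)).
          swap = list(rng.choice(np.array(first_idx), size=len(bad), replace=False))
          first_idx = [i for i in first_idx if i not in swap] + bad
          second_idx = [i for i in second_idx if i not in bad] + swap
          return first_idx, second_idx

      # Example: images 10 and 11 are misclassified and traded into the first subset.
      errors = {i: (0.10 if i in (10, 11) else 0.01) for i in range(20)}
      print(reallocate(list(range(10)), list(range(10, 20)), errors))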
  • (27) The method of any one of (21) to (26), wherein the neural network is a fully convolutional neural network.
  • (28) The method of any one of (21) to (27), wherein the biological structure type is one of a hard tissue or a soft tissue.
  • (29) An apparatus for pixel-based classification of medical images, comprising processing circuitry configured to train a neural network to perform the pixel-based classification of the medical images, the training comprising performing a first training on a classifier using a first training set of medical images, pixels of the first training set of medical images being manually labeled, applying the classifier after the first training to a second training set of medical images to classify pixels of the second training set of medical images, identifying and correcting incorrectly classified pixels of the second training set of medical images, and performing a second training on the classifier after the first training using the manually labeled first training set of medical images and the correctly classified second training set of medical images, and apply the classifier after the second training to the medical images to perform the pixel-based classification, the pixel-based classification of the medical images including assigning pixels of each medical image to a biological structure type.
  • (30) The apparatus of (29), wherein the processing circuitry is further configured to train the neural network by generating a third training set of medical images that includes the manually labeled first training set of medical images and the correctly classified second training set of medical images, allocating each medical image of the third training set of medical images to one of a first subset of the third training set of medical images or a second subset of the third training set of medical images, and performing a third training on the classifier after the second training using the first subset of the third training set of medical images.
  • (31) The apparatus of either of (29) or (30), wherein the processing circuitry is further configured to train the neural network by applying the classifier after the third training to the second subset of the third training set of medical images, identifying images of the second subset of the third training set of medical images that are incorrectly classified by the classifier, the identified images having a pixel classification error rate above a pixel classification threshold, reallocating a number of the identified incorrectly classified images of the second subset of the third training set of medical images to the first subset of the third training set of medical images, and reallocating a number of images, corresponding to the number of images reallocated to the first subset of the third training set of medical images, from the first subset of the third training set of medical images to the second subset of the third training set of medical images, and training the classifier after the third training using the first subset of the third training set of medical images including the reallocated images of the second subset of the third training set of medical images.
  • (32) The apparatus of any one of (29) to (31), wherein a cycle including the applying, the identifying, the reallocating, and the training is performed two or more times.
  • (33) The apparatus of any one of (29) to (32), wherein the processing circuitry is further configured to train the neural network by applying the classifier after the third training to the second subset of the third training set of medical images, identifying images of the second subset of the third training set of medical images that are incorrectly classified, the identified images having a pixel classification error rate above a pixel classification threshold, reallocating images between the first subset of the third training set of medical images and the second subset of the third training set of medical images based on a comparison of a quantity of the identified incorrectly classified images and an identification threshold, and training the classifier after the third training using the first subset of the third training set of medical images including the reallocated images of the third training set of medical images.
  • (34) The apparatus of any one of (29) to (33), wherein a cycle including the applying, the identifying, the reallocating, and the training is performed two or more times.
  • (35) A non-transitory computer-readable storage medium storing computer-readable instructions that, when executed by a computer, cause the computer to perform a method of pixel-based classification of medical images, the method comprising training a neural network to perform the pixel-based classification of the medical images, the training comprising performing a first training on a classifier using a first training set of medical images, pixels of the first training set of medical images being manually labeled, applying the classifier after the first training to a second training set of medical images to classify pixels of the second training set of medical images, identifying and correcting incorrectly classified pixels of the second training set of medical images, and performing a second training on the classifier after the first training using the manually labeled first training set of medical images and the correctly classified second training set of medical images; and applying the classifier after the second training to the medical images to perform the pixel-based classification, the pixel-based classification of the medical images including assigning pixels of each medical image to a biological structure type.
  • (36) The non-transitory computer-readable storage medium of (35), wherein the training the neural network further comprises generating a third training set of medical images that includes the manually labeled first training set of medical images and the correctly classified second training set of medical images, allocating each medical image of the third training set of medical images to one of a first subset of the third training set of medical images or a second subset of the third training set of medical images, and performing a third training on the classifier after the second training using the first subset of the third training set of medical images.
  • (37) The non-transitory computer-readable storage medium of either (35) or (36), wherein the training the neural network further comprises applying the classifier after the third training to the second subset of the third training set of medical images, identifying images of the second subset of the third training set of medical images that are incorrectly classified by the classifier, the identified images having a pixel classification error rate above a pixel classification threshold, reallocating a number of the identified incorrectly classified images of the second subset of the third training set of medical images to the first subset of the third training set of medical images, and reallocating a number of images, corresponding to the number of images reallocated to the first subset of the third training set of medical images, from the first subset of the third training set of medical images to the second subset of the third training set of medical images, and training the classifier after the third training using the first subset of the third training set of medical images including the reallocated images of the second subset of the third training set of medical images.
  • (38) The non-transitory computer-readable storage medium of any one of (35) to (37), wherein a cycle including the applying, the identifying, the reallocating, and the training is performed two or more times.
  • (39) The non-transitory computer-readable storage medium of any one of (35) to (38), wherein the training the neural network further comprises applying the classifier after the third training to the second subset of the third training set of medical images, identifying images of the second subset of the third training set of medical images that are incorrectly classified, the identified images having a pixel classification error rate above a pixel classification threshold, reallocating images between the first subset of the third training set of medical images and the second subset of the third training set of medical images based on a comparison of a quantity of the identified incorrectly classified images and an identification threshold, and training the classifier after the third training using the first subset of the third training set of medical images including the reallocated images of the third training set of medical images.
  • (40) The non-transitory computer-readable storage medium of any one of (35) to (39), wherein a cycle including the applying, the identifying, the reallocating, and the training is performed two or more times.
  • Obviously, numerous modifications and variations are possible in light of the above teachings. It is therefore to be understood that within the scope of the appended claims, the invention may be practiced otherwise than as specifically described herein.
  • Thus, the foregoing discussion discloses and describes merely exemplary embodiments of the present invention. As will be understood by those skilled in the art, the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Numerous modifications and variations of the present invention are possible in light of the above teachings. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting of the scope of the invention, as well as of the claims. The disclosure, including any readily discernible variants of the teachings herein, defines, in part, the scope of the foregoing claim terminology such that no inventive subject matter is dedicated to the public.
  • All publications, patent applications, patents, and other references mentioned herein are incorporated by reference in their entirety. Further, the materials, methods, and examples are illustrative only and are not intended to be limiting, unless otherwise specified.

Claims (20)

1. A method of pixel-based classification of medical images, comprising:
training a neural network to perform the pixel-based classification of the medical images, the training comprising
performing a first training on a classifier using a first training set of medical images, pixels of the first training set of medical images being manually labeled,
applying the classifier after the first training to a second training set of medical images to classify pixels of the second training set of medical images,
identifying and correcting incorrectly classified pixels of the second training set of medical images, and
performing a second training on the classifier after the first training using the manually labeled first training set of medical images and the correctly classified second training set of medical images; and
applying the classifier after the second training to the medical images to perform the pixel-based classification, the pixel-based classification of the medical images including assigning pixels of each medical image to a biological structure type.
2. The method of claim 1, wherein the training the neural network further comprises
generating a third training set of medical images that includes the manually labeled first training set of medical images and the correctly classified second training set of medical images,
allocating each medical image of the third training set of medical images to one of a first subset of the third training set of medical images or a second subset of the third training set of medical images, and
performing a third training on the classifier after the second training using the first subset of the third training set of medical images.
3. The method of claim 2, wherein the training the neural network further comprises
applying the classifier after the third training to the second subset of the third training set of medical images,
identifying images of the second subset of the third training set of medical images that are incorrectly classified by the classifier, the identified images having a pixel classification error rate above a pixel classification threshold,
reallocating a number of the identified incorrectly classified images of the second subset of the third training set of medical images to the first subset of the third training set of medical images, and reallocating a number of images, corresponding to the number of images reallocated to the first subset of the third training set of medical images, from the first subset of the third training set of medical images to the second subset of the third training set of medical images, and
training the classifier after the third training using the first subset of the third training set of medical images including the reallocated images of the second subset of the third training set of medical images.
4. The method of claim 3, wherein a cycle including the applying, the identifying, the reallocating, and the training is performed two or more times.
5. The method of claim 2, wherein the training the neural network further comprises
applying the classifier after the third training to the second subset of the third training set of medical images,
identifying images of the second subset of the third training set of medical images that are incorrectly classified, the identified images having a pixel classification error rate above a pixel classification threshold,
reallocating images between the first subset of the third training set of medical images and the second subset of the third training set of medical images based on a comparison of a quantity of the identified incorrectly classified images and an identification threshold, and
training the classifier after the third training using the first subset of the third training set of medical images including the reallocated images of the third training set of medical images.
6. The method of claim 5, wherein a cycle including the applying, the identifying, the reallocating, and the training is performed two or more times.
7. The method of claim 1, wherein the neural network is a fully convolutional neural network.
8. The method of claim 1, wherein the biological structure type is one of a hard tissue or a soft tissue.
9. An apparatus for pixel-based classification of medical images, comprising:
processing circuitry configured to
train a neural network to perform the pixel-based classification of the medical images, the training comprising
performing a first training on a classifier using a first training set of medical images, pixels of the first training set of medical images being manually labeled,
applying the classifier after the first training to a second training set of medical images to classify pixels of the second training set of medical images,
identifying and correcting incorrectly classified pixels of the second training set of medical images, and
performing a second training on the classifier after the first training using the manually labeled first training set of medical images and the correctly classified second training set of medical images, and
apply the classifier after the second training to the medical images to perform the pixel-based classification, the pixel-based classification of the medical images including assigning pixels of each medical image to a biological structure type.
10. The apparatus of claim 9, wherein the processing circuitry is further configured to train the neural network by
generating a third training set of medical images that includes the manually labeled first training set of medical images and the correctly classified second training set of medical images,
allocating each medical image of the third training set of medical images to one of a first subset of the third training set of medical images or a second subset of the third training set of medical images, and
performing a third training on the classifier after the second training using the first subset of the third training set of medical images.
11. The apparatus of claim 10, wherein the processing circuitry is further configured to train the neural network by
applying the classifier after the third training to the second subset of the third training set of medical images,
identifying images of the second subset of the third training set of medical images that are incorrectly classified by the classifier, the identified images having a pixel classification error rate above a pixel classification threshold,
reallocating a number of the identified incorrectly classified images of the second subset of the third training set of medical images to the first subset of the third training set of medical images, and reallocating a number of images, corresponding to the number of images reallocated to the first subset of the third training set of medical images, from the first subset of the third training set of medical images to the second subset of the third training set of medical images, and
training the classifier after the third training using the first subset of the third training set of medical images including the reallocated images of the second subset of the third training set of medical images.
12. The apparatus of claim 11, wherein a cycle including the applying, the identifying, the reallocating, and the training is performed two or more times.
13. The apparatus of claim 10, wherein the processing circuitry is further configured to train the neural network by
applying the classifier after the third training to the second subset of the third training set of medical images,
identifying images of the second subset of the third training set of medical images that are incorrectly classified, the identified images having a pixel classification error rate above a pixel classification threshold,
reallocating images between the first subset of the third training set of medical images and the second subset of the third training set of medical images based on a comparison of a quantity of the identified incorrectly classified images and an identification threshold, and
training the classifier after the third training using the first subset of the third training set of medical images including the reallocated images of the third training set of medical images.
14. The apparatus of claim 13, wherein a cycle including the applying, the identifying, the reallocating, and the training is performed two or more times.
15. A non-transitory computer-readable storage medium storing computer-readable instructions that, when executed by a computer, cause the computer to perform a method of pixel-based classification of medical images, the method comprising:
training a neural network to perform the pixel-based classification of the medical images, the training comprising
performing a first training on a classifier using a first training set of medical images, pixels of the first training set of medical images being manually labeled,
applying the classifier after the first training to a second training set of medical images to classify pixels of the second training set of medical images,
identifying and correcting incorrectly classified pixels of the second training set of medical images, and
performing a second training on the classifier after the first training using the manually labeled first training set of medical images and the correctly classified second training set of medical images; and
applying the classifier after the second training to the medical images to perform the pixel-based classification, the pixel-based classification of the medical images including assigning pixels of each medical image to a biological structure type.
16. The non-transitory computer-readable storage medium of claim 15, wherein the training the neural network further comprises
generating a third training set of medical images that includes the manually labeled first training set of medical images and the correctly classified second training set of medical images,
allocating each medical image of the third training set of medical images to one of a first subset of the third training set of medical images or a second subset of the third training set of medical images, and
performing a third training on the classifier after the second training using the first subset of the third training set of medical images.
17. The non-transitory computer-readable storage medium of claim 16, wherein the training the neural network further comprises
applying the classifier after the third training to the second subset of the third training set of medical images,
identifying images of the second subset of the third training set of medical images that are incorrectly classified by the classifier, the identified images having a pixel classification error rate above a pixel classification threshold,
reallocating a number of the identified incorrectly classified images of the second subset of the third training set of medical images to the first subset of the third training set of medical images, and reallocating a number of images, corresponding to the number of images reallocated to the first subset of the third training set of medical images, from the first subset of the third training set of medical images to the second subset of the third training set of medical images, and
training the classifier after the third training using the first subset of the third training set of medical images including the reallocated images of the second subset of the third training set of medical images.
18. The non-transitory computer-readable storage medium of claim 17, wherein a cycle including the applying, the identifying, the reallocating, and the training is performed two or more times.
19. The non-transitory computer-readable storage medium of claim 16, wherein the training the neural network further comprises
applying the classifier after the third training to the second subset of the third training set of medical images,
identifying images of the second subset of the third training set of medical images that are incorrectly classified, the identified images having a pixel classification error rate above a pixel classification threshold,
reallocating images between the first subset of the third training set of medical images and the second subset of the third training set of medical images based on a comparison of a quantity of the identified incorrectly classified images and an identification threshold, and
training the classifier after the third training using the first subset of the third training set of medical images including the reallocated images of the third training set of medical images.
20. The non-transitory computer-readable storage medium of claim 19, wherein a cycle including the applying, the identifying, the reallocating, and the training is performed two or more times.
US17/010,079 2017-11-16 2020-09-02 Dental image processing protocol for dental aligners Abandoned US20200402647A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/010,079 US20200402647A1 (en) 2017-11-16 2020-09-02 Dental image processing protocol for dental aligners

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
EA201700561A EA201700561A1 (en) 2017-11-16 2017-11-16 METHOD AND SYSTEM OF TEETH ALIGNMENT ON THE BASIS OF MODELING THE CROWN AND ROOTS MOVEMENT
EA201700561 EA043155B1 (en) 2017-11-16 METHOD AND SYSTEM FOR ALIGNING TEETH ON THE BASIS OF MODELING THE MOVEMENT OF CROWN AND ROOTS
US16/017,687 US10748651B2 (en) 2017-11-16 2018-06-25 Method and system of teeth alignment based on simulating of crown and root movement
US16/929,588 US20200350059A1 (en) 2017-11-16 2020-07-15 Method and system of teeth alignment based on simulating of crown and root movement
US17/010,079 US20200402647A1 (en) 2017-11-16 2020-09-02 Dental image processing protocol for dental aligners

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/929,588 Division US20200350059A1 (en) 2017-11-16 2020-07-15 Method and system of teeth alignment based on simulating of crown and root movement

Publications (1)

Publication Number Publication Date
US20200402647A1 2020-12-24

Family

ID=66433600

Family Applications (3)

Application Number Title Priority Date Filing Date
US16/017,687 Active US10748651B2 (en) 2017-11-16 2018-06-25 Method and system of teeth alignment based on simulating of crown and root movement
US16/929,588 Abandoned US20200350059A1 (en) 2017-11-16 2020-07-15 Method and system of teeth alignment based on simulating of crown and root movement
US17/010,079 Abandoned US20200402647A1 (en) 2017-11-16 2020-09-02 Dental image processing protocol for dental aligners

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US16/017,687 Active US10748651B2 (en) 2017-11-16 2018-06-25 Method and system of teeth alignment based on simulating of crown and root movement
US16/929,588 Abandoned US20200350059A1 (en) 2017-11-16 2020-07-15 Method and system of teeth alignment based on simulating of crown and root movement

Country Status (3)

Country Link
US (3) US10748651B2 (en)
EA (1) EA201700561A1 (en)
WO (1) WO2019098887A1 (en)


Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10916053B1 (en) 2019-11-26 2021-02-09 Sdc U.S. Smilepay Spv Systems and methods for constructing a three-dimensional model from two-dimensional images
US11403813B2 (en) * 2019-11-26 2022-08-02 Sdc U.S. Smilepay Spv Systems and methods for constructing a three-dimensional model from two-dimensional images
US11270523B2 (en) * 2017-11-29 2022-03-08 Sdc U.S. Smilepay Spv Systems and methods for constructing a three-dimensional model from two-dimensional images
EP3620130A1 (en) * 2018-09-04 2020-03-11 Promaton Holding B.V. Automated orthodontic treatment planning using deep learning
US11645746B2 (en) * 2018-11-28 2023-05-09 Orca Dental AI Ltd. Dental image segmentation and registration with machine learning
US11030801B2 (en) 2019-05-17 2021-06-08 Standard Cyborg, Inc. Three-dimensional modeling toolkit
US20200383758A1 (en) * 2019-06-04 2020-12-10 SmileDirectClub LLC Systems and methods for analyzing dental impressions
CN110287965A (en) * 2019-06-18 2019-09-27 成都玻尔兹曼智贝科技有限公司 The method that multilayer neural network is automatically separated root of the tooth and alveolar bone in CBCT image
TWI712396B (en) * 2020-01-16 2020-12-11 中國醫藥大學 Method and system of repairing oral defect model
US11488371B2 (en) * 2020-12-17 2022-11-01 Concat Systems, Inc. Machine learning artificial intelligence system for producing 360 virtual representation of an object
GB202107492D0 (en) * 2021-05-26 2021-07-07 Vitaware Ltd Image processing method
CN113244001B (en) * 2021-06-02 2021-12-31 微适美科技(北京)有限公司 Invisible correction and consultation management system based on image processing
US11423697B1 (en) * 2021-08-12 2022-08-23 Sdc U.S. Smilepay Spv Machine learning architecture for imaging protocol detector

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030194124A1 (en) * 2002-04-12 2003-10-16 The University Of Chicago Massive training artificial neural network (MTANN) for detecting abnormalities in medical images
US20160174902A1 (en) * 2013-10-17 2016-06-23 Siemens Aktiengesellschaft Method and System for Anatomical Object Detection Using Marginal Space Deep Neural Networks

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6632089B2 (en) * 1999-11-30 2003-10-14 Orametrix, Inc. Orthodontic treatment planning with user-specified simulation of tooth movement
FR2922753B1 (en) 2007-10-31 2010-12-10 Patrick Curiel METHOD FOR PRODUCING INDIVIDUALIZED ORTHODONTIC DEVICE
DK3583910T3 (en) 2010-02-25 2022-09-05 3Shape As DYNAMIC VIRTUAL ARTICULATOR
WO2012129160A2 (en) * 2011-03-21 2012-09-27 Carestream Health, Inc. A method for tooth surface classification
US20130051516A1 (en) 2011-08-31 2013-02-28 Carestream Health, Inc. Noise suppression for low x-ray dose cone-beam image reconstruction
RU132340U1 (en) 2013-04-22 2013-09-20 Леонид Семёнович Персин ORTHODONTIC APPARATUS
US9972083B2 (en) 2013-04-22 2018-05-15 Carestream Dental Technology Topco Limited Detection of tooth fractures in CBCT image
RU2559762C1 (en) 2014-05-22 2015-08-10 Юлия Сергеевна Липова Method for transversal expansion of upper dental arch
US10169871B2 (en) * 2016-01-21 2019-01-01 Elekta, Inc. Systems and methods for segmentation of intra-patient medical images


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11750794B2 (en) 2015-03-24 2023-09-05 Augmedics Ltd. Combining video-based and optic-based augmented reality in a near eye display
US11715210B2 (en) 2016-10-14 2023-08-01 Axial Medical Printing Limited Method for generating a 3D physical model of a patient specific anatomic feature from 2D medical images
US11497557B2 (en) 2016-10-14 2022-11-15 Axial Medical Printing Limited Method for generating a 3D physical model of a patient specific anatomic feature from 2D medical images
US11551420B2 (en) 2016-10-14 2023-01-10 Axial Medical Printing Limited Method for generating a 3D physical model of a patient specific anatomic feature from 2D medical images
US11922631B2 (en) 2016-10-14 2024-03-05 Axial Medical Printing Limited Method for generating a 3D physical model of a patient specific anatomic feature from 2D medical images
US11443423B2 (en) * 2018-10-30 2022-09-13 Dgnct Llc System and method for constructing elements of interest (EoI)-focused panoramas of an oral complex
US11766296B2 (en) 2018-11-26 2023-09-26 Augmedics Ltd. Tracking system for image-guided surgery
US11801115B2 (en) 2019-12-22 2023-10-31 Augmedics Ltd. Mirroring in image guided surgery
US11626212B2 (en) 2021-02-11 2023-04-11 Axial Medical Printing Limited Systems and methods for automated segmentation of patient specific anatomies for pathology specific measurements
WO2022172201A1 (en) * 2021-02-11 2022-08-18 Axial Medical Printing Limited Systems and methods for automated segmentation of patient specific anatomies for pathology specific measurements
US11869670B2 (en) 2021-02-11 2024-01-09 Axial Medical Printing Limited Systems and methods for automated segmentation of patient specific anatomies for pathology specific measurements
US11896445B2 (en) 2021-07-07 2024-02-13 Augmedics Ltd. Iliac pin and adapter
CN113554607A (en) * 2021-07-15 2021-10-26 四川大学 Tooth body detection model, generation method and tooth body segmentation method

Also Published As

Publication number Publication date
EA201700561A1 (en) 2019-05-31
WO2019098887A1 (en) 2019-05-23
US20200350059A1 (en) 2020-11-05
US10748651B2 (en) 2020-08-18
US20190148005A1 (en) 2019-05-16


Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: 3D SMILE USA, INC., PENNSYLVANIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DOMMAR LLC;REEL/FRAME:054476/0496

Effective date: 20200917

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION