WO2021236918A1 - Systems and methods for generating orthodontic templates

Info

Publication number
WO2021236918A1
Authority
WO
WIPO (PCT)
Prior art keywords
dental
images
dental images
structured
orthodontic
Application number
PCT/US2021/033381
Other languages
French (fr)
Inventor
Arel CORDERO
Caglayan DICLE
Melih MOTRO
Original Assignee
Phimentum Corp.
Application filed by Phimentum Corp.
Publication of WO2021236918A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C7/00 Orthodontics, i.e. obtaining or maintaining the desired position of teeth, e.g. by straightening, evening, regulating, separating, or by correcting malocclusions
    • A61C7/002 Orthodontic computer assisted systems
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Definitions

  • Embodiments in accordance with this disclosure provide for automatic image templating which takes an unstructured set of images and automatically edits and/or arranges the images into a desired image template, or instructions to create a template, with little or no effort on the part of a user.
  • a system may include an imaging device and a controller.
  • the imaging device is operative to acquire one or more dental images of a patient.
  • the controller is operative to receive the one or more dental images, and to generate an instruction data set for producing an orthodontic template based at least in part on the one or more dental images.
  • the controller is further operative to generate the instruction data set by tagging each of the one or more images with a dental image type label.
  • the dental image type labels correspond to at least one dental image type selected from the dental image types consisting of: Profile; Profile-Smile; Frontal; Frontal- Smile; Upper-Occlusal; Lower-Occlusal; Intraoral-Right; Intraoral-Frontal; Intraoral-Left; Lateral Ceph; PA Ceph; Panoramic X-Ray; or Other.
  • the controller is further operative to generate the instruction data set by determining a cropping shape for each of the one or more dental images.
  • a system may include an imaging device and a controller.
  • the imaging device is operative to acquire one or more dental images of a patient.
  • the controller is operative to receive the one or more dental images; and to generate an orthodontic template based at least in part on the one or more dental images.
  • a system may include an imaging device and a controller. The imaging device is operative to acquire one or more dental images of a patient.
  • the controller is operative to receive the one or more dental images; tag each of the one or more dental images with a dental image type label via a first convolutional neural network; crop each of the one or more tagged dental images via a second convolutional neural network; and generate an orthodontic template based at least in part on the one or more cropped and tagged dental images.
  • a method may include receiving one or more dental images of a patient; tagging each of the one or more dental images with a dental image type; determining a cropping shape for each of the one or more tagged dental images based at least in part on the tagging; and generating an instruction data set for producing an orthodontic template based at least in part on the one or more tagged dental images and the cropping shapes.
  • a method may include receiving one or more dental images of a patient; tagging each of the one or more dental images with a dental image type; cropping each of the one or more tagged dental images; and generating an orthodontic template based at least in part on the one or more cropped and tagged dental images.
  • the method may further include pairing physical materials for use with the generated orthodontic template.
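The method bullets above describe a receive → tag → crop → arrange flow. The following is a minimal, runnable Python sketch of that flow; the stub helpers and the fixed intraoral layout are illustrative assumptions, not the models or layouts actually claimed.

```python
# Illustrative sketch of the disclosed method's flow: tag each dental image
# with a dental image type, crop it, then arrange the results into a
# template. All three helpers are hypothetical stubs.
from typing import Dict, List, Tuple

def tag_image(image: Dict) -> str:
    """Assign a dental image type label (stub: echoes a precomputed label)."""
    return image["label"]

def crop_image(image: Dict, label: str) -> Dict:
    """Crop an image according to its label (stub: records the intent)."""
    return {**image, "cropped_for": label}

def assemble_template(items: List[Tuple[Dict, str]]) -> List[Tuple[str, Dict]]:
    """Arrange images left to right by label, per the layout described herein."""
    order = ["Intraoral-Right", "Intraoral-Frontal", "Intraoral-Left"]
    by_label = {label: img for img, label in items}
    return [(label, by_label[label]) for label in order if label in by_label]

images = [{"label": "Intraoral-Left"}, {"label": "Intraoral-Right"},
          {"label": "Intraoral-Frontal"}]
tagged = [(img, tag_image(img)) for img in images]
cropped = [(crop_image(img, lbl), lbl) for img, lbl in tagged]
print([label for label, _ in assemble_template(cropped)])
# -> ['Intraoral-Right', 'Intraoral-Frontal', 'Intraoral-Left']
```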
  • a system may include at least one processor and a memory device. The memory device stores a set of instructions that adapts the at least one processor to: receive one or more dental images; and generate an instruction data set for producing an orthodontic template based at least in part on the one or more dental images.
  • a system may include at least one processor and a memory device.
  • the memory device stores a set of instructions that adapts the at least one processor to: receive one or more dental images; and generate an orthodontic template based at least in part on the one or more dental images.
  • a system includes an imaging device and a controller.
  • the imaging device is structured to acquire one or more dental images of a patient.
  • the controller is structured to interpret the one or more dental images and generate an instruction data set for producing an orthodontic template based at least in part on the one or more dental images.
  • a system includes an imaging device and a controller.
  • the imaging device is structured to acquire one or more dental images of a patient.
  • a method includes interpreting a plurality of dental images of a patient and tagging each of the plurality of dental images with a dental image type. The method further includes determining at least one of a cropping shape or a transformation for each of the plurality of dental images, and generating an instruction data set for producing an orthodontic template based at least in part on the plurality of tagged dental images and at least one of the cropping shapes or transformations.
  • an apparatus includes a dental image interpretation circuit, a tagging circuit, at least one of a cropping circuit or a transformation circuit, an orthodontic data generation circuit, and an orthodontic data provisioning circuit.
  • the dental image interpretation circuit is structured to interpret one or more dental images of a patient.
  • the tagging circuit is structured to tag each of the one or more dental images with a dental image type.
  • the cropping circuit is structured to determine a cropping shape value for each of the one or more dental images.
  • the transformation circuit is structured to determine a transformation value for each of the one or more dental images.
  • the orthodontic data generation circuit is structured to generate, based at least in part on the one or more tagged dental images and the corresponding one or more cropping shape values, orthodontic data that defines, in part, an orthodontic template.
  • the orthodontic data provisioning circuit is structured to transmit the orthodontic data.
  • Fig.1 depicts a diagram of a system for generating orthodontic templates in accordance with an embodiment of this disclosure
  • Fig.2 depicts a block diagram of a machine learning model of the system of Fig.1, in accordance with an embodiment of this disclosure
  • Fig.3 depicts a diagram of an orthodontic template, in accordance with an embodiment of this disclosure
  • Fig.4 depicts another block diagram of the machine learning model of the system of Fig.1, in accordance with an embodiment of this disclosure
  • Fig.5 depicts a detection box and reference points overlaid on a dental image acquired by the system of Fig.1, in accordance with an embodiment of this disclosure
  • Non-limiting examples of image templates include electronic files and/or physical objects that present one or more related images in a desired format.
  • an orthodontic template may present dental images in a desired arrangement in which certain types of images have a particular orientation, location, cropping, and/or other desired feature.
  • An image’s location in a template may be based on one or more relationships to other images included in the template.
  • an orthodontic template may locate, from left to right, an Intraoral-Right image next to an Intraoral-Frontal image next to an Intraoral-Left image.
  • the controller 14 may be operative to generate an orthodontic template 18 (Fig.2) and/or instructions 20 (Fig.2) for generating an orthodontic template.
  • the imaging device 12 may be a single device, while in other embodiments, the imaging device 12 may be multiple devices, e.g., separate x-ray and optical cameras.
  • the imaging device 12 may include an x-ray device for acquiring x-ray images and one or more handheld cameras for acquiring optical (human visible spectrum) images.
  • As shown in Figs.1 and 2, the imaging device 12 may take x-ray and/or optical, i.e., in the human visible spectrum, images 16 of a patient.
  • the images 16 may be two- dimensional (2D) and/or three-dimensional (3D).
  • the images 16 may be digitized and transmitted electronically to the controller 14, i.e., delivered to the controller 14 by any electronic means possible, e.g., network transmission, scanner, USB and/or other memory device such as a CD, DVD, etc.
  • additional information, e.g., patient medical history, may also be provided to the controller 14 along with the images 16.
  • Non-limiting examples of patient medical history include name, age, gender, records of tooth extractions or implants, diagnosis information, and/or other types of medical information.
  • such information may be included on a generated template for reference use by a medical professional, e.g., an orthodontist, when using the template to diagnose a patient.
  • a patient’s medical history may be used by one or more models, as described herein, e.g., knowledge of an extracted tooth could inform the disambiguation of the left and right sides of the mouth.
  • the controller 14 may be located apart from the imaging device 12 such that the images 16 are transmitted over a network 22.
  • the imaging device 12 and controller 14 may be located in the same room, building and/or compound.
  • the imaging device 12 and controller 14 may be in separate rooms in the same building, in different buildings and/or in different geographic regions.
  • the network 22 may include an intranet and/or the Internet.
  • While Fig.1 depicts the imaging device 12 and the controller 14 in communication over the network 22, it will be understood that, in embodiments, the controller 14 may be incorporated into the imaging device 12 or placed in electronic communication with the imaging device 12 via a direct connection, e.g., serial, USB, etc.
  • the controller 14 may include at least one processor and/or a memory device.
  • the controller 14 may be a remote server connected to the network 22.
  • the controller 14 may transmit the template 18 and/or the instructions 20 to the dental professional via the network 22.
  • a dental professional may acquire an unordered and/or uncropped set of images 16 from a patient and desire to have them arranged in the orthodontic template 18.
  • the dental professional may transmit the images 16 to the controller 14.
  • the images 16 may be provided to the controller 14 via a web interface, e.g., the user may drag and drop the images 16 onto a page for uploading to the controller 14.
  • the imaging device 12 may transmit the images 16 to the controller 14 via a non-web-based interface, e.g., an application programming interface (API).
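As a concrete illustration of such an interface, the sketch below uploads images to a hypothetical templating endpoint; the URL, form fields, and response shape are assumptions, not part of the disclosure.

```python
# Hypothetical client upload to a templating-service API; the endpoint URL,
# field names, and response schema are illustrative assumptions only.
import requests

paths = ["frontal.jpg", "intraoral_left.jpg", "pano.png"]
files = [("images", open(p, "rb")) for p in paths]
resp = requests.post("https://templating.example.com/api/v1/templates",
                     files=files, data={"template": "standard"}, timeout=60)
resp.raise_for_status()
instructions = resp.json()  # e.g., per-image labels, crops, and placements
```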
  • an algorithm/model 24 executing on the controller 14 may take as input the set of images 16 and output information/instructions 20 for editing and/or positioning the images 16 into any desired template, e.g., template 18.
  • the information 20 may include a label (also referred to herein as a “dental image type label”) for each image in the set 16.
  • the dental image types may include “Profile” 310; “Profile-Smile” 312; “Frontal” 314; “Frontal-Smile” 316; “Upper-Occlusal” 318; “Lower-Occlusal” 320; “Intraoral-Right” 322; “Intraoral-Frontal” 324; “Intraoral-Left” 326; “PA Ceph” 328; “Lateral Ceph” 330; and/or “Panoramic X-Ray” 332.
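For illustration only, these dental image types could be represented as an enumeration; the enum itself is an assumption about representation (the "Other" type from the summary is included, and reference numerals are omitted).

```python
from enum import Enum

# Dental image type labels enumerated in the disclosure, expressed as a
# Python Enum for illustration.
class DentalImageType(Enum):
    PROFILE = "Profile"
    PROFILE_SMILE = "Profile-Smile"
    FRONTAL = "Frontal"
    FRONTAL_SMILE = "Frontal-Smile"
    UPPER_OCCLUSAL = "Upper-Occlusal"
    LOWER_OCCLUSAL = "Lower-Occlusal"
    INTRAORAL_RIGHT = "Intraoral-Right"
    INTRAORAL_FRONTAL = "Intraoral-Frontal"
    INTRAORAL_LEFT = "Intraoral-Left"
    PA_CEPH = "PA Ceph"
    LATERAL_CEPH = "Lateral Ceph"
    PANORAMIC_XRAY = "Panoramic X-Ray"
    OTHER = "Other"
```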
  • the information 20 may also include the detection of relevant parts/reference points 26 (Fig.5) of an image 16 (such as the location of the face, mouth, or other reference points) that can be used to best crop the image 16.
  • reference points may generally be defined arbitrarily, provided they are defined consistently across any given set of data. For example, reference points may be defined as the four (4) corners of a rectangle used to crop an image.
  • reference points may also be defined on, within, or otherwise based at least in part on anatomical features, such as the mouth or teeth (or structures therein), which can be reliably identified and/or labeled.
  • Non-limiting examples of such reliable points include the point between the central incisors at the line of occlusion and the point of the first molar at the line of occlusion.
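As a sketch of how a crop might be anchored on such reliable points, the following derives a rectangle centered between two detected points; the padding ratios and coordinates are illustrative assumptions, not values from the disclosure.

```python
# Derive a crop rectangle from two reference points, e.g., the incisor point
# and the first-molar point at the line of occlusion. Padding is illustrative.
def crop_box_from_points(p_incisor, p_molar, pad=0.6):
    (x1, y1), (x2, y2) = p_incisor, p_molar
    span = max(abs(x2 - x1), abs(y2 - y1), 1.0)   # characteristic mouth scale
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0     # midpoint between the points
    half_w, half_h = span * (1.0 + pad), span * (0.5 + pad)
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)  # l, t, r, b

print(crop_box_from_points((420.0, 310.0), (620.0, 330.0)))
```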
  • the model 24 may employ a machine learning approach to determine the label for each image 16 and the information 20 for editing and cropping the image 16.
  • some embodiments may include one or more artificial intelligence/machine learning models 28 and/or 30 in which a first machine learning model 28 may be trained to determine and assign a label to an image 16, and the second machine learning model 30 may be trained to determine a cropping shape for the image 16.
  • the second learning model 30 may also determine one or more transformations, e.g., rotations, reflections, scaling, etc. for the image 16.
  • the machine learning models 28 and/or 30 may be and/or include one or more members of a family of functions known as artificial neural networks for classification and regression tasks, e.g., convolutional neural network architectures pre-trained on a large set of images labeled for different tasks.
  • the models 28 and/or 30 may then be fine-tuned using the images and ground-truth labels and annotations, e.g., supervised learning.
  • the models 28 and/or 30 may be trained to identify reference points, as described herein, and/or to predict a label, predict a cropping, and/or predict a transformation without use of reference points.
  • some embodiments of the models may directly rotate by an angle θ (Fig.5), scale, and/or resize a dental image.
  • reference points may be defined as data which is annotated and/or later predicted to achieve a desired image transformation.
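A minimal PyTorch-style sketch of the two-model arrangement follows; the tiny networks and output dimensions are stand-in assumptions for the pre-trained convolutional architectures the disclosure contemplates (13 type logits for model 28, four corner points for model 30).

```python
import torch
import torch.nn as nn

# Illustrative two-model arrangement: model 28 classifies the dental image
# type; model 30 regresses four crop-corner reference points.
def backbone(out_dim: int) -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, out_dim),
    )

tagger = backbone(13)   # 13 dental image type logits (model 28)
cropper = backbone(8)   # four (x, y) corner predictions (model 30)

x = torch.randn(1, 3, 224, 224)      # one dental image
label_idx = tagger(x).argmax(dim=1)  # predicted type index
corners = cropper(x).view(1, 4, 2)   # predicted reference points
```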
  • the task of determining the label of an image 16 may be treated as a classification problem.
  • the first machine learning model 28 may be trained using images that have ground-truth labels assigned to them. In such embodiments, the first machine learning model 28 learns to classify/label new images correctly, which in turn, provides for the images 16 to be correctly placed for any desired template 18.
  • a training data set may be created by grouping a large set of images into classes by their image label, e.g., “Intraoral-Frontal”, and then associating each image and/or group with one or more desired transformations, e.g., rotations, reflections, croppings, etc.
  • the grouped images may then be annotated with data describing how each image was transformed in a way understandable by a machine, e.g., JavaScript Object Notation (JSON) 600 (Fig.6) and/or XML files.
  • Embodiments of such training data may also include annotations identifying reference points and/or other values corresponding to the transformations.
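An illustrative ground-truth annotation for one training image might look like the following; the field names and values are hypothetical, chosen to mirror the labels, croppings, and reference points described above.

```python
import json

# Hypothetical ground-truth annotation for one training image.
annotation = {
    "file": "img_0042.jpg",
    "label": "Intraoral-Left",
    "crop_corners": [[412, 288], [930, 288], [930, 640], [412, 640]],
    "reference_points": {"L1": [455, 470], "L6": [801, 452]},
    "transform": {"rotate_deg": -2.5, "reflect": False, "scale": 1.0},
}
print(json.dumps(annotation, indent=2))
```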
  • the task of determining the best (or otherwise appropriate) edit and/or cropping (cropping shape) of an image 16 may be treated as a combined detection and regression problem.
  • editing of the image 16 may include scaling, translating, rotating by an amount θ (Fig.5), and/or reflecting the image.
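As a sketch of such editing, the following applies a rotation, an optional reflection, and a crop with Pillow; the file name and numeric values are illustrative assumptions.

```python
from PIL import Image, ImageOps

# Apply a predicted edit: rotate by theta, optionally reflect, then crop.
def apply_edit(path, theta_deg, reflect, crop_box):
    img = Image.open(path)
    img = img.rotate(theta_deg, expand=True)  # rotate by theta
    if reflect:
        img = ImageOps.mirror(img)            # reflect left/right
    return img.crop(crop_box)                 # (left, top, right, bottom)

edited = apply_edit("intraoral_left.jpg", theta_deg=-2.5, reflect=False,
                    crop_box=(412, 288, 930, 640))
```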
  • the second machine learning model 30 may be trained to identify reference points 26 (Fig.5) for deriving/adjusting a cropping of the image 16, and to generate a cropping window (which may be based on a detection window 32) anchored around the detected reference points 26.
  • the second machine learning model 30 may also be trained using images that have ground-truth labels assigned to them, to include any ground-truth annotations needed to determine the best (or otherwise appropriate) edit and crop.
  • For example, “Intraoral-Left” images that have ground-truth labels assigned to them might have an annotation for the region to crop around the mouth, as well as two reference points for the lower incisor edge (L1) and the lower 1st molar occlusal (L6).
  • the second machine learning model 30 may learn to detect these annotations in new “Intraoral-Left” images. The predicted annotations can then be used to determine the optimal edit and/or crop.
  • While the depicted model 24 includes two machine learning models 28 and 30, it will be understood that other embodiments may employ a single machine learning model that performs the functions of both models 28 and 30. Further, other embodiments may employ three (3) or more machine learning models.
  • In embodiments, physical materials, e.g., wires, braces, molds, retainers, oral cement, rubber bands, etc., may be paired with the information 20 and/or the template 18.
  • the paired materials may be shipped to a user, e.g., dental professional, of the information 20 and/or template 18.
  • While Fig.2 depicts a single template 18 with a particular layout, embodiments of the disclosure may be configured to output any number of different templates, e.g., templates 34 and 36 as shown in Fig.7, which may be selected by the user/medical professional submitting the images 16 to the controller 14.
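A template with such a layout could be assembled by pasting the edited images into fixed slots, as in the sketch below; the slot coordinates, tile size, and canvas size are illustrative assumptions rather than the layouts of templates 18, 34, or 36.

```python
from PIL import Image

# Lay out edited images on one canvas at fixed, labeled slot positions.
SLOTS = {"Intraoral-Right": (10, 10), "Intraoral-Frontal": (410, 10),
         "Intraoral-Left": (810, 10)}

def assemble(labeled_images, canvas_size=(1210, 310)):
    canvas = Image.new("RGB", canvas_size, "white")
    for label, img in labeled_images:
        canvas.paste(img.resize((390, 290)), SLOTS[label])
    return canvas
```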
  • Shown in Fig.6 is an example of instructions 600 for assembling/laying out a template.
  • the instructions 600 may be in machine-readable format, e.g., JSON, or in human-readable form, e.g., a set of step-by-step instructions for cropping and labeling the raw dental images.
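A machine-readable instruction set akin to instructions 600 might resemble the following; this schema is hypothetical, mirroring the labels, crops, and placements described herein rather than the actual format shown in Fig.6.

```python
import json

# Hypothetical machine-readable instruction data set for laying out a template.
instructions = {
    "template": "standard",
    "images": [
        {"file": "IMG_001.jpg", "label": "Intraoral-Right",
         "crop": [412, 288, 930, 640], "rotate_deg": 0.0, "slot": [10, 10]},
        {"file": "IMG_002.jpg", "label": "Intraoral-Frontal",
         "crop": [380, 300, 900, 650], "rotate_deg": -1.5, "slot": [410, 10]},
    ],
}
print(json.dumps(instructions, indent=2))
```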
  • Referring to Fig.8, an apparatus 800 for generating orthodontic data 810 is shown.
  • the apparatus 800 may form part of the controller 14 (Fig.1) and/or other type of computing device described herein.
  • the apparatus 800 includes a dental image interpretation circuit 812 structured to interpret one or more dental images 16 of a patient.
  • the apparatus 800 further includes a tagging circuit 814 structured to tag each of the one or more dental images 16 with a dental image type, as described herein, to generate tagged dental images 816.
  • the apparatus 800 further includes a cropping circuit 818 structured to determine a cropping shape value 820 for each of the one or more dental images 16 and/or tagged dental images 816.
  • the apparatus 800 may include a transformation circuit 817 structured to determine one or more transformation values 819, e.g., computer-readable instructions that specify rotations, reflections, scaling, etc. for the one or more dental images 16.
  • the apparatus 800 further includes an orthodontic data generation circuit 822 structured to generate, based at least in part on the one or more tagged dental images 816 and the corresponding cropping shape values 820, the orthodontic data 810.
  • the orthodontic data 810 may define, in part, an orthodontic template, as described herein.
  • the orthodontic data 810 may define an instruction data set and/or an orthodontic template.
  • the apparatus 800 may further include an orthodontic data provisioning circuit 824 structured to transmit the orthodontic data 810, e.g., to an orthodontic practitioner.
  • the tagging circuit 814 may include a machine learning model, e.g., 28 (Fig.4) structured to identify the dental image type for each of the one or more dental images 16.
  • the cropping circuit 818 may include a machine learning model, e.g., 30 (Fig.4) structured to identify the one or more cropping shape values.
  • the apparatus 800 may further include a pairing circuit 826 structured to determine one or more physical materials for use with the orthodontic template.
  • the orthodontic data generation circuit 822 may be further structured to configure the orthodontic data 810 to indicate the one or more physical materials for use with the orthodontic template defined in part by the orthodontic data 810.
  • Shown in Fig.9 is a method 900 for generating an instruction set for an orthodontic template.
  • the method 900 may be performed by the apparatus 800 (Fig.8), the controller 14 (Fig.1) and/or any other computing device described herein.
  • the method 900 includes interpreting 910 a plurality of dental images 16 of a patient and tagging 912 each of the dental images 16 with a dental image type, as described herein.
  • the method 900 further includes determining 914 a cropping shape for each of the dental images 16 and/or tagged dental images.
  • the method 900 may further include determining 917 one or more transformations for each of the dental images 16 and/or tagged dental images.
  • the method 900 further includes generating 916 an instruction data set for producing an orthodontic template based at least in part on the tagged dental images and the cropping shapes.
  • the method 900 further includes electrically transmitting 918 the instruction data set over a network and/or printing 920 a hardcopy of the instruction data set and shipping 922 the hardcopy.
  • tagging 912 the dental images may include processing 924 the dental images with a neural network, e.g., model 28 (Fig.4).
  • determining 914 the cropping shapes may include identifying 926 one or more reference points within each of the dental images.
  • a method 100 for generating an orthodontic template may be performed by the apparatus 800 (Fig.8), the controller 14 (Fig.1), and/or any other computing device described herein.
  • the method 100 may include interpreting 110 a plurality of dental images 16 of a patient, tagging 112 each of the dental images with a dental image type, as described herein, and cropping 114 each of the dental images.
  • the method 100 may further include determining 117 one or more transformations for each of the dental images.
  • the method 100 may further include generating 116 an orthodontic template.
  • the method 100 may further include electrically transmitting 118 the orthodontic template and/or printing 120 a hardcopy of the orthodontic template and shipping 122 the hardcopy.
  • the method 100 may further include pairing 124 one or more physical materials for use with the orthodontic template.
  • tagging 112 the dental images may include processing 126 the dental images with a neural network, e.g., model 28 (Fig.4).
  • cropping 114 the dental images may include identifying 128 one or more reference points in each dental image.
  • the controller is structured to interpret the one or more dental images and generate the instruction data set for producing an orthodontic template based at least in part on the one or more dental images.
  • the controller is further structured to generate the instruction data set by tagging each of the one or more dental images with a dental image type label.
  • the dental image type label corresponds to at least one of: a Profile; a Profile- Smile; a Frontal; a Frontal-Smile; an Upper-Occlusal; a Lower-Occlusal; an Intraoral-Right; an Intraoral-Frontal; an Intraoral-Left; a Lateral Ceph; a PA Ceph; or a Panoramic X-Ray.
  • the controller is further structured to determine the dental image type label for each of the one or more dental images via a neural network. In certain embodiments, the controller is further structured to generate the instruction data set by determining at least one of a cropping shape or a transformation for each of the one or more dental images. In certain embodiments, the controller is structured to generate the instruction data set by determining a cropping shape for each of the one or more dental images and is further structured to determine the cropping shape via a neural network. In certain embodiments, the neural network is trained to identify one or more reference points in each of the one or more dental images. The controller determines the cropping shape based at least in part on the one or more reference points.
  • the one or more reference points are based at least in part on at least one of: a mouth; or a face.
  • the controller is further structured to generate the instruction data set based at least in part on: a first machine learning model structured to identify a dental image type for each of the one or more dental images; and a second machine learning model structured to identify a cropping shape for each of the one or more dental images.
  • a first machine learning model structured to identify a dental image type for each of the one or more dental images
  • a second machine learning model structured to identify a cropping shape for each of the one or more dental images.
  • the controller is structured to receive the one or more dental images and generate the orthodontic template based at least in part on the one or more dental images.
  • the method further includes at least one of: electrically transmitting the orthodontic template over a network; or printing a hardcopy of the orthodontic template and shipping the hardcopy.
  • tagging each of the plurality of dental images with a dental image type includes processing each of the plurality of dental images with a neural network.
  • the dental image type corresponds to at least one of: a Profile; a Profile-Smile; a Frontal; a Frontal-Smile; an Upper-Occlusal; a Lower-Occlusal; an Intraoral-Right; an Intraoral-Frontal; an Intraoral-Left; a Lateral Ceph; a PA Ceph; or a Panoramic X-Ray.
  • the method further includes pairing physical materials for use with the generated orthodontic template.
  • the method includes interpreting a plurality of dental images of a patient and tagging each of the plurality of dental images with a dental image type.
  • the method further includes determining at least one of a cropping shape or a transformation for each of the plurality of dental images and generating an instruction data set for producing an orthodontic template based at least in part on the plurality of tagged dental images and at least one of the cropping shapes or transformations.
  • the method further includes at least one of: electrically transmitting the instruction data set over a network; or printing a hardcopy of the instruction data set and shipping the hardcopy.
  • tagging each of the plurality of dental images with a dental image type includes processing each of the plurality of dental images with a neural network.
  • the dental image type corresponds to at least one of: a Profile; a Profile-Smile; a Frontal; a Frontal-Smile; an Upper-Occlusal; a Lower-Occlusal; an Intraoral-Right; an Intraoral-Frontal; an Intraoral-Left; a Lateral Ceph; a PA Ceph; or a Panoramic X-Ray.
  • the method includes determining a cropping shape for each of the plurality of dental images and further includes identifying, via a neural network, one or more reference points in each of the plurality of dental images.
  • the one or more reference points may be based at least in part on at least one of: a mouth; or a face. In certain embodiments, the one or more reference points may be within the mouth or the face.
  • Another example embodiment of the present disclosure includes a method for generating an orthodontic template. The method includes interpreting a plurality of dental images of a patient and tagging each of the plurality of dental images with a dental image type. The method further includes at least one of cropping or transforming each of the plurality of tagged dental images; and generating an orthodontic template based at least in part on the one or more dental images.
  • the method further includes at least one of: electrically transmitting the orthodontic template over a network; or printing a hardcopy of the orthodontic template and shipping the hardcopy.
  • tagging each of the plurality of dental images with a dental image type includes processing each of the plurality of dental images with a neural network.
  • the dental image type corresponds to at least one of: a Profile; a Profile-Smile; a Frontal; a Frontal-Smile; an Upper-Occlusal; a Lower- Occlusal; an Intraoral-Right; an Intraoral-Frontal; an Intraoral-Left; a Lateral Ceph; a PA Ceph; a Panoramic X-Ray; and/or Other.
  • the method further includes pairing physical materials for use with the generated orthodontic template.
  • the apparatus includes a dental image interpretation circuit, a tagging circuit, at least one of a cropping circuit or a transformation circuit, an orthodontic data generation circuit, and an orthodontic data provisioning circuit.
  • the dental image interpretation circuit is structured to interpret one or more dental images of a patient.
  • the tagging circuit is structured to tag each of the one or more dental images with a dental image type.
  • the cropping circuit is structured to determine a cropping shape value for each of the one or more dental images.
  • the transformation circuit is structured to determine a transformation value for each of the one or more dental images.
  • the orthodontic data generation circuit is structured to generate, based at least in part on the one or more tagged dental images and at least one of the corresponding one or more cropping shape values or transformation values, orthodontic data that defines, in part, an orthodontic template.
  • the orthodontic data provisioning circuit is structured to transmit the orthodontic data.
  • the tagging circuit includes a machine learning model structured to identify the dental image type for each of the one or more dental images.
  • the dental image type corresponds to at least one of: a Profile; a Profile-Smile; a Frontal; a Frontal-Smile; an Upper-Occlusal; a Lower-Occlusal; an Intraoral-Right; an Intraoral-Frontal; an Intraoral-Left; a Lateral Ceph; a PA Ceph; a Panoramic X-Ray; and/or Other.
  • the apparatus includes the cropping circuit and the cropping circuit includes a machine learning model structured to identify one or more reference points in each of the one or more dental images.
  • the one or more reference points include at least one of: a mouth; or a face.
  • the apparatus includes a pairing circuit structured to determine one or more physical materials for use with the generated orthodontic template.
  • An example computing device includes a computer of any type, capable of accessing instructions stored in communication thereto, such as upon a non-transient computer readable medium, whereupon the computer performs operations of the computing device upon executing the instructions.
  • such instructions themselves comprise a computing device.
  • a computing device may be a separate hardware device, one or more computing resources distributed across hardware devices, and/or may include such aspects as logical circuits, embedded circuits, sensors, actuators, input and/or output devices, network and/or communication resources, memory resources of any type, processing resources of any type, and/or hardware devices configured to be responsive to determined conditions to functionally execute one or more operations of systems and methods herein.
  • Network and/or communication resources include, without limitation, local area network, wide area network, wireless, internet, or any other known communication resources and protocols.
  • Example and non-limiting hardware and/or computing devices include, without limitation, a general purpose computer, a server, an embedded computer, a mobile device, a virtual machine, and/or an emulated computing device.
  • a computing device may be a distributed resource included as an aspect of several devices, included as an interoperable set of resources to perform described functions of the computing device, such that the distributed resources function together to perform the operations of the computing device.
  • each computing device may be on separate hardware, and/or one or more hardware devices may include aspects of more than one computing device, for example as separately executable instructions stored on the device, and/or as logically partitioned aspects of a set of executable instructions, with some aspects comprising a part of one of a first computing device, and some aspects comprising a part of another of the computing devices.
  • a computing device may be part of a server, client, network infrastructure, mobile computing platform, stationary computing platform, or other computing platform.
  • a processor may be any kind of computational or processing device capable of executing program instructions, codes, binary instructions and the like.
  • the processor may be or include a signal processor, digital processor, embedded processor, microprocessor or any variant such as a co-processor (math co-processor, graphic co-processor, communication co-processor and the like) and the like that may directly or indirectly facilitate execution of program code or program instructions stored thereon.
  • the processor may enable execution of multiple programs, threads, and codes. The threads may be executed simultaneously to enhance the performance of the processor and to facilitate simultaneous operations of the application.
  • methods, program codes, program instructions and the like described herein may be implemented in one or more threads.
  • the thread may spawn other threads that may have assigned priorities associated with them; the processor may execute these threads based on priority or any other order based on instructions provided in the program code.
  • the processor may include memory that stores methods, codes, instructions and programs as described herein and elsewhere.
  • the processor may access a storage medium through an interface that may store methods, codes, and instructions as described herein and elsewhere.
  • the storage medium associated with the processor for storing methods, programs, codes, program instructions or other type of instructions capable of being executed by the computing or processing device may include but may not be limited to one or more of a CD-ROM, DVD, memory, hard disk, flash drive, RAM, ROM, cache and the like.
  • a processor may include one or more cores that may enhance speed and performance of a multiprocessor.
  • the processor may be a dual core processor, quad core processor, other chip-level multiprocessor and the like that combine two or more independent cores (called a die).
  • the methods and systems described herein may be deployed in part or in whole through a machine that executes computer readable instructions on a server, client, firewall, gateway, hub, router, or other such computer and/or networking hardware.
  • the computer readable instructions may be associated with a server that may include a file server, print server, domain server, internet server, intranet server and other variants such as secondary server, host server, distributed server and the like.
  • the server may include one or more of memories, processors, computer readable transitory and/or non-transitory media, storage media, ports (physical and virtual), communication devices, and interfaces capable of accessing other servers, clients, machines, and devices through a wired or a wireless medium, and the like.
  • the methods, programs, or codes as described herein and elsewhere may be executed by the server.
  • other devices required for execution of methods as described in this application may be considered as a part of the infrastructure associated with the server.
  • the server may provide an interface to other devices including, without limitation, clients, other servers, printers, database servers, print servers, file servers, communication servers, distributed servers, and the like. Additionally, this coupling and/or connection may facilitate remote execution of instructions across the network.
  • all the devices attached to the server through an interface may include at least one storage medium capable of storing methods, program code, instructions, and/or programs.
  • a central repository may provide program instructions to be executed on different devices.
  • the remote repository may act as a storage medium for methods, program code, instructions, and/or programs.
  • the methods, program code, instructions, and/or programs may be associated with a client that may include a file client, print client, domain client, internet client, intranet client and other variants such as secondary client, host client, distributed client and the like.
  • the client may include one or more of memories, processors, computer readable transitory and/or non-transitory media, storage media, ports (physical and virtual), communication devices, and interfaces capable of accessing other clients, servers, machines, and devices through a wired or a wireless medium, and the like.
  • the methods, program code, instructions, and/or programs as described herein and elsewhere may be executed by the client.
  • other devices required for execution of methods as described in this application may be considered as a part of the infrastructure associated with the client.
  • the client may provide an interface to other devices including, without limitation, servers, other clients, printers, database servers, print servers, file servers, communication servers, distributed servers, and the like.
  • this coupling and/or connection may facilitate remote execution of methods, program code, instructions, and/or programs across the network.
  • the networking of some or all of these devices may facilitate parallel processing of methods, program code, instructions, and/or programs at one or more locations without deviating from the scope of the disclosure.
  • all the devices attached to the client through an interface may include at least one storage medium capable of storing methods, program code, instructions, and/or programs.
  • a central repository may provide program instructions to be executed on different devices.
  • the remote repository may act as a storage medium for methods, program code, instructions, and/or programs.
  • the network infrastructure may include elements such as computing devices, servers, routers, hubs, firewalls, clients, personal computers, communication devices, routing devices and other active and passive devices, modules, and/or components as known in the art.
  • the computing and/or non-computing device(s) associated with the network infrastructure may include, apart from other components, a storage medium such as flash memory, buffer, stack, RAM, ROM and the like.
  • the methods, program code, instructions, and/or programs described herein and elsewhere may be executed by one or more of the network infrastructural elements.
  • the methods, program code, instructions, and/or programs described herein and elsewhere may be implemented on a cellular network having multiple cells.
  • the cellular network may be either a frequency division multiple access (FDMA) network or a code division multiple access (CDMA) network.
  • the cellular network may include mobile devices, cell sites, base stations, repeaters, antennas, towers, and the like.
  • the methods, program code, instructions, and/or programs described herein and elsewhere may be implemented on or through mobile devices.
  • the mobile devices may include navigation devices, cell phones, mobile phones, mobile personal digital assistants, laptops, palmtops, netbooks, pagers, electronic book readers, music players and the like. These devices may include, apart from other components, a storage medium such as a flash memory, buffer, RAM, ROM and one or more computing devices.
  • the computing devices associated with mobile devices may be enabled to execute methods, program code, instructions, and/or programs stored thereon.
  • the mobile devices may be configured to execute instructions in collaboration with other devices.
  • the mobile devices may communicate with base stations interfaced with servers and configured to execute methods, program code, instructions, and/or programs.
  • the mobile devices may communicate on a peer-to-peer network, mesh network, or other communications network.
  • the methods, program code, instructions, and/or programs may be stored on the storage medium associated with the server and executed by a computing device embedded within the server.
  • the base station may include a computing device and a storage medium.
  • the storage device may store methods, program code, instructions, and/or programs executed by the computing devices associated with the base station.
  • the methods, program code, instructions, and/or programs may be stored and/or accessed on machine readable transitory and/or non-transitory media that may include: computer components, devices, and recording media that retain digital data used for computing for some interval of time; semiconductor storage known as random access memory (RAM); mass storage typically for more permanent storage, such as optical discs, forms of magnetic storage like hard disks, tapes, drums, cards and other types; processor registers, cache memory, volatile memory, non-volatile memory; optical storage such as CD, DVD; removable media such as flash memory (e.g. USB sticks or keys), floppy disks, magnetic tape, paper tape, punch cards, standalone RAM disks, Zip drives, removable mass storage, off-line, and the like; other computer memory such as dynamic memory, static memory, read/write storage, mutable storage, read only, random access, sequential access, location addressable, file addressable, content addressable, network attached storage, storage area network, bar codes, magnetic ink, and the like.
  • Certain operations described herein include interpreting, receiving, and/or determining one or more values, parameters, inputs, data, or other information (“receiving data”).
  • Operations to receive data include, without limitation: receiving data via a user input; receiving data over a network of any type; reading a data value from a memory location in communication with the receiving device; utilizing a default value as a received data value; estimating, calculating, or deriving a data value based on other information available to the receiving device; and/or updating any of these in response to a later received data value.
  • a data value may be received by a first operation, and later updated by a second operation, as part of receiving the data value. For example, when communications are down, intermittent, or interrupted, a first receiving operation may be performed, and when communications are restored an updated receiving operation may be performed.
  • the determining of a value may be required before an operational step in certain contexts, e.g., where the time delay of data for an operation to achieve a certain effect is important, but may not be required before that operation step in other contexts, e.g., where usage of the value from a previous execution cycle of the operations would be sufficient for those purposes. Accordingly, in certain embodiments an order of operations and grouping of operations as described is explicitly contemplated herein, and in certain embodiments re-ordering, subdivision, and/or different grouping of operations is explicitly contemplated herein.
  • The methods and systems described herein may transform physical and/or intangible items from one state to another.
  • the methods and systems described herein may also transform data representing physical and/or intangible items from one state to another.
  • the methods and/or processes described above, and steps thereof, may be realized in hardware, program code, instructions, and/or programs or any combination of hardware and methods, program code, instructions, and/or programs suitable for a particular application.
  • the hardware may include a dedicated computing device or specific computing device, a particular aspect or component of a specific computing device, and/or an arrangement of hardware components and/or logical circuits to perform one or more of the operations of a method and/or system.
  • the processes may be realized in one or more microprocessors, microcontrollers, embedded microcontrollers, programmable digital signal processors or other programmable device, along with internal and/or external memory.
  • the processes may also, or instead, be embodied in an application specific integrated circuit, a programmable gate array, programmable array logic, or any other device or combination of devices that may be configured to process electronic signals. It will further be appreciated that one or more of the processes may be realized as a computer executable code capable of being executed on a machine readable medium.
  • the computer executable code may be created using a structured programming language such as C, an object oriented programming language such as C++, or any other high-level or low-level programming language (including assembly languages, hardware description languages, and database programming languages and technologies) that may be stored, compiled or interpreted to run on one of the above devices, as well as heterogeneous combinations of processors, processor architectures, or combinations of different hardware and computer readable instructions, or any other machine capable of executing program instructions.
  • each method described above and combinations thereof may be embodied in computer executable code that, when executing on one or more computing devices, performs the steps thereof.
  • the methods may be embodied in systems that perform the steps thereof, and may be distributed across devices in a number of ways, or all of the functionality may be integrated into a dedicated, standalone device or other hardware.
  • the means for performing the steps associated with the processes described above may include any of the hardware and/or computer readable instructions described above. All such permutations and combinations are intended to fall within the scope of the present disclosure.
  • Accordingly, as disclosed herein, embodiments of the current disclosure may provide for more accurate orthodontic templates, as compared to traditional processes, by reducing the amount of human involvement in processing dental images. Further, some embodiments of the current disclosure may improve the efficiency of a medical practice by reducing and/or eliminating the amount of time a medical practitioner needs to spend generating an orthodontic template.
  • Some embodiments of the current disclosure may provide for the generation of orthodontic templates and/or instruction data sets as a service.
  • an orthodontic office may subscribe to the service, provide dental images, either electronically or physically to an entity operating the controller 14, wherein the entity processes the dental images, as described herein, and sends an orthodontic template and/or instruction data sets back to the orthodontic office.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Epidemiology (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Pathology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Dentistry (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

A system for generating an orthodontic template is provided. The system includes an imaging device and a controller. The imaging device is operative to acquire one or more dental images of a patient. The controller is operative to receive the one or more dental images, tag each of the one or more dental images with a dental image type label via a first machine learning model; and crop each of the one or more tagged dental images via a second machine learning model. The controller is further operative to generate an orthodontic template based at least in part on the one or more cropped and tagged dental images.

Description

SYSTEMS AND METHODS FOR GENERATING ORTHODONTIC TEMPLATES

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to U.S. Provisional Patent Application Serial No. 63/028,756, filed May 22, 2020 and entitled “SYSTEMS AND METHODS FOR GENERATING ORTHODONTIC TEMPLATES” (PHIM-0001-P01).

[0002] All of the above patent documents are incorporated herein by reference in their entirety for all purposes.

BACKGROUND

[0001] Field:

[0002] Embodiments of the present disclosure relate to orthodontics, and more specifically, to systems and methods for generating orthodontic templates.

[0003] Description of the Related Art:

[0004] In orthodontics, a common practice is to take photos from multiple perspectives of a patient’s face and mouth in addition to dental x-rays. The orthodontist or assistant typically edits and arranges the images into a layout called an image template, also referred to herein as an “orthodontic template” or simply as a “template”. Typically, the orthodontist uses an image template to form their diagnosis and/or as a communication or marketing device, for instance, to communicate the diagnosis and treatment plan to a patient.

[0005] There are often many possible arrangements for an image template, with different orthodontists preferring certain templates over others, although most orthodontists use similar layouts. In addition to the arrangement, orthodontists must often edit (e.g., rotate, crop, or flip) the images to produce a consistent-looking image template. Thus, it often takes time to edit all the images and place them in their proper arrangement by hand.

[0006] Accordingly, there remains a need for improved systems and methods for generating orthodontic templates.

SUMMARY

[0007] Embodiments in accordance with this disclosure provide for automatic image templating which takes an unstructured set of images and automatically edits and/or arranges the images into a desired image template, or instructions to create a template, with little or no effort on the part of a user. In certain aspects, machine learning is used to automatically tag dental images with appropriate dental image type labels and/or to crop, or otherwise manipulate, the dental images, thereby greatly improving the speed, accuracy, and/or efficiency of generating orthodontic templates. For example, some embodiments may include a remote server that provides orthodontic services as a cloud/Internet based service to dental offices across the World.

[0008] Accordingly, in embodiments, a system may include an imaging device and a controller. The imaging device is operative to acquire one or more dental images of a patient. The controller is operative to receive the one or more dental images, and to generate an instruction data set for producing an orthodontic template based at least in part on the one or more dental images. In certain aspects, the controller is further operative to generate the instruction data set by tagging each of the one or more images with a dental image type label. In certain aspects, the dental image type labels correspond to at least one dental image type selected from the dental image types consisting of: Profile; Profile-Smile; Frontal; Frontal-Smile; Upper-Occlusal; Lower-Occlusal; Intraoral-Right; Intraoral-Frontal; Intraoral-Left; Lateral Ceph; PA Ceph; Panoramic X-Ray; or Other. In certain aspects, the controller is further operative to generate the instruction data set by determining a cropping shape for each of the one or more dental images.
[0009] In other embodiments, a system may include an imaging device and a controller. The imaging device is operative to acquire one or more dental images of a patient. The controller is operative to receive the one or more dental images; and to generate an orthodontic template based at least in part on the one or more dental images.

[0010] In yet other embodiments, a system may include an imaging device and a controller. The imaging device is operative to acquire one or more dental images of a patient. The controller is operative to receive the one or more dental images; tag each of the one or more dental images with a dental image type label via a first convolutional neural network; crop each of the one or more tagged dental images via a second convolutional neural network; and generate an orthodontic template based at least in part on the one or more cropped and tagged dental images.

[0011] In yet other embodiments, a method may include receiving one or more dental images of a patient; tagging each of the one or more dental images with a dental image type; determining a cropping shape for each of the one or more tagged dental images based at least in part on the tagging; and generating an instruction data set for producing an orthodontic template based at least in part on the one or more tagged dental images and the cropping shapes.

[0012] In yet other embodiments, a method may include receiving one or more dental images of a patient; tagging each of the one or more dental images with a dental image type; cropping each of the one or more tagged dental images; and generating an orthodontic template based at least in part on the one or more cropped and tagged dental images. In certain aspects, the method may further include pairing physical materials for use with the generated orthodontic template.

[0013] In yet other embodiments, a system may include at least one processor and a memory device. The memory device stores a set of instructions that adapts the at least one processor to: receive one or more dental images; and generate an instruction data set for producing an orthodontic template based at least in part on the one or more dental images.

[0014] In yet other embodiments, a system may include at least one processor and a memory device. The memory device stores a set of instructions that adapts the at least one processor to: receive one or more dental images; and generate an orthodontic template based at least in part on the one or more dental images.

[0015] In yet other embodiments, a system includes an imaging device and a controller. The imaging device is structured to acquire one or more dental images of a patient. The controller is structured to interpret the one or more dental images and generate an instruction data set for producing an orthodontic template based at least in part on the one or more dental images.

[0016] In yet other embodiments, a system includes an imaging device and a controller. The imaging device is structured to acquire one or more dental images of a patient. The controller is structured to receive the one or more dental images and generate an orthodontic template based at least in part on the one or more dental images.

[0017] In yet other embodiments, a method includes interpreting a plurality of dental images of a patient and tagging each of the plurality of dental images with a dental image type.
The method further includes determining at least one of a cropping shape or a transformation for each of the plurality of dental images, and generating an instruction data set for producing an orthodontic template based at least in part on the plurality of tagged dental images and at least one of the cropping shapes or transformations. [0018] In yet other embodiments, a method includes interpreting a plurality of dental images of a patient and tagging each of the plurality of dental images with a dental image type. The method further includes at least one of cropping or transforming each of the plurality of tagged dental images; and generating an orthodontic template based at least in part on the one or more dental images. [0019] In yet other embodiments, an apparatus includes a dental image interpretation circuit, a tagging circuit, at least one of a cropping circuit or a transformation circuit, an orthodontic data generation circuit, and an orthodontic data provisioning circuit. The dental image interpretation circuit is structured to interpret one or more dental images of a patient. The tagging circuit is structured to tag each of the one or more dental images with a dental image type. The cropping circuit is structured to determine a cropping shape value for each of the one or more dental images. The transformation circuit is structured to determine a transformation value for each of the one or more dental images. The orthodontic data generation circuit is structured to generate, based at least in part on the one or more tagged dental images and the corresponding one or more cropping shape values, orthodontic data that defines, in part, an orthodontic template. The orthodontic data provisioning circuit is structured to transmit the orthodontic data. [0020] These and other systems, methods, objects, features, and advantages of the present disclosure will be apparent to those skilled in the art from the following detailed description of the preferred embodiment and the drawings. [0021] All documents mentioned herein are hereby incorporated in their entirety by reference. References to items in the singular should be understood to include items in the plural, and vice versa, unless explicitly stated otherwise or clear from the text. Grammatical conjunctions are intended to express any and all disjunctive and conjunctive combinations of conjoined clauses, sentences, words, and the like, unless otherwise stated or clear from the context. 
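As a non-limiting illustration of the instruction data sets recited above, the short Python sketch below serializes one hypothetical per-image instruction as JSON (machine-readable instructions of this kind are discussed below with reference to Fig.6). Every field name and value shown is an assumption chosen for readability, not a format prescribed by this disclosure.

```python
# Illustrative sketch only: a hypothetical per-image entry of an
# instruction data set, serialized as JSON. All field names and values
# are assumptions for illustration.
import json

instruction = {
    "image_id": "IMG_0042.jpg",
    "label": "Intraoral-Left",                      # dental image type label
    "crop": {"x": 410, "y": 220, "width": 1280, "height": 960},
    "transform": {"rotate_deg": -2.5, "reflect_horizontal": False},
    "template_slot": "intraoral_left",              # placement in the layout
}

print(json.dumps(instruction, indent=2))
```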
BRIEF DESCRIPTION OF THE FIGURES [0022] The disclosure and the following detailed description of certain embodiments thereof may be understood by reference to the following figures: [0023] Fig.1 depicts a diagram of a system for generating orthodontic templates in accordance with an embodiment of this disclosure; [0024] Fig.2 depicts a block diagram of a machine learning model of the system of Fig.1, in accordance with an embodiment of this disclosure; [0025] Fig.3 depicts a diagram of an orthodontic template, in accordance with an embodiment of this disclosure; [0026] Fig.4 depicts another block diagram of the machine learning model of the system of Fig.1, in accordance with an embodiment of this disclosure; [0027] Fig.5 depicts a detection box and reference points overlaid on a dental image acquired by the system of Fig.1, in accordance with an embodiment of this disclosure; [0028] Fig.6 depicts machine-readable instructions for generating an orthodontic template, in accordance with an embodiment of this disclosure; [0029] Fig.7 depicts a block diagram of a method for generating orthodontic templates utilizing the system of Fig.1, in accordance with an embodiment of this disclosure; [0030] Fig.8 depicts an apparatus for generating orthodontic data, in accordance with an embodiment of this disclosure; [0031] Fig.9 depicts a method for generating an instruction data set for producing an orthodontic template, in accordance with an embodiment of this disclosure; and [0032] Fig.10 depicts a method for generating an orthodontic template, in accordance with an embodiment of this disclosure. DETAILED DESCRIPTION [0033] Disclosed herein are embodiments of systems and methods for providing automatic image templating which may take an unstructured set of images and automatically edit and/or arrange them into desired orthodontic templates, or instructions to create a template, with little or no effort on the part of a user. Non-limiting examples of image templates include electronic files and/or physical objects that present one or more related images in a desired format. For example, an orthodontic template may present dental images in a desired arrangement in which certain types of images have a particular orientation, location, cropping, and/or other desired feature. An image’s location in a template may be based on one or more relationships to other images included in the template. For example, an orthodontic template may locate, from left to right, an Intraoral-Right image next to an Intraoral-Frontal image next to an Intraoral-Left image. Further, while many of the embodiments herein are described with respect to orthodontic templates, it should be understood that embodiments of the current disclosure are applicable to other fields, e.g., dentistry and/or dental surgery. [0034] Accordingly, referring to Fig.1, a system 10 for generating orthodontic templates, in accordance with an embodiment of this disclosure, is shown. The system 10 may include an imaging device 12 and at least one controller 14. As will be explained in greater detail below, the imaging device 12 may be operative to acquire one or more dental images 16 (Fig.2) of a patient, and the controller 14 may be operative to generate an orthodontic template 18 (Fig.2) and/or instructions 20 (Fig.2) for generating an orthodontic template. In embodiments, the imaging device 12 may be a single device, while in other embodiments, the imaging device 12 may be multiple devices, e.g., separate x-ray and optical cameras.
For example, in some embodiments, the imaging device 12 may include an x-ray device for acquiring x-ray images and one or more handheld cameras for acquiring optical (human visible spectrum) images. [0035] As shown in Figs.1 and 2, the imaging device 12 may take x-ray and/or optical (i.e., human visible spectrum) images 16 of a patient. The images 16 may be two-dimensional (2D) and/or three-dimensional (3D). The images 16 may be digitized and transmitted electronically to the controller 14, i.e., delivered to the controller 14 by any electronic means possible, e.g., network transmission, scanner, USB and/or other memory device such as a CD, DVD, etc. In embodiments, additional information, e.g., patient medical history, may also be electronically transmitted to the controller 14. Non-limiting examples of patient medical history include name, age, gender, records of tooth extractions or implants, diagnosis information, and/or other types of medical information. In embodiments, such information may be included on a generated template for reference use by a medical professional, e.g., an orthodontist, when using the template to diagnose a patient. In embodiments, a patient’s medical history may be used by one or more models, as described herein, e.g., knowledge of an extracted tooth could inform the disambiguation of the left and right sides of the mouth. [0036] As further shown in Fig.1, the controller 14 may be located apart from the imaging device 12 such that the images 16 are transmitted over a network 22. As will be understood, the imaging device 12 and controller 14 may be located in the same room, building and/or compound. In embodiments, the imaging device 12 and controller 14 may be in separate rooms in the same building, in different buildings and/or in different geographic regions. Thus, in embodiments, the network 22 may include an intranet and/or the Internet. Further, while Fig.1 depicts the imaging device 12 and the controller 14 in communication over the network 22, it will be understood that, in embodiments, the controller 14 may be incorporated into the imaging device 12 or placed in electronic communication with the imaging device 12 via a direct connection, e.g., serial, USB, etc. In embodiments, the controller 14 may include at least one processor and/or a memory device. For example, as shown in Fig.1, the controller 14 may be a remote server connected to the network 22. In embodiments, the controller 14 may transmit the template 18 and/or the instructions 20 to the dental professional via the network 22. [0037] Referring to Fig.2, a dental professional may acquire an unordered and/or uncropped set of images 16 from a patient and desire to have them arranged in the orthodontic template 18. To effect this, the dental professional may transmit the images 16 to the controller 14. In embodiments, the images 16 may be provided to the controller 14 via a web interface, e.g., the user may drag and drop the images 16 onto a page for uploading to the controller 14. In other embodiments, the imaging device 12 may transmit the images 16 to the controller 14 via a non-web-based interface, e.g., an application programming interface (API). [0038] In embodiments, an algorithm/model 24 executing on the controller 14 may take as input the set of images 16 and output information/instructions 20 for editing and/or positioning the images 16 into any desired template, e.g., template 18.
In embodiments, the information 20 may include a label (also referred to herein as a “dental image type label”) for each image in the set 16. For example, with reference to the non-limiting orthodontic template 300 shown in Fig.3, the dental image types may include "Profile" 310; "Profile-Smile" 312; "Frontal" 314; "Frontal-Smile" 316; "Upper-Occlusal" 318; "Lower-Occlusal" 320; "Intraoral-Right" 322; "Intraoral-Frontal" 324; "Intraoral-Left" 326; "PA Ceph" 328; "Lateral Ceph" 330; and/or "Panoramic X-Ray" 332. Other non-limiting examples of dental image types may include "Periapical X-Ray", "Bitewing X-Ray", and/or "Other". Referring back to Fig.2, it will be understood that the foregoing list of example dental image type labels may be represented by other strings and/or symbols. The information 20 may also include the detection of relevant parts/reference points 26 (Fig.5) of an image 16 (such as the location of the face, mouth, or other reference points) that can be used to best crop the image 16. In embodiments, reference points may generally be defined arbitrarily, but in such a manner that they are consistent across any set of data. For example, reference points may be defined as the four (4) corners of a rectangle used to crop an image. In embodiments, reference points may be defined on, within, or otherwise based in part on, points, such as the mouth or teeth (or structures therein), which can be reliably identified and/or labeled. Non-limiting examples of reliable points include the point between the central incisor at the line of occlusion and the point of the first molar at the line of occlusion. [0039] It is also possible to implicitly provide the information 20 by outputting the edited images or layout 18 directly. As will be understood, the information 20 may also be used by the controller 14 to create/generate the automated image template 18, which may be further refined or adjusted by an orthodontist or other user. [0040] Moving to Fig.4, in embodiments, the model 24 may employ a machine learning approach to determine the label for each image 16 and the information 20 for editing and cropping the image 16. For example, some embodiments may include one or more artificial intelligence/machine learning models 28 and/or 30 in which a first machine learning model 28 may be trained to determine and assign a label to an image 16, and a second machine learning model 30 may be trained to determine a cropping shape for the image 16. As will be understood, in embodiments, the second learning model 30 may also determine one or more transformations, e.g., rotations, reflections, scaling, etc., for the image 16. In embodiments, the machine learning models 28 and/or 30 may be and/or include one or more members of a family of functions known as Artificial Neural Networks for classification and regression tasks, e.g., convolutional neural network architectures pre-trained on a large set of images labeled for different tasks. The models 28 and/or 30 may then be fine-tuned using the images and ground-truth labels and annotations, e.g., supervised learning. The models 28 and/or 30 may be trained to identify reference points, as described herein, and/or to predict a label, predict a cropping, and/or predict a transformation without use of reference points. For example, some embodiments of the models may directly generate θ (Fig.5), scale, and/or resize a dental image.
As such, in embodiments, reference points may be defined as data which is annotated and/or later predicted to achieve a desired image transformation. [0041] For example, in embodiments, the task of determining the label of an image 16 may be treated as a classification problem. As such, the first machine learning model 28 may be trained using images that have ground-truth labels assigned to them. In such embodiments, the first machine learning model 28 learns to classify/label new images correctly, which, in turn, provides for the images 16 to be correctly placed for any desired template 18. A training data set, in accordance with an embodiment of the current disclosure, may be created by grouping a large set of images into classes by their image label, e.g., “Intraoral-Frontal”, and then associating each image and/or group with one or more desired transformations, e.g., rotations, reflections, croppings, etc. The grouped images may then be annotated with data describing how each image was transformed in a way understandable by a machine, e.g., JavaScript Object Notation (JSON) 600 (Fig.6) and/or XML files. Embodiments of such training data may also include annotations identifying reference points and/or other values corresponding to the transformations. [0042] As another example, in embodiments, the task of determining the best (or otherwise appropriate) edit and/or cropping (cropping shape) of an image 16 may be treated as a combined detection and regression problem. In embodiments, editing of the image 16 may include scaling, translating, rotating by an amount θ (Fig.5), and/or reflecting the image. For example, the second machine learning model 30 may be trained to identify reference points 26 (Fig.5) for deriving/adjusting a cropping of the image 16, and to generate a cropping window (which may be based on a detection window 32) anchored around the detected reference points 26. [0043] The second machine learning model 30 may also be trained using images that have ground-truth labels assigned to them, to include any ground-truth annotations needed to determine the best (or otherwise appropriate) edit and crop. For example, an image labeled “Intraoral-Left” might have an annotation for the region to crop around the mouth, as well as two reference points for the lower incisor edge (L1) and the lower 1st molar occlusal (L6). The second machine learning model 30 may learn to detect these annotations in new “Intraoral-Left” images. The predicted annotations can then be used to determine the optimal edit and/or crop. [0044] While the above examples disclosed embodiments in which the model 24 includes two machine learning models 28 and 30, it will be understood that other embodiments may employ a single machine learning model that performs the functions of both models 28 and 30. Further, other embodiments may employ three (3) or more machine learning models. [0045] Additionally, in embodiments, physical materials, e.g., wires, braces, molds, retainers, oral cement, rubber bands, etc., for use with the generated information 20 and/or template 18 may be paired with the information 20 and/or template 18. In such embodiments, the paired materials may be shipped to a user, e.g., dental professional, of the information 20 and/or template 18.
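By way of non-limiting illustration of the reference-point approach of paragraphs [0042] and [0043], the following Python sketch derives a rotation angle θ and a crop window from two hypothetical reference points, e.g., the lower incisor edge (L1) and the lower first molar occlusal point (L6). The point names, padding factor, and aspect ratio are illustrative assumptions rather than disclosed values, and a production implementation would apply the rotation to the pixels before cropping; this sketch simply reports both quantities.

```python
# Illustrative sketch only: derive a rotation angle (degrees) and a crop
# rectangle from two predicted reference points. All constants here are
# assumptions for illustration, not values prescribed by the disclosure.
import math

def derive_edit(l1, l6, pad=1.8, aspect=4 / 3):
    """l1, l6: (x, y) pixel coordinates of two occlusal reference points."""
    dx, dy = l6[0] - l1[0], l6[1] - l1[1]
    theta = math.degrees(math.atan2(dy, dx))  # tilt of the occlusal line

    # Center the crop window midway between the reference points and size
    # it proportionally to their separation.
    cx, cy = (l1[0] + l6[0]) / 2, (l1[1] + l6[1]) / 2
    width = pad * math.hypot(dx, dy)
    height = width / aspect
    return {
        "rotate_deg": -theta,  # rotate so the occlusal line is horizontal
        "crop": {"x": cx - width / 2, "y": cy - height / 2,
                 "width": width, "height": height},
    }

# Example: an occlusal line tilted slightly downward toward the molar.
print(derive_edit((350.0, 400.0), (800.0, 430.0)))
```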
[0046] Further, while Fig.2 depicts a single template 18 with a particular layout, it will be understood that embodiments of the disclosure may be configured to output any number of different templates, e.g., templates 34 and 36 as shown in Fig.7, which may be selected by the user/medical professional submitting the images 16 to the controller 14. [0047] Shown in Fig.6 is an example of instructions 600 for assembling/laying out a template. As stated above, the instructions 600 may be in machine-readable format, e.g., JSON, or in human-readable form, e.g., a set of step-by-step instructions for cropping and labeling the raw dental images. While Fig.6 depicts the instructions 600 in JSON format, it will be understood that any computer-readable format may be used. [0048] Turning to Fig.8, an apparatus 800 for generating orthodontic data 810 is shown. The apparatus 800 may form part of the controller 14 (Fig.1) and/or another type of computing device described herein. In embodiments, the apparatus 800 includes a dental image interpretation circuit 812 structured to interpret one or more dental images 16 of a patient. The apparatus 800 further includes a tagging circuit 814 structured to tag each of the one or more dental images 16 with a dental image type, as described herein, to generate tagged dental images 816. The apparatus 800 further includes a cropping circuit 818 structured to determine a cropping shape value 820 for each of the one or more dental images 16 and/or tagged dental images 816. The apparatus 800 may include a transformation circuit 817 structured to determine one or more transformation values 819, e.g., computer-readable instructions that specify rotations, reflections, scaling, etc., for the one or more dental images 16. The apparatus 800 further includes an orthodontic data generation circuit 822 structured to generate, based at least in part on the one or more tagged dental images 816 and the corresponding cropping shape values 820, the orthodontic data 810. The orthodontic data 810 may define, in part, an orthodontic template, as described herein. For example, in embodiments, the orthodontic data 810 may define an instruction data set and/or an orthodontic template. In embodiments, the apparatus 800 may further include an orthodontic data provisioning circuit 824 structured to transmit the orthodontic data 810, e.g., to an orthodontic practitioner. [0049] In embodiments, the tagging circuit 814 may include a machine learning model, e.g., model 28 (Fig.4), structured to identify the dental image type for each of the one or more dental images 16. In embodiments, the cropping circuit 818 may include a machine learning model, e.g., model 30 (Fig.4), structured to identify the one or more cropping shape values. [0050] In embodiments, the apparatus 800 may further include a pairing circuit 826 structured to determine one or more physical materials for use with the orthodontic template. The orthodontic data generation circuit 822 may be further structured to configure the orthodontic data 810 to indicate the one or more physical materials for use with the orthodontic template defined in part by the orthodontic data 810. [0051] Shown in Fig.9 is a method 900 for generating an instruction data set for an orthodontic template. The method 900 may be performed by the apparatus 800 (Fig.8), the controller 14 (Fig.1), and/or any other computing device described herein.
The method 900 includes interpreting 910 a plurality of dental images 16 of a patient and tagging 912 each of the dental images 16 with a dental image type, as described herein. The method 900 further includes determining 914 a cropping shape for each of the dental images 16 and/or tagged dental images. The method 900 may further include determining 917 one or more transformations for each of the dental images 16 and/or tagged dental images. The method 900 further includes generating 916 an instruction data set for producing an orthodontic template based at least in part on the tagged dental images and the cropping shapes. The method 900 further includes electrically transmitting 918 the instruction data set over a network and/or printing 920 a hardcopy of the instruction data set and shipping 922 the hardcopy. In embodiments, tagging 912 the dental images may include processing 924 the dental images with a neural network, e.g., model 28 (Fig.4). In embodiments, determining 914 the cropping shapes may include identifying 926 one or more reference points within each of the dental images. [0052] Shown in Fig.10 is a method 100 for generating an orthodontic template. The method 100 may be performed by the apparatus 800 (Fig.8), the controller 14 (Fig.1), and/or any other computing device described herein. The method 100 may include interpreting 110 a plurality of dental images 16 of a patient, tagging 112 each of the dental images with a dental image type, as described herein, and cropping 114 each of the dental images. The method 100 may further include determining 117 one or more transformations for each of the dental images. The method 100 may further include generating 116 an orthodontic template. In embodiments, the method 100 may further include electrically transmitting 118 the orthodontic template and/or printing 120 a hardcopy of the orthodontic template and shipping 122 the hardcopy. In embodiments, the method 100 may further include pairing 124 one or more physical materials for use with the orthodontic template. In embodiments, tagging 112 the dental images may include processing 126 the dental images with a neural network, e.g., model 28 (Fig.4). In embodiments, cropping 114 the dental images may include identifying 128 one or more reference points in each dental image. [0053] An example embodiment of the present disclosure, utilizing one or more aspects as set forth herein, includes a system for generating an instruction data set for producing an orthodontic template. The system includes an imaging device and a controller. The imaging device is structured to acquire one or more dental images of a patient. The controller is structured to interpret the one or more dental images and generate the instruction data set for producing an orthodontic template based at least in part on the one or more dental images. In certain embodiments, the controller is further structured to generate the instruction data set by tagging each of the one or more dental images with a dental image type label. In certain embodiments, the dental image type label corresponds to at least one of: a Profile; a Profile-Smile; a Frontal; a Frontal-Smile; an Upper-Occlusal; a Lower-Occlusal; an Intraoral-Right; an Intraoral-Frontal; an Intraoral-Left; a Lateral Ceph; a PA Ceph; or a Panoramic X-Ray. In certain embodiments, the controller is further structured to determine the dental image type label for each of the one or more dental images via a neural network.
In certain embodiments, the controller is further structured to generate the instruction data set by determining at least one of a cropping shape or a transformation for each of the one or more dental images. In certain embodiments, the controller is structured to generate the instruction data set by determining a cropping shape for each of the one or more dental images and is further structured to determine the cropping shape via a neural network. In certain embodiments, the neural network is trained to identify one or more reference points in each of the one or more dental images. The controller determines the cropping shape based at least in part on the one or more reference points. In certain embodiments, the one or more reference points are based at least in part on at least one of: a mouth; or a face. In certain embodiments, the controller is further structured to generate the instruction data set based at least in part on: a first machine learning model structured to identify a dental image type for each of the one or more dental images; and a second machine learning model structured to identify a cropping shape for each of the one or more dental images. [0054] Another example embodiment of the present disclosure, utilizing one or more aspects as set forth herein, includes a system for generating an orthodontic template. The system includes an imaging device and a controller. The imaging device is structured to acquire one or more dental images of a patient. The controller is structured to receive the one or more dental images and generate the orthodontic template based at least in part on the one or more dental images. In certain embodiments, the system further provides for at least one of: electrically transmitting the orthodontic template over a network; or printing a hardcopy of the orthodontic template and shipping the hardcopy. In certain embodiments, tagging each of the one or more dental images with a dental image type includes processing each of the one or more dental images with a neural network. In certain embodiments, the dental image type corresponds to at least one of: a Profile; a Profile-Smile; a Frontal; a Frontal-Smile; an Upper-Occlusal; a Lower-Occlusal; an Intraoral-Right; an Intraoral-Frontal; an Intraoral-Left; a Lateral Ceph; a PA Ceph; or a Panoramic X-Ray. In certain embodiments, the system further provides for pairing physical materials for use with the generated orthodontic template. [0055] Another example embodiment of the present disclosure, utilizing one or more aspects as set forth herein, includes a method for generating an instruction data set for producing an orthodontic template. The method includes interpreting a plurality of dental images of a patient and tagging each of the plurality of dental images with a dental image type. The method further includes determining at least one of a cropping shape or a transformation for each of the plurality of dental images and generating an instruction data set for producing an orthodontic template based at least in part on the plurality of tagged dental images and at least one of the cropping shapes or transformations. In certain embodiments, the method further includes at least one of: electrically transmitting the instruction data set over a network; or printing a hardcopy of the instruction data set and shipping the hardcopy. In certain embodiments, tagging each of the plurality of dental images with a dental image type includes processing each of the plurality of dental images with a neural network.
In certain embodiments, the dental image type corresponds to at least one of: a Profile; a Profile-Smile; a Frontal; a Frontal-Smile; an Upper-Occlusal; a Lower-Occlusal; an Intraoral-Right; an Intraoral-Frontal; an Intraoral-Left; a Lateral Ceph; a PA Ceph; or a Panoramic X-Ray. In certain embodiments, the method includes determining a cropping shape for each of the plurality of dental images and further includes identifying, via a neural network, one or more reference points in each of the plurality of dental images. In certain embodiments, the one or more reference points may be based at least in part on at least one of: a mouth; or a face. In certain embodiments, the one or more reference points may be within the mouth or the face. [0056] Another example embodiment of the present disclosure, utilizing one or more aspects as set forth herein, includes a method for generating an orthodontic template. The method includes interpreting a plurality of dental images of a patient and tagging each of the plurality of dental images with a dental image type. The method further includes at least one of cropping or transforming each of the plurality of tagged dental images; and generating an orthodontic template based at least in part on the one or more dental images. In certain embodiments, the method further includes at least one of: electrically transmitting the orthodontic template over a network; or printing a hardcopy of the orthodontic template and shipping the hardcopy. In certain embodiments, tagging each of the plurality of dental images with a dental image type includes processing each of the plurality of dental images with a neural network. In certain embodiments, the dental image type corresponds to at least one of: a Profile; a Profile-Smile; a Frontal; a Frontal-Smile; an Upper-Occlusal; a Lower-Occlusal; an Intraoral-Right; an Intraoral-Frontal; an Intraoral-Left; a Lateral Ceph; a PA Ceph; a Panoramic X-Ray; and/or Other. In certain embodiments, the method further includes pairing physical materials for use with the generated orthodontic template. [0057] Another example embodiment of the present disclosure, utilizing one or more aspects as set forth herein, includes an apparatus for generating orthodontic data. The apparatus includes a dental image interpretation circuit, a tagging circuit, at least one of a cropping circuit or a transformation circuit, an orthodontic data generation circuit, and an orthodontic data provisioning circuit. The dental image interpretation circuit is structured to interpret one or more dental images of a patient. The tagging circuit is structured to tag each of the one or more dental images with a dental image type. The cropping circuit is structured to determine a cropping shape value for each of the one or more dental images. The transformation circuit is structured to determine a transformation value for each of the one or more dental images. The orthodontic data generation circuit is structured to generate, based at least in part on the one or more tagged dental images and at least one of the corresponding one or more cropping shape values or transformation values, orthodontic data that defines, in part, an orthodontic template. The orthodontic data provisioning circuit is structured to transmit the orthodontic data. In certain embodiments, the tagging circuit includes a machine learning model structured to identify the dental image type for each of the one or more dental images.
In certain embodiments, the dental image type corresponds to at least one of: a Profile; a Profile-Smile; a Frontal; a Frontal-Smile; an Upper-Occlusal; a Lower-Occlusal; an Intraoral-Right; an Intraoral-Frontal; an Intraoral-Left; a Lateral Ceph; a PA Ceph; a Panoramic X-Ray; and/or Other. In certain embodiments, the apparatus includes the cropping circuit and the cropping circuit includes a machine learning model structured to identify one or more reference points in each of the one or more dental images. In certain embodiments, the one or more reference points include at least one of: a mouth; or a face. In certain embodiments, the apparatus includes a pairing circuit structured to determine one or more physical materials for use with the generated orthodontic template. [0058] The methods and systems described herein may be deployed in part or in whole through a machine having a computer, computing device, processor, circuit, and/or server that executes computer readable instructions, program codes, instructions, and/or includes hardware configured to functionally execute one or more operations of the methods and systems herein. The terms computer, computing device, processor, circuit, and/or server (“computing device”), as utilized herein, should be understood broadly. [0059] An example computing device includes a computer of any type, capable of accessing instructions stored in communication therewith, such as upon a non-transient computer readable medium, whereupon the computer performs operations of the computing device upon executing the instructions. In certain embodiments, such instructions themselves comprise a computing device. Additionally or alternatively, a computing device may be a separate hardware device, one or more computing resources distributed across hardware devices, and/or may include such aspects as logical circuits, embedded circuits, sensors, actuators, input and/or output devices, network and/or communication resources, memory resources of any type, processing resources of any type, and/or hardware devices configured to be responsive to determined conditions to functionally execute one or more operations of systems and methods herein. [0060] Network and/or communication resources include, without limitation, local area network, wide area network, wireless, internet, or any other known communication resources and protocols. Example and non-limiting hardware and/or computing devices include, without limitation, a general purpose computer, a server, an embedded computer, a mobile device, a virtual machine, and/or an emulated computing device. A computing device may be a distributed resource included as an aspect of several devices, included as an interoperable set of resources to perform described functions of the computing device, such that the distributed resources function together to perform the operations of the computing device. In certain embodiments, each computing device may be on separate hardware, and/or one or more hardware devices may include aspects of more than one computing device, for example as separately executable instructions stored on the device, and/or as logically partitioned aspects of a set of executable instructions, with some aspects comprising a part of one of a first computing device, and some aspects comprising a part of another of the computing devices. [0061] A computing device may be part of a server, client, network infrastructure, mobile computing platform, stationary computing platform, or other computing platform.
A processor may be any kind of computational or processing device capable of executing program instructions, codes, binary instructions and the like. The processor may be or include a signal processor, digital processor, embedded processor, microprocessor or any variant such as a co-processor (math co-processor, graphic co-processor, communication co-processor and the like) and the like that may directly or indirectly facilitate execution of program code or program instructions stored thereon. In addition, the processor may enable execution of multiple programs, threads, and codes. The threads may be executed simultaneously to enhance the performance of the processor and to facilitate simultaneous operations of the application. By way of implementation, methods, program codes, program instructions and the like described herein may be implemented in one or more threads. The thread may spawn other threads that may have assigned priorities associated with them; the processor may execute these threads based on priority or any other order based on instructions provided in the program code. The processor may include memory that stores methods, codes, instructions and programs as described herein and elsewhere. The processor may access a storage medium through an interface that may store methods, codes, and instructions as described herein and elsewhere. The storage medium associated with the processor for storing methods, programs, codes, program instructions or other type of instructions capable of being executed by the computing or processing device may include but may not be limited to one or more of a CD-ROM, DVD, memory, hard disk, flash drive, RAM, ROM, cache and the like. [0062] A processor may include one or more cores that may enhance speed and performance of a multiprocessor. In embodiments, the processor may be a dual-core processor, a quad-core processor, or another chip-level multiprocessor and the like that combines two or more independent cores on a single die. [0063] The methods and systems described herein may be deployed in part or in whole through a machine that executes computer readable instructions on a server, client, firewall, gateway, hub, router, or other such computer and/or networking hardware. The computer readable instructions may be associated with a server that may include a file server, print server, domain server, internet server, intranet server and other variants such as secondary server, host server, distributed server and the like. The server may include one or more of memories, processors, computer readable transitory and/or non-transitory media, storage media, ports (physical and virtual), communication devices, and interfaces capable of accessing other servers, clients, machines, and devices through a wired or a wireless medium, and the like. The methods, programs, or codes as described herein and elsewhere may be executed by the server. In addition, other devices required for execution of methods as described in this application may be considered as a part of the infrastructure associated with the server. [0064] The server may provide an interface to other devices including, without limitation, clients, other servers, printers, database servers, print servers, file servers, communication servers, distributed servers, and the like. Additionally, this coupling and/or connection may facilitate remote execution of instructions across the network.
The networking of some or all of these devices may facilitate parallel processing of program code, instructions, and/or programs at one or more locations without deviating from the scope of the disclosure. In addition, all the devices attached to the server through an interface may include at least one storage medium capable of storing methods, program code, instructions, and/or programs. A central repository may provide program instructions to be executed on different devices. In this implementation, the remote repository may act as a storage medium for methods, program code, instructions, and/or programs. [0065] The methods, program code, instructions, and/or programs may be associated with a client that may include a file client, print client, domain client, internet client, intranet client and other variants such as secondary client, host client, distributed client and the like. The client may include one or more of memories, processors, computer readable transitory and/or non-transitory media, storage media, ports (physical and virtual), communication devices, and interfaces capable of accessing other clients, servers, machines, and devices through a wired or a wireless medium, and the like. The methods, program code, instructions, and/or programs as described herein and elsewhere may be executed by the client. In addition, other devices required for execution of methods as described in this application may be considered as a part of the infrastructure associated with the client. [0066] The client may provide an interface to other devices including, without limitation, servers, other clients, printers, database servers, print servers, file servers, communication servers, distributed servers, and the like. Additionally, this coupling and/or connection may facilitate remote execution of methods, program code, instructions, and/or programs across the network. The networking of some or all of these devices may facilitate parallel processing of methods, program code, instructions, and/or programs at one or more locations without deviating from the scope of the disclosure. In addition, all the devices attached to the client through an interface may include at least one storage medium capable of storing methods, program code, instructions, and/or programs. A central repository may provide program instructions to be executed on different devices. In this implementation, the remote repository may act as a storage medium for methods, program code, instructions, and/or programs. [0067] The methods and systems described herein may be deployed in part or in whole through network infrastructures. The network infrastructure may include elements such as computing devices, servers, routers, hubs, firewalls, clients, personal computers, communication devices, routing devices and other active and passive devices, modules, and/or components as known in the art. The computing and/or non-computing device(s) associated with the network infrastructure may include, apart from other components, a storage medium such as flash memory, buffer, stack, RAM, ROM and the like. The methods, program code, instructions, and/or programs described herein and elsewhere may be executed by one or more of the network infrastructural elements. [0068] The methods, program code, instructions, and/or programs described herein and elsewhere may be implemented on a cellular network having multiple cells. 
The cellular network may be either a frequency division multiple access (FDMA) network or a code division multiple access (CDMA) network. The cellular network may include mobile devices, cell sites, base stations, repeaters, antennas, towers, and the like. [0069] The methods, program code, instructions, and/or programs described herein and elsewhere may be implemented on or through mobile devices. The mobile devices may include navigation devices, cell phones, mobile phones, mobile personal digital assistants, laptops, palmtops, netbooks, pagers, electronic book readers, music players and the like. These devices may include, apart from other components, a storage medium such as a flash memory, buffer, RAM, ROM and one or more computing devices. The computing devices associated with mobile devices may be enabled to execute methods, program code, instructions, and/or programs stored thereon. Alternatively, the mobile devices may be configured to execute instructions in collaboration with other devices. The mobile devices may communicate with base stations interfaced with servers and configured to execute methods, program code, instructions, and/or programs. The mobile devices may communicate on a peer-to-peer network, mesh network, or other communications network. The methods, program code, instructions, and/or programs may be stored on the storage medium associated with the server and executed by a computing device embedded within the server. The base station may include a computing device and a storage medium. The storage medium may store methods, program code, instructions, and/or programs executed by the computing devices associated with the base station. [0070] The methods, program code, instructions, and/or programs may be stored and/or accessed on machine readable transitory and/or non-transitory media that may include: computer components, devices, and recording media that retain digital data used for computing for some interval of time; semiconductor storage known as random access memory (RAM); mass storage typically for more permanent storage, such as optical discs, forms of magnetic storage like hard disks, tapes, drums, cards and other types; processor registers, cache memory, volatile memory, non-volatile memory; optical storage such as CD, DVD; removable media such as flash memory (e.g. USB sticks or keys), floppy disks, magnetic tape, paper tape, punch cards, standalone RAM disks, Zip drives, removable mass storage, off-line, and the like; other computer memory such as dynamic memory, static memory, read/write storage, mutable storage, read only, random access, sequential access, location addressable, file addressable, content addressable, network attached storage, storage area network, bar codes, magnetic ink, and the like. [0071] Certain operations described herein include interpreting, receiving, and/or determining one or more values, parameters, inputs, data, or other information (“receiving data”). Operations to receive data include, without limitation: receiving data via a user input; receiving data over a network of any type; reading a data value from a memory location in communication with the receiving device; utilizing a default value as a received data value; estimating, calculating, or deriving a data value based on other information available to the receiving device; and/or updating any of these in response to a later received data value.
In certain embodiments, a data value may be received by a first operation, and later updated by a second operation, as part of receiving a data value. For example, when communications are down, intermittent, or interrupted, a first receiving operation may be performed, and when communications are restored, an updated receiving operation may be performed. [0072] Certain logical groupings of operations herein, for example methods or procedures of the current disclosure, are provided to illustrate aspects of the present disclosure. Operations described herein are schematically described and/or depicted, and operations may be combined, divided, re-ordered, added, or removed in a manner consistent with the disclosure herein. It is understood that the context of an operational description may require an ordering for one or more operations, and/or an order for one or more operations may be explicitly disclosed, but the order of operations should be understood broadly, where any equivalent grouping of operations to provide an equivalent outcome of operations is specifically contemplated herein. For example, if a value is used in one operational step, the determining of the value may be required before that operational step in certain contexts, e.g., where the time delay of data for an operation to achieve a certain effect is important, but may not be required before that operational step in other contexts, e.g., where usage of the value from a previous execution cycle of the operations would be sufficient for those purposes. Accordingly, in certain embodiments an order of operations and grouping of operations as described is explicitly contemplated herein, and in certain embodiments re-ordering, subdivision, and/or different grouping of operations is explicitly contemplated herein. [0073] The methods and systems described herein may transform physical and/or intangible items from one state to another. The methods and systems described herein may also transform data representing physical and/or intangible items from one state to another. [0074] The methods and/or processes described above, and steps thereof, may be realized in hardware, program code, instructions, and/or programs or any combination of hardware and methods, program code, instructions, and/or programs suitable for a particular application. The hardware may include a dedicated computing device or specific computing device, a particular aspect or component of a specific computing device, and/or an arrangement of hardware components and/or logical circuits to perform one or more of the operations of a method and/or system. The processes may be realized in one or more microprocessors, microcontrollers, embedded microcontrollers, programmable digital signal processors or other programmable device, along with internal and/or external memory. The processes may also, or instead, be embodied in an application specific integrated circuit, a programmable gate array, programmable array logic, or any other device or combination of devices that may be configured to process electronic signals. It will further be appreciated that one or more of the processes may be realized as a computer executable code capable of being executed on a machine readable medium.
[0075] The computer executable code may be created using a structured programming language such as C, an object oriented programming language such as C++, or any other high-level or low-level programming language (including assembly languages, hardware description languages, and database programming languages and technologies) that may be stored, compiled or interpreted to run on one of the above devices, as well as heterogeneous combinations of processors, processor architectures, or combinations of different hardware and computer readable instructions, or any other machine capable of executing program instructions. [0076] Thus, in one aspect, each method described above and combinations thereof may be embodied in computer executable code that, when executing on one or more computing devices, performs the steps thereof. In another aspect, the methods may be embodied in systems that perform the steps thereof, and may be distributed across devices in a number of ways, or all of the functionality may be integrated into a dedicated, standalone device or other hardware. In another aspect, the means for performing the steps associated with the processes described above may include any of the hardware and/or computer readable instructions described above. All such permutations and combinations are intended to fall within the scope of the present disclosure. [0077] Accordingly, as disclosed herein, embodiments of the current disclosure may provide for more accurate orthodontic templates, as compared to traditional processes, by reducing the amount of human involvement in processing dental images. Further, some embodiments of the current disclosure may improve the efficiency of a medical practice by reducing and/or eliminating the amount of time a medical practitioner needs to spend generating an orthodontic template. Some embodiments of the current disclosure may provide for the generation of orthodontic templates and/or instruction data sets as a service. For example, an orthodontic office may subscribe to the service, provide dental images, either electronically or physically, to an entity operating the controller 14, wherein the entity processes the dental images, as described herein, and sends an orthodontic template and/or instruction data sets back to the orthodontic office (a minimal sketch of such an exchange is shown following this section). [0078] While this disclosure has been set forth in connection with certain embodiments shown and described in detail, various modifications and improvements thereon will become readily apparent to those skilled in the art. Accordingly, the spirit and scope of the present disclosure is not to be limited by the foregoing examples, but is to be understood in the broadest sense allowable by law.
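By way of non-limiting illustration of the service model of paragraph [0077], the sketch below submits a set of dental images to a hypothetical templating endpoint using the Python requests library. The URL, form-field names, and response format are illustrative assumptions only and are not elements of this disclosure.

```python
# Illustrative sketch only: submitting dental images to a hypothetical
# cloud templating service and retrieving an instruction data set. The
# endpoint URL and response schema are assumptions, not disclosed values.
import requests

SERVICE_URL = "https://example.invalid/api/v1/templates"  # hypothetical

def request_template(image_paths, template_id="standard-12"):
    """Upload images and return the service's instruction data set (JSON)."""
    files = [("images", open(p, "rb")) for p in image_paths]
    try:
        resp = requests.post(SERVICE_URL, files=files,
                             data={"template": template_id})
        resp.raise_for_status()
        return resp.json()
    finally:
        for _, fh in files:
            fh.close()

# Example usage (assumes the hypothetical service above is reachable):
# instructions = request_template(["frontal.jpg", "pano.png"])
```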

Claims

What is Claimed is: 1. A system comprising: an imaging device structured to acquire one or more dental images of a patient; and a controller structured to: interpret the one or more dental images; and generate an instruction data set for producing an orthodontic template based at least in part on the one or more dental images.
2. The system of claim 1, wherein the controller is further structured to generate the instruction data set by tagging each of the one or more dental images with a dental image type label.
3. The system of claim 2, wherein the dental image type label corresponds to at least one of: a Profile; a Profile-Smile; a Frontal; a Frontal-Smile; an Upper-Occlusal; a Lower-Occlusal; an Intraoral-Right; an Intraoral-Frontal; an Intraoral-Left; a Lateral Ceph; a PA Ceph; or a Panoramic X-Ray.
4. The system of claim 3, wherein the controller is further structured to determine the dental image type label for each of the one or more dental images via a neural network.
5. The system of claim 1, wherein the controller is further structured to generate the instruction data set by determining at least one of a cropping shape or a transformation for each of the one or more dental images.
6. The system of claim 5, wherein the controller is structured to generate the instruction data set by determining a cropping shape for each of the one or more dental images and is further structured to determine the cropping shape via a neural network.
7. The system of claim 6, wherein the neural network is trained to identify one or more reference points in each of the one or more dental images, wherein the controller determines the cropping shape based at least in part on the one or more reference points.
8. The system of claim 7, wherein the one or more reference points are based at least in part on at least one of: a mouth; or a face.
9. The system of claim 1, wherein the controller is further structured to generate the instruction data set based at least in part on: a first machine learning model structured to identify a dental image type for each of the one or more dental images; and a second machine learning model structured to identify a cropping shape for each of the one or more dental images.
10. A system comprising: an imaging device structured to acquire one or more dental images of a patient; and a controller structured to: receive the one or more dental images; and generate an orthodontic template based at least in part on the one or more dental images.
11. The system of claim 10, wherein the controller is further structured to generate the orthodontic template by tagging each of the one or more dental images with a dental image type label.
12. The system of claim 11, wherein the dental image type label corresponds to at least one of: a Profile; a Profile-Smile; a Frontal; a Frontal-Smile; an Upper-Occlusal; a Lower-Occlusal; an Intraoral-Right; an Intraoral-Frontal; an Intraoral-Left; a Lateral Ceph; a PA Ceph; or a Panoramic X-Ray.
13. The system of claim 10, wherein the controller is further structured to: pair physical materials for use with the generated orthodontic template.
14. A method comprising: interpreting a plurality of dental images of a patient; tagging each of the plurality of dental images with a dental image type; determining at least one of a cropping shape or a transformation for each of the plurality of dental images; and generating an instruction data set for producing an orthodontic template based at least in part on the plurality of tagged dental images and at least one of the cropping shapes or transformations.
15. The method of claim 14 further comprising at least one of: electrically transmitting the instruction data set over a network; or printing a hardcopy of the instruction data set and shipping the hardcopy.
16. The method of claim 14, wherein tagging each of the plurality of dental images with a dental image type comprises processing each of the plurality of dental images with a neural network.
17. The method of claim 16, wherein the dental image type corresponds to at least one of: a Profile; a Profile-Smile; a Frontal; a Frontal-Smile; an Upper-Occlusal; a Lower-Occlusal; an Intraoral-Right; an Intraoral-Frontal; an Intraoral-Left; a Lateral Ceph; a PA Ceph; or a Panoramic X-Ray.
18. The method of claim 14, wherein the method includes determining a cropping shape for each of the plurality of dental images and further comprises: identifying, via a neural network, one or more reference points in each of the plurality of dental images.
19. The method of claim 18, wherein the one or more reference points are based at least in part on at least one of: a mouth; or a face.
20. A method comprising: interpreting a plurality of dental images of a patient; tagging each of the plurality of dental images with a dental image type; at least one of cropping or transforming each of the plurality of tagged dental images; and generating an orthodontic template based at least in part on the one or more dental images.
21. The method of claim 20 further comprising at least one of: electrically transmitting the orthodontic template over a network; or printing a hardcopy of the orthodontic template and shipping the hardcopy.
22. The method of claim 20, wherein tagging each of the plurality of dental images with a dental image type comprises processing each of the plurality of dental images with a neural network.
23. The method of claim 22, wherein the dental image type corresponds to at least one of: a Profile; a Profile-Smile; a Frontal; a Frontal-Smile; an Upper-Occlusal; a Lower-Occlusal; an Intraoral-Right; an Intraoral-Frontal; an Intraoral-Left; a Lateral Ceph; a PA Ceph; or a Panoramic X-Ray.
24. The method of claim 20 further comprising: pairing physical materials for use with the generated orthodontic template.
25. An apparatus comprising: a dental image interpretation circuit structured to interpret one or more dental images of a patient; a tagging circuit structured to tag each of the one or more dental images with a dental image type; at least one of: a cropping circuit structured to determine a cropping shape value for each of the one or more dental images, or a transformation circuit structured to determine a transformation value for each of the one or more dental images; an orthodontic data generation circuit structured to generate, based at least in part on the one or more tagged dental images and at least one of the corresponding one or more cropping shape values or transformation values, orthodontic data that defines, in part, an orthodontic template; and an orthodontic data provisioning circuit structured to transmit the orthodontic data.
26. The apparatus of claim 25, wherein the tagging circuit includes a machine learning model structured to identify the dental image type for each of the one or more dental images.
27. The apparatus of claim 26, wherein the dental image type corresponds to at least one of: a Profile; a Profile-Smile; a Frontal; a Frontal-Smile; an Upper-Occlusal; a Lower-Occlusal; an Intraoral-Right; an Intraoral-Frontal; an Intraoral-Left; a Lateral Ceph; a PA Ceph; or a Panoramic X-Ray.
28. The apparatus of claim 26, wherein the apparatus includes the cropping circuit, and the cropping circuit includes a machine learning model structured to identify one or more reference points in each of the one or more dental images.
29. The apparatus of claim 28, wherein the one or more reference points include at least one of: a mouth; or a face.
30. The apparatus of claim 25 further comprising: a pairing circuit structured to determine one or more physical materials for use with the generated orthodontic template.
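For illustration only: the apparatus of claims 25-30 is framed as cooperating circuits. One way to mirror that decomposition in software is as small single-purpose components wired together; the class and field names below follow the claim language but are otherwise the editor's assumptions.

```python
# Illustrative sketch only: the circuit structure of claims 25-30 mirrored
# as composed single-purpose components. Names follow the claim language
# but are otherwise assumptions.
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class TemplatingApparatus:
    interpret: Callable[[List[str]], List[Any]]   # dental image interpretation circuit
    tag: Callable[[Any], str]                      # tagging circuit
    crop: Callable[[Any], Any]                     # cropping circuit
    generate: Callable[[Dict[str, Any]], bytes]    # orthodontic data generation circuit
    transmit: Callable[[bytes], None]              # orthodontic data provisioning circuit

    def run(self, paths: List[str]) -> None:
        images = self.interpret(paths)
        # If two images receive the same tag, the later one wins in this sketch.
        tagged = {self.tag(img): self.crop(img) for img in images}
        self.transmit(self.generate(tagged))
```

Keeping each circuit behind a plain callable leaves the tagging, cropping, and generation stages independently testable and swappable, which matches how the claims treat the cropping and transformation circuits as interchangeable alternatives.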
PCT/US2021/033381 2020-05-22 2021-05-20 Systems and methods for generating orthodontic templates WO2021236918A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063028756P 2020-05-22 2020-05-22
US63/028,756 2020-05-22

Publications (1)

Publication Number Publication Date
WO2021236918A1 (en) 2021-11-25

Family

ID=78707641

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/033381 WO2021236918A1 (en) 2020-05-22 2021-05-20 Systems and methods for generating orthodontic templates

Country Status (1)

Country Link
WO (1) WO2021236918A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070254257A1 (en) * 2002-09-17 2007-11-01 Orametrix, Inc. Tooth templates for bracket positioning and other uses
US20150132708A1 (en) * 2004-02-27 2015-05-14 Align Technology, Inc. Method and system for providing dynamic orthodontic assessment and treatment profiles
US20190180443A1 (en) * 2017-11-07 2019-06-13 Align Technology, Inc. Deep learning for tooth detection and evaluation
US20190313963A1 (en) * 2018-04-17 2019-10-17 VideaHealth, Inc. Dental Image Feature Detection
US20190333627A1 (en) * 2018-04-25 2019-10-31 Sota Precision Optics, Inc. Dental imaging system utilizing artificial intelligence


Legal Events

121 EP: The EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21809069; Country of ref document: EP; Kind code of ref document: A1)

NENP: Non-entry into the national phase (Ref country code: DE)

122 EP: PCT application non-entry in European phase (Ref document number: 21809069; Country of ref document: EP; Kind code of ref document: A1)