WO2024097777A2 - Systems for generating, storing, and using augmented digital tooth libraries design - Google Patents


Info

Publication number
WO2024097777A2
Authority
WO
WIPO (PCT)
Prior art keywords
patient
tooth
computer system
color
zone
Prior art date
Application number
PCT/US2023/078382
Other languages
French (fr)
Other versions
WO2024097777A3 (en)
Inventor
Michael C. Marshall
John Michael MADDEN
Stephen B. Siegfried Floe
Original Assignee
Voyager Dental, Inc.
Priority date
Filing date
Publication date
Application filed by Voyager Dental, Inc.
Publication of WO2024097777A2
Publication of WO2024097777A3

Classifications

    • G: PHYSICS
        • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
            • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
                • G16H30/00: ICT specially adapted for the handling or processing of medical images
                    • G16H30/20: for handling medical images, e.g. DICOM, HL7 or PACS
                    • G16H30/40: for processing medical images, e.g. editing
                • G16H40/00: ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
                    • G16H40/60: for the operation of medical equipment or devices
                        • G16H40/67: for remote operation
                • G16H50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
                    • G16H50/20: for computer-aided diagnosis, e.g. based on medical expert systems
                    • G16H50/50: for simulation or modelling of medical disorders
                    • G16H50/70: for mining of medical data, e.g. analysing previous cases of other patients

Definitions

  • This document generally describes devices, systems, and methods related to the generation, storage, and use of augmented digital tooth libraries, for example, which can include computer-based modeling of a patient’s teeth and gingiva to generate a digital denture design for the patient with augmented features, such as different layers of the patient’s teeth and gingiva along with corresponding color, transparency, and/or texture data.
  • A denture is a dental prosthesis made to replace missing teeth.
  • Dentures are often supported by surrounding soft and hard tissue of a patient’s oral cavity.
  • A denture may be designed to fit over and be supported by a patient’s gum tissue.
  • Dentures may include a denture base region that is formed from an acrylic material and colored to appear similar to gum tissue.
  • Denture teeth formed from acrylic or other materials may be secured to the denture base. The denture teeth may also be colored to appear similar to real teeth.
  • Dentures may be fixed or removable, and implant-supported or non-implant-supported. Additionally, dentures may be complete (e.g., replacing the teeth of an entire dental arch) or partial (e.g., replacing less than all of the teeth of the dental arch).
  • A fixed denture is not intended to be removed by a patient during ordinary use. Typically, a fixed denture is placed by a care provider, such as a dentist or prosthodontist, and is removed, if necessary, by the care provider. A fixed denture may, for example, be secured to one or more dental implants.
  • A removable denture is made such that a patient may (and usually should) remove the denture during ordinary use. For example, the patient may remove the denture on a daily basis for overnight cleaning.
  • Non-implant-supported, removable dentures are often held in place by a suction fit between the bottom of the denture and the patient’s gum tissue.
  • The bases of removable dentures are generally fabricated to closely follow the shape of the patient’s gum tissue. When the base is pressed against the patient’s gum tissue, air may be forced out, creating a low-pressure suction seal between the denture base and the patient’s tissue.
  • Partial removable dentures may include clasps that mechanically secure the denture to the patient’s remaining teeth.
  • Implant-supported dentures are designed to couple to dental implants that have been implanted in the patient’s mouth. Implant-supported dentures may be fixed or removable. Some implant-supported dentures may be removable by the patient to allow for cleaning.
  • When properly made and fitted, dentures may provide numerous benefits to the patient. These benefits include improved mastication (chewing), as the denture replaces edentulous (gum tissue) regions with denture teeth. Additional benefits include improved aesthetics when the patient’s mouth is open, due to the presence of denture teeth, and when the patient’s mouth is closed, due to the cheek and lip support provided by the denture structure. Another benefit of dentures is improved pronunciation, as properly sized front teeth are important for making several speech sounds.
  • This document generally describes technology for augmenting digital tooth libraries with color, texture, transparency/translucency, and/or layers data.
  • The augmented digital tooth libraries can then be retrieved during runtime for designing dental appliances for patients, such as dentures, and fabricating the dental appliances.
  • The disclosed technology can provide anatomical tooth libraries in which teeth are already arranged in bridges and/or blocks of two or more teeth, which can save design time and reduce use of processing power and other compute resources.
  • Such tooth libraries can also be used to prevent unnecessary alterations, steps, and/or customization operations during the design process of dental appliances.
  • The disclosed technology can also provide anatomical libraries having a tooth and gingiva together in the library, which can likewise reduce design time and the use of processing power and other processing resources.
  • The disclosed technology can also be used to generate a standalone gingival library.
  • The disclosed technology can be used to generate an anatomical library in which a tooth or teeth have multiple layers as well as internal segments, which can reflect individual tooth and/or root morphology to provide color depth and accuracy in 3D-printed or other fabricated dental appliances.
  • The fabricated dental appliances can reflect a natural dentition, thereby improving patient experience and appearance.
  • A color dental library can also be generated and maintained, such as libraries for teeth and/or gingiva.
  • The disclosed technology can provide for generation of dental libraries in which teeth (e.g., groups of teeth and/or individual teeth) have layers of color, texture, and/or transparency properties to provide realistic color depth that also travels with the gingiva, which may likewise be fully layered.
  • The components of the library (e.g., tooth and gingiva) and their corresponding layers can be moved and adjusted in parametric fashion and/or by ratios in a coordinated fashion in order to preserve the overall appearance of the library, regardless of how the library may be scaled and/or altered (e.g., to fit different arch forms).
  • The disclosed technology can be used to generate a full-color library with layered teeth and gingiva in which upper and lower tooth arrangements may already be completed and in desired or preferred occlusion with each other.
  • The completed unit/arrangement of teeth and gingiva can be scaled without compromising the occlusion or function.
  • This completed unit/arrangement can also be quickly, easily, and efficiently adapted to a tissue surface to complete a dental appliance design in a single step.
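The coordinated scaling described above can be illustrated with a minimal sketch (Python; the data layout and function names are hypothetical, as the document does not specify an implementation). Applying one uniform scale factor to every component mesh about a common origin preserves the relative geometry of teeth and gingiva, and therefore the occlusion:

```python
# Hypothetical sketch: scale a completed teeth-and-gingiva arrangement as one
# unit so that inter-component ratios (and thus occlusion) are preserved.
def scale_arrangement(components, factor, origin=(0.0, 0.0, 0.0)):
    """components: {name: list of (x, y, z) vertices}; returns a scaled copy."""
    ox, oy, oz = origin
    scaled = {}
    for name, verts in components.items():
        # Every mesh uses the same factor about the same origin, so the
        # relative positions of teeth and gingiva are unchanged.
        scaled[name] = [
            (ox + (x - ox) * factor,
             oy + (y - oy) * factor,
             oz + (z - oz) * factor)
            for (x, y, z) in verts
        ]
    return scaled

unit = {"tooth_8": [(0, 0, 0), (1, 0, 0)], "gingiva": [(0, -1, 0)]}
bigger = scale_arrangement(unit, 1.1)
```

Because the same transform is applied to all components, the arrangement can be fit to different arch sizes in one step without re-setting the occlusion.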
  • The disclosed techniques can reduce the amount of time needed to design dental appliances and provide more efficient use of processing power and other compute resources.
  • The disclosed technology can also be used to generate and maintain any type of tooth library described herein that is in grey-scale or otherwise does not include color data (e.g., the tooth library has no color or is otherwise colorless).
  • This document also describes technology for generating a digital dental model for a patient with color data corresponding to teeth and gingiva in the patient’s mouth. More specifically, the disclosed technology provides for receiving a scan of the patient’s mouth, identifying color data for teeth and gingiva in the scan, and assigning the color data in layers and/or grades to the patient’s teeth and gingiva represented in a digital dental model of the patient’s mouth. The digital dental model with the assigned color data can then be used to fabricate dentures for the patient.
  • Although the disclosed technology is described for assigning color data to the patient’s teeth and gingiva in the digital dental model, the disclosed technology may similarly be used to assign texture data to the patient’s teeth and gingiva in the digital dental model.
  • Transferring accurate color from the scan of the patient’s mouth to the dentures is an important aspect of data collection.
  • Conventionally, a care provider, such as a dentist or dental assistant, uses a shade guide and their own perception of color to choose the best-matching color(s) for a denture design.
  • As a result, the color of manufactured dentures may not match the colors of other teeth and/or gingiva in the patient’s mouth.
  • The manufactured dentures may look fake or unrealistic, especially in certain lighting conditions.
  • The disclosed techniques, therefore, provide for accurately identifying various colors of the patient’s teeth and gingiva from the scan of the patient’s mouth and blending or otherwise adjusting those colors across one or more layers that are defined for the teeth and gingiva to generate a realistic-looking digital denture design.
  • Such color data can be provided, with the digital denture design, to a rapid fabrication machine, 3D printer, or other multi-layer manufacturing system/device to fabricate the dentures for the patient.
  • One or more implementations described herein can include a method for modeling a patient’s mouth with color data, the method including: receiving, by a computer system, oral scan data for a patient, the oral scan data including at least one of (i) a dental impression generated by a dental impression station, (ii) image data of the patient’s mouth generated by an image capture system, and (iii) motion data of the patient’s jaw movement generated by a motion capture system; identifying, by the computer system and for at least one tooth represented in the oral scan data for the patient, a first zone, a second zone, and a third zone, in which the first, second, and third zones may be non-overlapping zones; identifying, by the computer system, a statistical color value for each identified zone for the at least one tooth; and generating, by the computer system, a digital dental model for the patient based on mapping the identified statistical color value for each identified zone for the at least one tooth to corresponding zones for at least one tooth represented in the digital dental model.
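A minimal sketch of the zone-identification and per-zone statistical color steps recited above (Python; the point layout and the choice of mean as the statistic are illustrative assumptions) splits a tooth's colored surface points into three non-overlapping vertical thirds and computes a mean RGB value per zone:

```python
# Sketch (hypothetical data layout): split a tooth's colored points into
# three non-overlapping vertical thirds (cervical, middle, incisal) and
# compute a statistical (here, mean) color value per zone.
def zone_colors(points):
    """points: list of (height, (r, g, b)). Returns mean RGB per third."""
    heights = [h for h, _ in points]
    lo, hi = min(heights), max(heights)
    span = (hi - lo) or 1.0
    zones = {"cervical": [], "middle": [], "incisal": []}
    for h, rgb in points:
        t = (h - lo) / span  # normalized height along the tooth axis
        if t < 1 / 3:
            zones["cervical"].append(rgb)
        elif t < 2 / 3:
            zones["middle"].append(rgb)
        else:
            zones["incisal"].append(rgb)
    def mean(rgbs):
        n = len(rgbs) or 1
        return tuple(sum(c[i] for c in rgbs) / n for i in range(3))
    return {name: mean(rgbs) for name, rgbs in zones.items()}

pts = [(0.0, (200, 120, 100)), (0.5, (230, 220, 200)), (1.0, (240, 240, 250))]
res = zone_colors(pts)
```

The per-zone values can then be mapped onto the corresponding zones of the library tooth in the digital dental model.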
  • the implementations described herein can optionally include one or more of the following features.
  • the method can also include generating, by the computer system, a digital denture model for the patient based on the digital dental model having the mapped color values.
  • the method can also include transmitting, by the computer system to a rapid fabrication machine, instructions that, when executed by the rapid fabrication machine, cause the rapid fabrication machine to manufacture dentures based on the digital denture model for the patient.
  • the instructions may include at least one data file having information about the mapped color values for printing the dentures in corresponding dental colors.
  • the method can include identifying, by the computer system and for each identified zone for the at least one tooth, at least one layer, and identifying, by the computer system, a statistical color value for each identified zone for the at least one tooth can include identifying a statistical color value for the at least one layer for the identified zone.
  • the method can include assigning, by the computer system, the statistical color value for each identified zone to a predetermined dental color.
  • the method can also include adjusting, by the computer system, the statistical color value for at least one of the identified zones for the at least one tooth. Adjusting, by the computer system, the statistical color value may include blending the statistical color value across at least two of the identified zones.
  • the method can also include simulating, by the computer system, ray-tracing of the digital dental model for the patient to identify deviations in at least one of the statistical color values that exceed a threshold color value.
  • the method can also include adjusting, by the computer system, the at least one of the statistical color values based on the identified deviation.
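The blending and deviation-adjustment steps recited above can be as simple as linear interpolation between the statistical colors of adjacent zones, with a per-channel threshold check to flag colors that deviate too far from a target. A sketch under those assumptions (the linear model and the threshold rule are illustrative choices, not taken from the document):

```python
# Sketch: linearly blend two adjacent zone colors across a transition band,
# and flag colors whose per-channel deviation from a target exceeds a
# threshold (both the linear model and the threshold rule are assumptions).
def blend(c1, c2, t):
    """Linearly interpolate two RGB colors; t in [0, 1]."""
    return tuple(a + (b - a) * t for a, b in zip(c1, c2))

def exceeds_threshold(rendered, target, threshold):
    """True if any channel deviates from the target by more than threshold."""
    return any(abs(a - b) > threshold for a, b in zip(rendered, target))

cervical = (200.0, 120.0, 100.0)
middle = (230.0, 220.0, 200.0)
# Sample points in the transition band receive intermediate colors,
# so the finished surface shows no hard seam between zones.
seam = [blend(cervical, middle, t) for t in (0.0, 0.5, 1.0)]
needs_adjustment = exceeds_threshold(seam[1], middle, 25.0)
```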
  • the method can include identifying, by the computer system and for the at least one tooth, a tooth type, retrieving, by the computer system from a data store, texture data associated with the identified tooth type, and assigning, by the computer system, the texture data to at least one zone for the at least one tooth.
  • the at least one zone for the at least one tooth can be the second zone.
  • Identifying, by the computer system and for at least one tooth represented in the oral scan data for the patient, a first zone, a second zone, and a third zone can include identifying annotations for library teeth in the oral scan data for the patient.
  • Identifying, by the computer system, a statistical color value for each identified zone for the at least one tooth can include: mapping a first color value representing a band of chroma to the first zone.
  • the first zone can represent a cervical third of the at least one tooth and the first color value can be mapped to the first zone using annotations in the oral scan data that indicate halo sockets as reference points.
  • the method can also include adjusting a dominance level of the first color value to exceed a threshold level of dominance.
  • identifying, by the computer system, a statistical color value for each identified zone for the at least one tooth may include mapping a second color value to the second zone, the second zone representing a middle third of the at least one tooth.
  • the method can also include adjusting a level of opacity of the mapped second color value to exceed a threshold level of opacity.
  • the second color value can be at least one of a yellow shade and a white shade.
  • identifying, by the computer system, a statistical color value for each identified zone for the at least one tooth may include: mapping a third color value to the third zone, the third zone representing an incisal third of the at least one tooth.
  • the method can also include adjusting a level of translucency of the third color value to exceed a threshold level of translucency.
  • Mapping a third color value to the third zone may include mapping the third color value as a color band across an annotated incisal third of the at least one tooth in the oral scan data.
  • the incisal third of the at least one tooth can be annotated differently based on a tooth type.
  • Mapping a third color value to the third zone may include mapping the third color value onto an annotated incisal third of the at least one tooth based on landmarks in the oral scan data that indicate IP contacts.
  • the method can include adjusting, by the computer system, a gradient of the statistical color values across the first, second, and third zones.
  • the method can include performing, by the computer system, a color calibration process on the oral scan data.
  • the method can also include identifying, by the computer system, at least one zone across multiple teeth in the oral scan data, the multiple teeth in the oral scan data being a threshold distance from each other along a dental arch of the patient.
  • the statistical color value, for each zone can be an average of color values identified for the zone.
  • the statistical color value, for each zone can be a mean color value determined from a group of color values identified for the zone.
  • the statistical color value, for each zone can be a summation of a group of color values identified for the zone.
  • the method may also include identifying, by the computer system and for a gingiva represented in the oral scan data for the patient, at least one zone, and identifying, by the computer system, a statistical color value for the at least one zone for the gingiva.
  • the method can include identifying, by the computer system, a texture value for the at least one zone for the gingiva.
  • the at least one zone may include a first zone, a second zone, and a third zone.
  • the first, second, and third zones for the gingiva may be non-overlapping zones.
  • the method can also include mapping, by the computer system, at least one of a first color and a first texture to an annotated portion of the gingiva in the oral scan data that corresponds to socket halos.
  • the first color can be a top surface color for the gingiva and the first texture can be a top surface texture for the gingiva.
  • the method may include mapping, by the computer system, at least one of a second color and a second texture to an annotated portion of the gingiva in the oral scan data that corresponds to each root eminence in the gingiva.
  • the second color can be a middle surface color for the gingiva and the second texture can be a middle surface texture for the gingiva.
  • the method may include mapping, by the computer system, at least one of a third color and a third texture to an annotated portion of the gingiva in the oral scan data that corresponds to portions of the gingiva between annotations.
  • the third color can be a bottom surface color for the gingiva and the third texture can be a bottom surface texture for the gingiva.
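The gingiva mapping described above, from annotated regions (socket halos, root eminences, and the areas between annotations) to top, middle, and bottom surface colors and textures, amounts to a lookup table. A sketch with illustrative placeholder region names and values (none of which come from the document):

```python
# Sketch of the annotation-to-surface mapping described above; the region
# keys and the color/texture values are illustrative placeholders.
GINGIVA_MAP = {
    "socket_halo":   {"color": "top_surface_pink",    "texture": "stippled"},
    "root_eminence": {"color": "middle_surface_pink", "texture": "smooth"},
    "between":       {"color": "bottom_surface_pink", "texture": "smooth"},
}

def paint_region(annotation):
    """Return the (color, texture) assignment for an annotated gingiva region."""
    return GINGIVA_MAP[annotation]
```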
  • One or more embodiments described herein can include a method for modeling a patient’s mouth with color data, the method including: receiving, by a computer system, oral scan data for a patient, the oral scan data including at least one of (i) a dental impression generated by a dental impression station, (ii) image data of the patient’s mouth generated by an image capture system, and (iii) motion data of the patient’s jaw movement generated by a motion capture system; identifying, by the computer system and for at least a portion of teeth represented in the oral scan data for the patient, at least one zone; identifying, by the computer system, a statistical color value for the at least one zone for the portion of teeth; mapping, by the computer system, the statistical color value to a predetermined dental color value for teeth; generating, by the computer system, a digital dental model for the patient based at least in part on the oral scan data for the patient and the mapped predetermined dental color value for the teeth; and transmitting, by the computer system to a rapid fabrication machine, data representative of the
  • the method can optionally include any one or more of the abovementioned features.
  • One or more embodiments described herein can include a method for modeling a patient’s mouth with color data, the method including: receiving, by a computer system, oral scan data for a patient, the oral scan data including at least one of (i) a dental impression generated by a dental impression station, (ii) image data of the patient’s mouth generated by an image capture system, and (iii) motion data of the patient’s jaw movement generated by a motion capture system; identifying, by the computer system and for at least a portion of gingiva represented in the oral scan data for the patient, at least one zone; identifying, by the computer system, a statistical color value for the at least one zone for the portion of gingiva; mapping, by the computer system, the statistical color value to a predetermined dental color value for gingiva; generating, by the computer system, a digital dental model for the patient based at least in part on the oral scan data for the patient and the mapped predetermined dental color value for the gingiva; and transmitting, by the computer system to a rapid fabrication machine,
  • the method can optionally include one or more of the abovementioned features.
  • One or more embodiments described herein include a method for modeling a patient’s mouth with color data, the method including: receiving, by a computer system, oral scan data for a patient, the oral scan data including at least one of (i) a dental impression generated by a dental impression station, (ii) image data of the patient’s mouth generated by an image capture system, and (iii) motion data of the patient’s jaw movement generated by a motion capture system; identifying, by the computer system, (i) at least one zone for at least a portion of teeth represented in the oral scan data for the patient and (ii) at least one zone for at least a portion of gingiva represented in the oral scan data for the patient; identifying, by the computer system, (i) at least one statistical color value for the at least one zone for the portion of teeth and (ii) at least one statistical color value for the at least one zone for the portion of gingiva; generating, by the computer system, a digital dental model for the patient based at least in part on the oral scan data for the
  • One or more of the methods described throughout this document can optionally include the statistical color value being selected from among a plurality of predetermined standard dental colors that are part of a color tooth library.
  • the statistical color can be selected for each of the identified zones for the at least one tooth.
  • the statistical color can be selected as a predetermined standard dental color from among the plurality of predetermined standard dental colors that most closely matches one or more color values derived from the oral scan data.
  • the color tooth library can be specific to one or more individual teeth.
  • the color tooth library can be for a full denture library.
  • the color tooth library can be archived in a data store and retrieved, by the computer system, for generating digital dental models for one or more patients.
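Selecting the predetermined standard dental color that most closely matches a derived color value can be implemented as a nearest-neighbor search over the color tooth library. The sketch below uses squared RGB distance and made-up shade names and values (both are illustrative assumptions; real shade guides define their colors differently):

```python
# Sketch: pick the predetermined standard dental color that most closely
# matches a measured zone color, by squared RGB distance. The shade names
# and RGB values below are hypothetical placeholders.
SHADE_LIBRARY = {
    "A1": (246, 238, 220),
    "A2": (240, 225, 200),
    "B1": (250, 245, 230),
}

def closest_shade(rgb):
    """Return the library shade with the smallest squared RGB distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(SHADE_LIBRARY, key=lambda name: dist2(SHADE_LIBRARY[name], rgb))

closest_shade((241, 226, 201))  # selects "A2" for this sample
```

A perceptual color space would usually give better matches than raw RGB distance, but the selection structure is the same.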
  • One or more of the embodiments described herein can include a method for augmenting a digital tooth library with one or more properties, the method including: accessing, by an augmentation computer system from a non-augmented digital tooth libraries data store, a non-augmented digital tooth library; retrieving, by the augmentation computer system from at least one models and rulesets data store, models and rulesets for tooth layers and gingiva; generating, by the augmentation computer system, 3D layers for the non-augmented digital tooth library based on applying a first portion of the retrieved models and rulesets to the non-augmented digital tooth library, the first portion corresponding to a first subset of the models and rulesets for teeth; generating, by the augmentation computer system, 3D layers for gingiva for the non-augmented digital tooth library based on applying a second portion of the retrieved models and rulesets to the non-augmented digital tooth library, the second portion corresponding to a second subset
  • the method can optionally include one or more of the abovementioned features.
  • the method can optionally include one or more of the following features.
  • generating, by the augmentation computer system, 3D layers for the non-augmented digital tooth library can include generating: an opaque innermost layer, at least one translucent layer that can be larger than the opaque innermost layer and can overlay at least a portion of the opaque innermost layer, and at least one transparent outermost layer that can be larger than the at least one translucent layer and can overlay at least a portion of the opaque innermost layer and the at least one translucent layer.
  • the at least one translucent layer can include a translucency value that may be greater than a translucency value of the opaque innermost layer but may be less than a translucency value of the at least one transparent outermost layer.
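The layer relationships recited above (an opaque innermost layer, a larger translucent layer over it, and a still-larger transparent outermost layer, with translucency increasing outward) can be captured as an ordering constraint on a layer stack. A sketch with illustrative field names and values:

```python
# Sketch of the layer ordering constraint above: both size and translucency
# must increase from the innermost layer outward. Names/values are
# illustrative, not taken from the document.
from dataclasses import dataclass

@dataclass
class ToothLayer:
    name: str
    translucency: float  # 0.0 = fully opaque, 1.0 = fully transparent
    scale: float         # relative size; outer layers overlay inner ones

def valid_stack(layers):
    """Inner-to-outer layers must grow in both size and translucency."""
    return all(a.translucency < b.translucency and a.scale < b.scale
               for a, b in zip(layers, layers[1:]))

stack = [
    ToothLayer("dentin_core", 0.0, 1.00),   # opaque innermost layer
    ToothLayer("enamel_body", 0.4, 1.05),   # translucent middle layer
    ToothLayer("enamel_shell", 0.8, 1.10),  # transparent outermost layer
]
```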
  • One or more embodiments described herein can include a method for generating an augmented teeth setup using augmented digital tooth libraries, the method including: receiving, by a computer system, a tooth library selection and a dental appliance setup from a design computing system; retrieving, by the computer system, the selected tooth library from an augmented digital tooth libraries data store, the selected tooth library including 3D layers for a tooth, 3D layers for corresponding gingiva, and predefined values for the one or more properties for each of the 3D layers, the 3D layers including meshes and the one or more properties including at least one of color, texture, and transparency properties; retrieving, by the computer system and from one or more data stores, one or more rulesets for generating the dental appliance setup; applying, by the computer system, the dental appliance setup to the selected tooth library; adjusting, by the computer system, relative positioning of the meshes of the selected tooth library based on mesh reference points corresponding to the selected tooth library; generating, by the computer system, portions of the gingiva that may be missing from the selected tooth library
  • the method can optionally include one or more of the abovementioned features. In some implementations, the method can optionally include one or more of the following features.
  • the method can also include generating specific fabrication material, color, and printing instructions based on the augmented teeth setup and a fabrication device.
  • the fabrication device can be a 3D printer.
  • the method can include transmitting, by the computer system to the fabrication device, fabrication instructions for execution by the fabrication device in printing a dental appliance based on the augmented teeth setup.
  • the method can include generating, by the computer system, a graphical representation of the augmented teeth setup.
  • the method can include transmitting, by the computer system, the graphical representation to a display device.
  • the display device can be configured to output the graphical representation in a graphical user interface (GUI) at the display device.
  • The disclosed technology improves design automation by reducing the amount of time needed to design dentures for a patient and increasing accuracy in color assignment for the denture design.
  • The disclosed technology can automatically analyze patient oral scan data to identify different colors and/or textures in the patient’s teeth and gingiva, then map the identified colors and/or textures to a digital dental model for the patient. This eliminates potential human error and the time needed to visually inspect the patient’s mouth and guess what colors match the patient’s teeth and gingiva.
  • The disclosed technology can determine color data for the patient’s teeth and gingiva with high accuracy and in little time, thereby resulting in realistic-looking dentures and rapid design-to-manufacturing of the dentures.
  • The disclosed technology provides for generation of realistic-colored and realistic-textured dentures that blend in with other teeth and gingiva in the patient’s mouth.
  • Computer-simulated ray-tracing techniques may be used to identify shading and/or brightness of assigned colors in the digital dental model and accordingly adjust those colors to appear more realistic in similar lighting conditions when the dentures are worn by the patient.
  • The disclosed technology provides for manufacturing realistic-looking dentures by layering and blending the color data in the digital dental model. Colors can be identified for various zones on the patient’s teeth and gingiva. Some zones may be overlapping while other zones may not.
  • The disclosed technology can provide for adjusting various characteristics of colors assigned to overlapping zones such that the colors blend together and make the resulting dentures appear more realistic.
  • FIG. 1A is a conceptual diagram of a system for generating a digital dental model for a patient with color data.
  • FIG. 1B is a schematic block diagram of the system of FIG. 1A for fabricating a motion-based denture based on the digital dental model.
  • FIG. 2 is a schematic block diagram illustrating an example motion capture system for capturing jaw movement.
  • FIG. 3 illustrates a block diagram of an example patient assembly of FIG. 2.
  • FIG. 4 illustrates an example implementation of the clutch of FIG. 3.
  • FIGs. 5A-B are cross-sectional side views that illustrate attachment of a dentition coupling device of the clutch or reference structure of FIG. 2 to a dental implant.
  • FIG. 6 is an example of the motion capture system of FIGs. 1A-B in which two screens are used.
  • FIG. 7 illustrates a top view of a reference structure of FIG. 3 and the imaging system of FIGs. 1A-B.
  • FIG. 8 illustrates a perspective view of the reference structure of FIG. 7 disposed between the screens of the imaging system of FIG. 7.
  • FIG. 9 is a flowchart of an example process for fabricating a denture for a patient.
  • FIGs. 10A-B are a flowchart of a process for determining teeth and gingiva color data for a digital dental model of a patient.
  • FIG. 11A is a flowchart of a process for determining color data for gingiva in a digital dental model of a patient.
  • FIG. 11B illustrates an example digital dental model of the patient in FIG. 11A for determining the gingiva color data.
  • FIGs. 12A-B are a flowchart of a process for determining color data for teeth in a digital dental model of a patient.
  • FIG. 12C illustrates an example digital dental model of the patient in FIGs. 12A-B for determining the teeth color data with horizontal slicing techniques.
  • FIG. 12D illustrates another example digital dental model of the patient in FIGs. 12A-B for determining the teeth color data with vertical slicing techniques.
  • FIG. 13A is an example digital dental model having annotations for defining teeth and gingiva color data.
  • FIG. 13B is an example digital dental model having cutback layers of teeth color data.
  • FIG. 13C is an example digital dental model having color and texture data for teeth and gingiva.
  • FIG. 14 is a schematic diagram that shows an example of a computing device and a mobile computing device.
  • FIG. 15A illustrates an example system for storing digital tooth libraries that are augmented with data such as color, layers, texture, and/or transparency data.
  • FIG. 15B illustrates an example system for generating augmented digital tooth libraries from new and/or preexisting digital tooth libraries.
  • FIG. 15C illustrates an example system for generating fabrication instructions and/or digital teeth graphics using augmented digital tooth libraries.
  • FIG. 16 illustrates example digital teeth with layers having different color, texture, and/or transparency properties.
  • FIG. 17 illustrates example morphology that can be sculpted on inside layers of different types of digital teeth.
  • FIG. 18 illustrates an example tooth with layers having different amounts of opacity, translucency, and/or transparency.
  • FIG. 19 illustrates a tooth cross section having mesh layers that can be exported to a 3D color printer.
  • a digital dental model can be generated for a patient using a variety of data, such as an oral scan.
  • the disclosed technology can process the oral scan to identify one or more colors corresponding to different zones and/or layers of the patient’s teeth and/or gingiva.
  • the identified colors can be mapped in layers to teeth and/or gingiva in the digital dental model for the patient and automatically adjusted to appear more realistic in various lighting or other conditions.
  • the digital dental model with the color mappings can then be used to manufacture dentures for the patient that look realistic and match teeth and/or gingiva already in the patient’s mouth.
  • a motion-based digital denture design system may be used to capture actual motion data from the patient to aid in the design of dentures, dental prosthetics, or other dental appliances (e.g., crowns).
  • the motion data may provide for dentures that fit the patient better than conventionally-designed dentures that may not use actual motion data.
  • teeth of the dentures may be positioned so as to avoid interfering with opposing teeth (e.g., opposing actual teeth or denture teeth) during patient biting motion.
  • the disclosed technology may reduce chair time and number of visits required to fit dentures to the patient.
  • the disclosed technology may also provide for designing dentures that have balanced occlusal support throughout functional movements (e.g., excursive movements).
  • FIG. 1A is a conceptual diagram of a system 100 for generating a digital dental model for a patient with color data.
  • a computer system 152 may communicate (e.g., wired and/or wireless) with a denture design system 116, rapid fabrication machine 119, and denture library 154 via network(s) 110.
  • the computer system 152 can be any type of computing device, network of computing devices and/or systems, and/or cloud-based system described herein.
  • the computer system 152 can be configured to collect and/or capture data about a patient 150 in a dental office 102. Sometimes, the computer system 152 can be remote from the patient 150 and/or the dental office 102.
  • the computer system 152 can communicate via the network(s) 110 with a dental impression station 106, an image capture system 107, and a motion capture system 200. Sometimes, the computer system 152 can be integrated into or otherwise part of at least one of the dental impression station 106, the image capture system 107, and the motion capture system 200. The computer system 152 can generate instructions that cause the dental impression station 106, the image capture system 107, and/or the motion capture system 200 to capture data of the patient 150. Refer to FIGs. 1B-2 for further discussion about the dental impression station 106, the image capture system 107, and the motion capture system 200.
  • the denture design system 116 can be configured to automatically design digital dental models for patients, such as the patient 150, based on data collected by the computer system 152.
  • the denture design system 116 can also assign and map color data to teeth and gingiva in the digital design model of the patient 150 to then be used in manufacturing dentures for the patient.
  • the denture design system 116 can be part of the computer system 152.
  • the rapid fabrication machine 119 can be configured to manufacture/produce dentures for the patient 150 based on design information that is generated and provided by the denture design system 116. In some implementations, the rapid fabrication machine 119 can be part of the denture design system 116 and/or the computer system 152.
  • the denture library 154 can be any type of database, data store, cloud-based storage, and/or data repository that is configured to store information about digital dental models and denture designs.
  • the denture library 154 can store the abovementioned information in association with particular patients, such as a digital dental model for the patient 150.
  • the denture library 154 can also store denture designs, for example, that are generic and may apply to a variety of patients. Refer to FIGs. 13A-C for further discussion about the information stored in the denture library 154.
  • the computer system 152 can capture patient dental and/or denture data in block A.
  • the data can include dental impression data, images of the patient 150’s mouth, and/or motion data, all of which may be captured and generated by the respective dental impression station 106, the image capture system 107, and the motion capture system 200.
  • the captured data can include oral scan data of the patient 150’s mouth. Refer to FIG. 1A for further discussion about capturing the patient dental and/or denture data.
  • the computer system 152 can transmit the patient data to the denture design system 116.
  • the patient data can be stored in a database, such as the denture library 154, and then retrieved at a later time for further processing.
  • the denture design system 116 can also retrieve denture color and/or texture data in block C.
  • the system 116 can retrieve information indicating one or more known color values used for coloring dentures.
  • the system 116 can then correlate colors identified in the patient data with one or more of the known color values for dentures to accurately identify what colors can be used for manufacturing dentures for the patient 150.
  • the system 116 can also assign one or more known texture values to the identified zones and/or layers.
  • the denture library 154 can maintain a mapping of textures to zones or layers of teeth and gingiva. The mapping can be used to select appropriate textures to apply to the zones or layers of teeth and gingiva that are identified for the particular patient 150 from the patient data.
  • block C can be performed in one or more other orders.
  • block C can be performed after block F, which is described further below.
  • the data retrieved in block C can be used to adjust one or more colors and/or texture that is mapped to a digital dental model for the patient 150.
  • block C can be performed after block E such that the retrieved data is used to perform the mapping described in block F.
  • One or more other orders of operations are also possible.
  • the denture design system 116 can generate a digital dental model based on the transmitted data. Refer to FIGs. 1B-9 for further discussion about generating the digital dental model.
  • the denture design system 116 can identify zones of teeth and/or gingiva in the digital dental model and layers for each identified zone in block E.
  • the system 116 can apply one or more rules, algorithms, and/or machine learning models to identify various zones for each of the teeth and the gingiva appearing in the digital dental model for the patient 150.
  • the system 116 can identify 2 zones for each tooth, the zones including a top or enamel layer and a cutback layer.
  • the system 116 can also identify 2 zones for the gingiva, including a base layer and a festoon layer.
  • the digital dental model for the patient 150 can be built with 4 zones, each of which can be assigned different color data and/or texture data.
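The four-zone structure described above can be sketched as a simple data model. This is a minimal illustrative sketch: the zone names, RGB values, and texture labels are assumptions chosen for the example, not values from the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Zone:
    name: str
    color_rgb: Tuple[int, int, int]   # color assigned from the patient scan
    texture: Optional[str] = None     # e.g., "ribbed", "stippling"

@dataclass
class DigitalDentalModel:
    zones: List[Zone] = field(default_factory=list)

# Two tooth zones and two gingiva zones, each carrying its own color and
# texture while still forming part of a single printable object.
model = DigitalDentalModel(zones=[
    Zone("tooth_enamel", (242, 238, 225), "ribbed"),
    Zone("tooth_cutback", (230, 220, 200)),
    Zone("gingiva_base", (220, 130, 130), "stippling"),
    Zone("gingiva_festoon", (235, 150, 150)),
])

print([z.name for z in model.zones])
```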
  • while each zone can be defined independently of the others, all the zones can be printed by the rapid fabrication machine 119 as part of a singular object (e.g., dentures).
  • the zones may not be divided by physical space in the digital dental model but can have boundaries defined by characteristics corresponding to each zone.
  • the characteristics include color.
  • the characteristics may also include texture.
  • color assigned to each zone can be blended across zones or otherwise adjusted in some way that causes resulting dentures to appear realistic when manufactured and worn by the patient 150.
  • the denture design system 116 maps at least one color value and/or texture value to the layers in each identified zone.
  • different zones and layers per zone allow for making dentures having more color depth and a more realistic appearance once manufactured and worn by the patient 150.
  • portions of the digital dental model can be sampled to identify one or more colors per layer and/or per zone.
  • the identified colors can be mapped or otherwise assigned to known dental colors, which can be retrieved in block C from the denture library 154.
  • the identified colors can be adjusted using additional processing techniques (e.g., computer-simulated ray-tracing) to make dentures manufactured with the corresponding color data appear more realistic when worn by the patient 150.
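Mapping a sampled color to the closest known dental color, as described above, can be sketched as a nearest-neighbor lookup. The shade names and RGB values below are hypothetical placeholders, and Euclidean distance in RGB is a simplifying assumption (a perceptual space such as CIELAB would track human shade judgment more closely).

```python
import math

# Illustrative shade guide; a real system would draw these values from the
# denture library's stored known color values.
KNOWN_SHADES = {
    "A1": (245, 238, 220),
    "A2": (238, 228, 205),
    "B1": (248, 242, 228),
}

def nearest_shade(sampled_rgb):
    """Map a color sampled from the oral scan to the closest known shade
    by Euclidean distance in RGB space."""
    return min(KNOWN_SHADES,
               key=lambda s: math.dist(KNOWN_SHADES[s], sampled_rgb))

print(nearest_shade((246, 239, 221)))  # -> A1
```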
  • one or more of the colors can be identified by a care provider, such as a dentist, and provided as user input to the denture design system 116 to be mapped to one or more layers in the identified zones in the digital dental model.
  • the disclosed techniques can also be used to map texture values to the teeth and/or gingiva in the digital dental model of the patient 150, thereby making the resulting dentures appear more realistic.
  • Mapping data can indicate mappings of various textures to different zones defined for teeth and gingiva.
  • the mapping data can therefore be used by the denture design system 116 to apply stock or expected texture values to specific zones defined in the digital denture model for the patient 150. For example, some zones of teeth may always have a ribbed texture and a base of dentures may always have a stippling texture.
  • the stock or expected texture values can therefore be stored in the denture library 154, retrieved therefrom, and applied to particular zones in any digital dental model for any patient.
  • the denture design system 116 can transmit the digital dental model with the mapped color and/or texture values to the rapid fabrication machine 119 (block G). Sometimes, the system 116 can store the model with the mapped values in the denture library 154 for future retrieval and use in manufacturing dentures for the patient 150.
  • the rapid fabrication machine 119 can fabricate dentures for the patient 150 based on the digital dental model and the mapped color and/or texture values (block H). As described herein, the machine 119 can directly 3D print the mapped color and/or texture values on the dentures rather than copying and/or printing dentures with existing or library-stock color and/or texture values. Refer to FIGs. 1B and 9 for further discussion about manufacturing the dentures based on the digital dental model and mapped color and/or texture values.
  • FIG. 1B is a schematic block diagram of the system 100 of FIG. 1A for fabricating a motion-based denture based on a digital dental model.
  • the system 100 includes the dental office 102 of FIG. 1A and a dental lab 104.
  • the example dental office 102 includes the motion capture system 200 (described further with respect to at least FIG. 2), the dental impression station 106, the image capture system 107, and a dental therapy station 126.
  • the image capture system 107 may be a sub-component of the motion capture system 200 (as described elsewhere).
  • the dental office 102 includes multiple dental offices.
  • one or more of the dental impression station 106, the image capture system 107, and the motion capture system 200 can be in a different dental office than the dental therapy station 126. Further, one or more of the dental impression station 106, the motion capture system 200, and the dental therapy station 126 may not be located in a dental office.
  • the example dental impression station 106 is configured to generate a dental impression 108 of dentition of a patient (e.g., the patient 150 in FIG. 1A).
  • the dental impression 108 is a geometric representation of the dentition of the patient, which may include teeth (if any) and edentulous (gum) tissue, or gingiva as described herein.
  • the dental impression 108 is a physical impression captured using an impression material, such as sodium alginate, polyvinylsiloxane or another impression material.
  • the dental impression 108 is a digital impression.
  • the digital impression may be represented by one or more of a point cloud, a polygonal mesh, a parametric model, or voxel data.
  • the digital impression can be generated directly from the dentition of the patient, using for example an intraoral scanner.
  • Example intraoral scanners include the TRIOS Intra Oral Digital Scanner, the Lava Chairside Oral Scanner C.O.S., the Cadent iTero, the Cerec AC, the Cyrtina IntraOral Scanner, and the Lythos Digital Impression System from Ormco.
  • a digital impression is captured using other imaging technologies, such as computed tomography (CT), including cone beam computed tomography (CBCT), ultrasound, and magnetic resonance imaging (MRI).
  • the digital impression is generated from a physical impression by scanning the impression or plaster model of the dentition of the patient created from the physical impression. Examples of technologies for scanning a physical impression or model include three-dimensional laser scanners and computed tomography (CT) scanners.
  • digital impressions can be created using other technologies.
  • the motion capture system 200 is configured to capture a representation of movement of dental arches relative to each other in the patient’s mouth.
  • the motion capture system 200 generates motion data 110.
  • the dental impression 108 can also be used to generate a patient-specific dentition coupling device for capturing patient motion using the motion capture system 200.
  • Some implementations described herein may use other types of motion capture systems to generate motion data of the patient’s mouth.
  • the motion capture system 200 generates the motion data 110 from optical measurements of the dental arches that are captured while the dentition of the patient is moved.
  • the optical measurements can be extracted from image or video data recorded while the dentition of the patient is moved. Additionally, the optical measurements can be captured indirectly.
  • the optical measurements can be extracted from images or video data of one or more devices (e.g., a patient assembly such as the patient assembly 204 that is illustrated and described with respect to at least FIGs. 2-3) that are secured to a portion of the dentition of the patient.
  • the motion data 110 can be generated using other processes as well.
  • the motion data 110 may include transformation matrices that represent position and orientation of the dental arches.
  • the motion data 110 may include a series of transformation matrices that represent various motions or functional paths of movement for the patient’s dentition. Other implementations of the motion data 110 are possible as well.
  • Still images can be captured of the patient’s dentition while the dentition of the patient is positioned in a plurality of bite locations.
  • Image processing techniques can then be used by any of the disclosed computing systems to determine positions of the patient’s upper and lower arches relative to each other (either directly or based on the positions of the attached patient assembly 204).
  • the motion data 110 can be generated by interpolating between the positions of the upper and lower arches determined from at least some of the captured images.
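Interpolating between arch positions captured as 4x4 transformation matrices, as described above, can be sketched as follows. This is a simplified illustration: translation is blended component-wise and the rotation part is re-orthonormalized via SVD, whereas a production system would more likely use quaternion slerp for the rotational component. The 10 mm jaw displacement is an invented example value.

```python
import numpy as np

def lerp_pose(T0, T1, t):
    """Interpolate between two 4x4 jaw poses at parameter t in [0, 1]."""
    T = (1.0 - t) * T0 + t * T1
    # Project the blended 3x3 block back onto the nearest rotation matrix.
    U, _, Vt = np.linalg.svd(T[:3, :3])
    T[:3, :3] = U @ Vt
    T[3, :] = [0.0, 0.0, 0.0, 1.0]
    return T

open_bite = np.eye(4)
closed = np.eye(4)
closed[2, 3] = -10.0  # mandible translated 10 mm (illustrative)

mid = lerp_pose(open_bite, closed, 0.5)
print(mid[2, 3])  # -5.0, halfway between the two captured positions
```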
  • the motion data 110 may be captured with the patient’s jaw in various static positions or moving through various motions.
  • the motion data 110 may include a static measurement representing a centric occlusion (e.g., the patient’s mandible closed with teeth fully engaged) or centric relation (e.g., the patient’s mandible nearly closed, just before any shift occurs that is induced by tooth engagement or contact) bite of a patient.
  • the motion data 110 may also include static measurements or sequences of data corresponding to protrusive (e.g., the patient’s mandible being shifted forward while closed), lateral excursive (e.g., the patient’s mandible shifted/rotated left and right while closed), hinging (e.g., the patient’s mandible opening and closing without lateral movement), chewing (e.g., the patient’s mandible chewing naturally to, for example, determine the most commonly used tooth contact points), and border movements (e.g., the patient’s mandible is shifted in all directions while closed, for example, to determine the full range of motion) of the patient’s jaw.
  • the motion data is captured while the patient is using a Lucia jig or leaf gauge so that the patient’s teeth (for patients who are not completely edentulous) do not impact/contribute to the movement data.
  • This motion data 110 may be used to determine properties of the patient’s temporomandibular joint (TMJ). For example, hinging motion of the motion data 110 may be used to determine the location of the hinge axis of the patient’s TMJ.
  • a representation of the motion of the hinge axis may be displayed while the motion data 110 is being captured.
  • a computing device may cause a line segment to be displayed in relation to a representation of the patient’s dentition.
  • the line segment may be displayed at a location that is approximately where the patient’s condyle is located.
  • the line segment may move in concert with the relative motion of the patient’s mandible (lower dentition).
  • the movement of the line may appear to rotate at a location approximately equal to the hinge axis of the patient’s TMJ.
  • the caregiver may annotate the motion data to identify portions of the motion data such as the motion data corresponding to hinging open/closed.
  • the caregiver may actuate an input such as a button on a user interface, a physical button, or a foot pedal to annotate portions of the motion data.
  • the image capture system 107 is configured to capture image data 109 of the patient.
  • the image data 109 may include one or more static images or videos of the patient.
  • the static images or frames with the image data 109 may be associated with the motion data 110.
  • a specific image from the image data 109 may be associated with a specific frame of the motion data 110, indicating that the specific image was captured while the patient’s jaw was in the position indicated by the specific frame of the motion data 110.
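The image-to-motion-frame association described above can be sketched as a nearest-timestamp lookup. The timestamps and pose labels below are illustrative assumptions; the disclosure does not specify how the association is stored.

```python
import bisect

# Hypothetical capture timeline: motion frames recorded at fixed intervals.
motion_timestamps = [0.00, 0.04, 0.08, 0.12, 0.16]  # seconds
motion_poses = ["pose_0", "pose_1", "pose_2", "pose_3", "pose_4"]

def motion_frame_for_image(image_time):
    """Return the index of the motion frame captured nearest in time to a
    still image, so the jaw position for that image can be looked up."""
    i = bisect.bisect_left(motion_timestamps, image_time)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(motion_timestamps)]
    return min(candidates, key=lambda j: abs(motion_timestamps[j] - image_time))

print(motion_poses[motion_frame_for_image(0.05)])  # -> pose_1
```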
  • the image capture system 107 includes a three-dimensional camera and the image data 109 may include one or more three-dimensional images. Examples of three-dimensional cameras include stereo cameras (e.g., using two or more separate image sensors that are offset from one another).
  • the three-dimensional camera may also include a projector such as a light projector or laser projector that operates to project a pattern on the patient’s face.
  • the projector may be offset relative to the camera or cameras so that the images captured by the camera include distortions of the projected pattern caused by the patient’s face. Based on these distortions, the three-dimensional structure of portions of the patient’s face can be approximated.
  • Various implementations project various patterns such as one or more stripes or fringes (e.g., sinusoidally changing intensity values).
  • the three-dimensional image is captured in relation to the motion capture system 200 or a portion thereof so that the three-dimensional images can be related to the same coordinate system as the motion data.
  • the example dental lab 104 includes a 3D scanner 112, the denture design system 116, the rapid fabrication machine 119, and a denture fabrication station 122.
  • the dental lab 104 may also include multiple dental labs.
  • the 3D scanner 112 can be in a different dental lab than one or more of the other components shown in the dental lab 104.
  • one or more of the components shown in the dental lab 104 may not be physically located in a dental lab.
  • one or more of the 3D scanner 112, denture design system 116, rapid fabrication machine 119, and denture fabrication station 122 can be physically located in the dental office 102.
  • some implementations of the system 100 may not include all of the components shown in the dental lab 104 in FIG. 1B.
  • the example 3D scanner 112 is a device that can be configured to create a three-dimensional digital representation of the dental impression 108.
  • the 3D scanner 112 generates a point cloud, a polygonal mesh, a parametric model, or voxel data representing the dental impression 108.
  • the 3D scanner 112 generates a digital dental model 114.
  • the 3D scanner 112 includes a laser scanner, a touch probe, and/or an industrial CT scanner. Yet other implementations of the 3D scanner 112 are possible as well.
  • the system 100 may not include the 3D scanner 112.
  • the dental impression 108 may be the digital dental model 114 or may be used directly to generate the digital dental model 114.
  • the denture design system 116 is a system that is configured to generate denture data 118.
  • the denture data 118 is three-dimensional digital data that represents a denture component 120 and is in a format suitable for fabrication using the rapid fabrication machine 119.
  • the denture design system 116 may use the digital dental model 114, the image data 109, and the motion data 110 to generate the denture data 118.
  • the denture design system 116 may generate a denture base having a geometric form that is shaped to fit a portion of the digital dental model 114 (e.g., a portion of the model representing an edentulous region of the patient’s dentition).
  • the denture design system 116 may also determine various parameters that are used to generate the denture data 118 based on the image data 109. For example, implementations of the denture design system 116 may use various image processing techniques to estimate a vertical dimension parameter from the image data 109. Additionally, the denture design system 116 may use the motion data 110 to design the denture data 118. For example, the denture design system may use the motion data to ensure that the denture design avoids interferences with the opposing dentition (or dentures) during the bite motion represented by the motion data 110.
  • the denture design system 116 includes a computing device having one or more user input devices.
  • the denture design system 116 may include computer-aided-design (CAD) software that generates a graphical display of the denture data 118 and allows an operator to interact with and manipulate the denture data 118.
  • the denture design system 116 may include a user interface that allows a user to specify or adjust parameters of the denture design such as vertical dimension, overbite, overjet, or tip, torque, and rotation parameters for one or more denture teeth.
  • the denture design system 116 may include virtual tools that mimic the tools and techniques used by a laboratory technician to physically design a denture.
  • the denture design system 116 includes a user interface tool to move a digital representation of the patient’s dentition (e.g., the digital dental model 114) according to the motion data 110 (which may be similar to a physical articulator).
  • the denture design system 116 can include a server that partially or fully automates generation of designs of the denture data 118, which may use the motion data 110.
  • the rapid fabrication machine 119 can include one or more three-dimensional printers. Another example of the rapid fabrication machine 119 is stereolithography equipment. Yet another example of the rapid fabrication machine 119 is a milling device, such as a computer numerically controlled (CNC) milling device. In some implementations, the rapid fabrication machine 119 is configured to receive files in STL format. The received files can include denture design data, including but not limited to color and/or texture data assigned to different layers and/or zones of teeth and gingiva in the denture design data. Other implementations of the rapid fabrication machine 119 are possible as well.
  • the rapid fabrication machine 119 is configured to use the denture data 118 to fabricate a denture component 120.
  • the denture component 120 can be a physical component that is configured to be used as part or all of the denture 124.
  • the denture component 120 can be milled from zirconium, acrylic, or another material that is used directly as a dental appliance.
  • the denture component 120 can be a mold formed from wax or another material that is to be used indirectly (e.g., through a lost wax casting or ceramic pressing process) to fabricate the denture 124.
  • the denture component 120 can be formed using laser sintering technology.
  • the rapid fabrication machine 119 may include a 3D printer that fabricates the denture 124 directly from a material that is suitable for placement in the patient’s mouth.
  • the 3D printer may fabricate the denture teeth with the denture base.
  • the rapid fabrication machine 119 may print parts using multiple materials.
  • a first material may be used to fabricate the denture base and a second material may be used to fabricate the denture teeth.
  • the first material may have a different color, or multiple different colors, than the second material.
  • the first material may have a pink color that is similar to the color of gingiva and the second material may have a white or cream color that is similar to the color of teeth.
  • the denture teeth and the denture base can be fabricated together as a monolithic whole formed from multiple materials having multiple different colors. Beneficially, it may not be necessary to determine a common draw angle for the denture teeth, remove undercuts from the denture base, or couple the fabricated denture teeth to the denture base.
  • the rapid fabrication machine 119 can receive a single file that includes 3D model components (e.g., 3D meshes) that are to be formed using a variety of materials and/or colors.
  • the file may include color information.
  • the model file may be divided into volumetric zones or surface zones where the zones are each associated with different color information.
  • the rapid fabrication machine 119 may then fabricate (e.g., 3D print) the parts together using the colors identified in the color information in the file.
  • the rapid fabrication machine 119 may include reservoirs (or spools) of material or filament in multiple colors to allow for printing unitary denture components having multiple different colors. Some implementations of the machine 119 may include three, four, five, or twelve material reservoirs of different colors.
  • An example may include red, yellow, and blue materials in different reservoirs that may be combined to form many different colors, shades of colors, gradients of colors, and/or layering of colors in different zones of teeth and/or gingiva being printed as the denture 124.
  • Some implementations may also include a dark (e.g., black) color material to alter brightness of the combined material.
  • the color may be specified in terms of CMYK (cyan, magenta, yellow, key (black)).
  • Some implementations may also include a white reservoir that can be used to alter the brightness of the material blend.
  • the rapid fabrication machine 119 may include other reservoirs of material that are, for example, colors commonly used in dentures to reduce the need to blend colors.
  • the rapid fabrication machine 119 can print with a material having a color determined from multiple polygons or vertices.
  • the rapid fabrication machine 119 may determine color on a surface of a polygon using a process similar to Gouraud shading by, for example, variably blending the colors of vertices of the polygon based on position with respect to (distance to) the vertices.
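A minimal sketch of that Gouraud-style blending, weighting each vertex color by the barycentric coordinates of the query point (the 2D triangle parameterization and names are assumptions for illustration):

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates of 2D point p with respect to triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w0 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    w1 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return w0, w1, 1.0 - w0 - w1

def blend_color(p, verts, colors):
    """Blend the three vertex colors based on p's position within the triangle."""
    w = barycentric(p, *verts)
    return tuple(sum(wi * col[k] for wi, col in zip(w, colors)) for k in range(3))
```

At a vertex the blend returns that vertex's color exactly; at the centroid it returns the average of the three vertex colors.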
  • the interior of the fabricated parts may be fabricated with a material based on a blend of the surface colors.
  • the interior of the fabricated parts may be fabricated with a material based on other factors, such as cost reduction (e.g., using materials with lower costs), weight reduction (e.g., using materials with lower weights), or other material properties such as strength (e.g., using materials with a desired strength or other material property).
  • the rapid fabrication machine 119 generates layers by horizontally slicing the model, as described further in reference to FIGs. 11-12.
  • the colors throughout a layer may be determined by blending colors from the surface (edges of the slice).
  • blending colors, vertices or polygons may not need to be added simply to alter the color of the fabricated dentures 124.
  • Adding vertices and polygons increases the amount of data that must be transmitted to the rapid fabrication machine 119 and the amount of memory required in the rapid fabrication machine 119 to store the model.
  • algorithms used to perform the horizontal slicing and other steps of printing may perform faster and require fewer processor cycles when the model has fewer polygonal surfaces and vertices.
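The horizontal slicing step can be sketched as a plane-edge intersection applied per triangle; this illustrative helper (not from the disclosure) returns the points where a triangle's edges cross a slice plane at height z:

```python
def slice_triangle(tri, z):
    """Intersect one 3D triangle with the horizontal plane at height z.
    Returns the 0 or 2 points where the triangle's edges cross the plane.
    (Edges lying exactly in the plane are ignored in this sketch.)"""
    points = []
    for i in range(3):
        (x0, y0, z0), (x1, y1, z1) = tri[i], tri[(i + 1) % 3]
        if (z0 - z) * (z1 - z) < 0:  # edge endpoints straddle the plane
            t = (z - z0) / (z1 - z0)
            points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0), z))
    return points
```

Because the cost of this loop scales with the number of triangles, a model that avoids extra polygons added only for color slices proportionally faster.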
  • Some implementations of the rapid fabrication machine 119 may be configured to receive model files in a voxel format in which color data is associated with each voxel in the model or with each corner of each voxel. The color data may be blended across the voxel based on distance from corners by the rapid fabrication machine 119 during printing.
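Blending corner color data across a voxel based on distance from the corners can be sketched as trilinear interpolation (the nested-list corner layout here is an assumption for illustration):

```python
def trilerp(corner_colors, u, v, w):
    """Blend the 8 corner colors of a voxel at fractional position (u, v, w).
    corner_colors[i][j][k] holds the RGB tuple at corner (i, j, k), i/j/k in {0, 1}."""
    def lerp(a, b, t):
        return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))
    c00 = lerp(corner_colors[0][0][0], corner_colors[1][0][0], u)
    c10 = lerp(corner_colors[0][1][0], corner_colors[1][1][0], u)
    c01 = lerp(corner_colors[0][0][1], corner_colors[1][0][1], u)
    c11 = lerp(corner_colors[0][1][1], corner_colors[1][1][1], u)
    c0 = lerp(c00, c10, v)   # blend along the second axis
    c1 = lerp(c01, c11, v)
    return lerp(c0, c1, w)   # blend along the third axis
```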
  • the denture fabrication station 122 operates to fabricate the denture 124 for the patient.
  • the denture fabrication station 122 uses the denture component 120 produced by the rapid fabrication machine 119 in some implementations.
  • the denture 124 is a complete or partial denture.
  • the denture 124 may include one or both of a maxillary denture and a mandibular denture.
  • the denture 124 is formed from an acrylic, ceramic, or metallic material.
  • the dental impression 108 is used in the fabrication of the denture 124.
  • the dental impression 108 is used to form a plaster model of the dentition of the patient.
  • the denture fabrication station 122 can include equipment and processes to perform some or all of the techniques used in traditional dental laboratories to generate dental appliances. Other implementations of the denture fabrication station 122 are possible as well.
  • the denture 124 is seated in the mouth of the patient in the dental therapy station 126 by a dentist.
  • the dentist can confirm that an occlusal surface of the denture 124 is properly defined by instructing the patient to engage in various bites.
  • the dental office 102 may be connected to the dental lab 104 via a network.
  • the network may be an electronic communication network that facilitates communication between the dental office 102 and the dental lab 104.
  • An electronic communication network is a set of computing devices and links between the computing devices. The computing devices in the network use the links to enable communication among the computing devices in the network.
  • the network can include routers, switches, mobile access points, bridges, hubs, intrusion detection devices, storage devices, standalone server devices, blade server devices, sensors, desktop computers, firewall devices, laptop computers, handheld computers, mobile telephones, and other types of computing devices.
  • the network includes various types of links.
  • the network can include one or both of wired and wireless links, including Bluetooth, ultra-wideband (UWB), 802.11, ZigBee, and other types of wireless links.
  • the network is implemented at various scales.
  • the network can be implemented as one or more local area networks (LANs), metropolitan area networks, subnets, wide area networks (such as the Internet), or can be implemented at another scale.
  • the system 100 also plans treatments for implant-supported dentures. For example, the system 100 may determine appropriate positions for implants based on a denture design. Some implementations may generate digital design data for an implant surgical guide and fabricate the implant surgical guide using rapid fabrication technology.
  • the location of implants can be determined based, at least in part, on the design of the final dentures.
  • some implementations of the system 100 may integrate with one or more of an inventory management system and a parts management system. Based on the design of a denture or implant-supported denture treatment plan, a part pick list may be generated that lists the different components (e.g., denture teeth, implant abutments, support components). An inventory system may also be updated to adjust the quantities of parts and one or more orders may be generated and directed to one or more suppliers.
  • FIG. 2 is a schematic block diagram illustrating an example motion capture system 200 for capturing jaw movement.
  • the motion capture system 200 includes an imaging system 202, a patient assembly 204, and a motion determining device 206. Also shown in FIGS. 1A-B are a patient and a network.
  • the imaging system 202 includes an optical sensing assembly 210 and a screen assembly 212.
  • the optical sensing assembly 210 may capture a plurality of images as the patient’s jaw moves.
  • the optical sensing assembly 210 may include one or more cameras such as video cameras.
  • the optical sensing assembly 210 captures a plurality of images that do not necessarily include the patient assembly, but can be used to determine the position of the patient assembly 204.
  • the patient assembly 204 may emit lights that project onto surfaces of the screen assembly 212 and the optical sensing assembly 210 may capture images of those surfaces of the screen assembly 212.
  • the optical sensing assembly 210 does not capture images but otherwise determines the position of the projected light or lights on the surfaces of the screen assembly 212.
  • the screen assembly 212 may include one or more screens.
  • a screen may include any type of surface upon which light may be projected. Some implementations include flat screens that have a planar surface. Some implementations may include rounded screens, having cylindrical (or partially cylindrical) surfaces.
  • the screens may be formed from a translucent material. For example, the locations of the lights projected on the screens of the screen assembly 212 may be visible from a side of the screens opposite the patient assembly 204 (e.g., the screen assembly 212 may be positioned between the optical sensing assembly 210 and the patient assembly 204).
  • the imaging system 202 may capture or generate various information about the images.
  • the imaging system 202 may generate timing information about the images.
  • the timing information can include a timestamp for each of the images.
  • the timing information can include a frame rate (e.g., 10 frames/second, 24 frames/second, or 60 frames/second).
  • Other types of information that can be generated for the images includes an identifier of a camera, a position of a camera, or settings used when capturing the image.
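When only a frame rate is recorded, a per-image timestamp can be derived from the frame index; a trivial sketch under that assumption (function name illustrative):

```python
def frame_timestamps(num_frames, fps):
    """Derive a capture timestamp, in seconds, for each frame index
    given a constant frame rate."""
    return [i / fps for i in range(num_frames)]
```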
  • the patient assembly 204 is an assembly that is configured to be secured to the patient.
  • the patient assembly 204 or parts thereof may be worn by the patient and may move freely with the patient (i.e., at least a part of the patient assembly 204 may, when mounted to the patient, move in concert with patient head movement).
  • the imaging system 202 is not mounted to the patient and does not move in concert with patient head movement.
  • the patient assembly 204 may include light emitters (or projectors) that emit a pattern of light that projects on one or more surfaces (e.g., screens of the screen assembly 212), which can be imaged to determine the position of the patient assembly 204.
  • the light emitters may emit beams of substantially collimated light (e.g., laser beams) that project onto the surfaces as points. Based on the locations of these points on the surfaces, a coordinate system can be determined for the patient assembly 204, which can then be used to determine a position and orientation of the patient assembly 204 and the patient’s dentition.
  • the patient assembly 204 includes separate components that are configured to be worn on the upper dentition and the lower dentition and to move independently of each other so that the motion of the lower dentition relative to the upper dentition can be determined. Examples of the patient assembly 204 are illustrated and described throughout, including in FIG. 3.
  • the motion determining device 206 determines the motion of the patient assembly 204 based on images captured by the imaging system 202.
  • the motion determining device 206 includes a computing device that uses image processing techniques to determine three-dimensional coordinates of the patient assembly 204 (or portions of the patient assembly) as the patient’s jaw is in different positions. For example, images captured by the optical sensing assembly 210 of screens of the screen assembly 212 may be processed to determine the positions on the screens at which light from the patient assembly is projected. These positions on the screens of the screen assembly 212 may be converted to three-dimensional coordinates with respect to the screen assembly 212. From those three-dimensional coordinates, one or more positions and orientations of the patient assembly 204 (or components of the patient assembly 204) may be determined.
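The conversion from a 2D position on a screen to three-dimensional coordinates can be sketched by assuming the screen's pose is known as a 3D origin plus two in-plane unit axes (all names are illustrative, not from the disclosure):

```python
def screen_to_world(uv, origin, u_axis, v_axis):
    """Map a 2D screen coordinate (in screen units) to a 3D point,
    given the screen's 3D origin and its two in-plane unit axes."""
    u, v = uv
    return tuple(o + u * ua + v * va
                 for o, ua, va in zip(origin, u_axis, v_axis))
```

For example, a point 5 units across and 2 units up on a vertical screen placed 100 units to the patient's side lands at the corresponding 3D location on that plane.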
  • some implementations determine the relative positions and movements of the patient’s upper and lower dentition. Further, some implementations infer the location of a kinematically derived axis that is usable in modeling the motion of the patient’s mandible (including the lower dentition) about the temporomandibular joint.
  • the kinematically derived axis may be a hinge axis or a screw axis.
  • the hinge axis may be derived from a portion of the motion data (e.g., the motion data corresponding to a hinging open/closed of the patient’s jaw).
  • FIG. 3 illustrates a block diagram of an example patient assembly 204.
  • the patient assembly includes a clutch 220 and a reference structure 222.
  • the clutch 220 and the reference structure 222 are not physically connected and can move independently of one another.
  • the clutch 220 is a device that is configured to couple to a patient’s dentition.
  • the clutch 220 may grip any remaining teeth of the dentition of the patient.
  • the clutch 220 may couple to an edentulous region of a patient’s dentition or to dental implants that have been placed in edentulous regions of the patient’s dentition.
  • the clutch 220 comprises a dentition coupling device 224 and a position indicator system 228.
  • the clutch 220 is configured to couple to the lower dentition of the patient so as to move with the patient’s mandible.
  • the clutch 220 may be configured to couple to the patient’s upper dentition so as to move with the patient’s maxilla.
  • the dentition coupling device 224 is configured to removably couple to the patient’s dentition.
  • the dentition coupling device 224 rigidly couples to the patient’s dentition such that while coupled, the movement of the dentition coupling device 224 relative to the patient’s dentition is minimized.
  • Various implementations include various coupling mechanisms.
  • some implementations couple to the patient’s dentition using brackets that are adhered to the patient’s teeth with a dental or orthodontic adhesive.
  • some implementations couple to the patient’s dentition using an impression material.
  • some implementations of the dentition coupling device 224 comprise an impression tray and an impression material such as polyvinyl siloxane. To couple the dentition coupling device 224 to the patient’s dentition, the impression tray is filled with impression material and then placed over the patient’s dentition. As the impression material hardens, the dentition coupling device 224 couples to the patient’s dentition.
  • some implementations comprise a dentition coupling device 224 that is custom designed for a patient based on a three-dimensional model of the patient’s dentition.
  • the dentition coupling device 224 may be formed using a rapid fabrication machine.
  • an example of a rapid fabrication machine is a three-dimensional printer, such as the PROJET® line of printers from 3D Systems, Inc. of Rock Hill, South Carolina.
  • another example of a rapid fabrication machine is a milling device, such as a computer numerically controlled (CNC) milling device.
  • the dentition coupling device 224 may comprise various mechanical retention devices such as clasps that are configured to fit in an undercut region of the patient’s dentition or wrap around any remaining teeth.
  • Implementations of the dentition coupling device 224 may couple to the patient’s dentition using a combination of one or more mechanical retention structures, adhesives, and impression materials.
  • the dentition coupling device 224 may include apertures through which a fastening device (also referred to as a fastener) such as a temporary anchorage device may be threaded to secure the dentition coupling device 224 to the patient’s dentition, gum tissue, or underlying bone tissue.
  • a temporary anchorage device may screw into the patient’s bone tissue to secure the dentition coupling device 224.
  • the dentition coupling device 224 includes one or more fiducial markers, such as hemispherical inserts, that can be used to establish a static relationship between the position of the clutch 220 and the patient’s dentition.
  • the dentition coupling device 224 may include three fiducial markers disposed along its surface. The location of these fiducial markers can then be determined relative to the patient’s dentition such as by capturing a physical impression of the patient with the clutch attached or using imaging techniques such as capturing a digital impression (e.g., with an intraoral scanner) or other types of images of the dentition and fiducial markers.
  • Some implementations of the dentition coupling device 224 do not include fiducial markers.
  • One or more images or a digital impression of the patient’s dentition captured while the dentition coupling device 224 is mounted may be aligned to one or more images or a digital impression of the patient’s dentition captured while the dentition coupling device 224 is not mounted.
  • the position indicator system 228 is a system that is configured to be used to determine the position and orientation of the clutch 220.
  • the position indicator system 228 includes multiple fiducial markers.
  • the fiducial markers are spheres. Spheres work well as fiducial markers because the location of the center of the sphere can be determined in an image regardless of the angle from which the image containing the sphere was captured.
  • the multiple fiducial markers may be disposed (e.g., non-collinearly) so that by determining the locations of each (or at least three) of the fiducial markers, the position and orientation of the clutch 220 can be determined. For example, these fiducial markers may be used to determine the position of the position indicator system 228 relative to the dentition coupling device 224, through which the position of the position indicator system 228 relative to the patient’s dentition can be determined.
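One way to turn three non-collinear marker centers into a position and orientation is to build an orthonormal frame from them; a sketch under that assumption (the axis conventions are illustrative choices):

```python
def frame_from_markers(p0, p1, p2):
    """Build a position plus orthonormal (x, y, z) axes from three
    non-collinear 3D marker centers. p0 is taken as the frame origin."""
    def sub(a, b):
        return tuple(ai - bi for ai, bi in zip(a, b))
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    def norm(a):
        n = sum(ai * ai for ai in a) ** 0.5
        return tuple(ai / n for ai in a)
    x = norm(sub(p1, p0))            # first axis: toward the second marker
    z = norm(cross(x, sub(p2, p0)))  # normal to the marker plane
    y = cross(z, x)                  # completes a right-handed frame
    return p0, (x, y, z)
```

Tracking how this frame moves between images gives the rigid-body motion of the clutch, and through the fixed coupling, of the dentition.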
  • some implementations of the position indicator system 228 do not include separate fiducial markers.
  • structural aspects of the position indicator system 228 may be used to determine the position and orientation of the position indicator system 228.
  • one or more flat surfaces, edges, or corners of the position indicator system 228 may be imaged to determine the position and orientation of the position indicator system 228.
  • an intraoral scanner is used to capture a three-dimensional model (or image) that includes a corner of the position indicator system 228 and at least part of the patient’s dentition while the dentition coupling device 224 is mounted. This three-dimensional model can then be used to determine a relationship between the position indicator system 228 and the patient’s dentition.
  • the determined relationship may be a static relationship that defines the position and orientation of the position indicator system 228 relative to a three-dimensional model of the patient’s dentition (e.g., based on the corner of the position indicator system 228 that was captured by the intraoral scanner).
  • the position indicator system 228 includes a light source assembly that emits beams of light.
  • the light source assembly may emit substantially collimated light beams (e.g., laser beams).
  • the light source assembly is rigidly coupled to the dentition coupling device 224 so that as the dentition coupling device 224 moves with the patient’s dentition, the beams of light move. The position of the dentition coupling device 224 is then determined by capturing images of where the light beams intersect with various surfaces (e.g., translucent screens disposed around the patient). Implementations that include a light source assembly are illustrated and described throughout.
  • the reference structure 222 is a structure that is configured to be worn by the patient so as to provide a point of reference to measure the motion of the clutch 220.
  • the reference structure 222 is configured to mount elsewhere on the patient’s head so that the motion of the clutch 220 (and the patient’s mandible) can be measured relative to the rest of the patient’s head.
  • the reference structure 222 may be worn on the upper dentition.
  • when the reference structure 222 is mounted securely to the patient’s upper dentition, the position of the reference structure 222 will not be impacted by the movement of the mandible (e.g., muscle and skin movement associated with the mandibular motion will not affect the position of the reference structure 222).
  • the reference structure 222 may be configured to be worn elsewhere on the patient’s face or head.
  • the reference structure 222 is similar to the clutch 220 but configured to be worn on the dental arch opposite the clutch (e.g., the upper dentition if the clutch 220 is for the lower dentition).
  • the reference structure 222 shown in FIG. 3 includes a dentition coupling device 230 that may be similar to the dentition coupling device 224, and a position indicator system 234 that may be similar to the position indicator system 228.
  • the patient assembly 204 includes a gothic arch tracer.
  • the clutch 220 may include one or more tracing components that may move across a surface of the reference structure 222. The tracing components may have adjustable heights.
  • FIG. 4 illustrates an implementation of a clutch 400.
  • the clutch 400 is an example of the clutch 220.
  • the clutch 400 includes a dentition coupling device 402, a light source assembly 404, and an extension member 408.
  • the dentition coupling device 402 is an example of the dentition coupling device 224.
  • the light source assembly 404 is an example of the position indicator system 228.
  • the light source assembly 404, which may also be referred to as a projector, is a device that emits light beams comprising light that is substantially collimated. Collimated light travels in one direction.
  • a laser beam is an example of collimated light.
  • the light source assembly 404 includes one or more lasers.
  • the one or more lasers may be semiconductor lasers such as laser diodes or solid-state lasers such as diode-pumped solid-state lasers.
  • the light source assembly 404 comprises a first, second, and third light emitter.
  • the first and second light emitters may emit substantially collimated light in parallel but opposite directions (i.e., the first and second light emitters may emit light in antiparallel directions) such as to the left and right of the patient when the clutch 400 is coupled to the patient’s dentition.
  • the first and second light emitters are collinear or are substantially collinear (e.g., offset by a small amount such as less than 5 micrometers, less than 10 micrometers, less than 25 micrometers, less than 50 micrometers, or less than 100 micrometers).
  • the third light emitter may emit substantially collimated light in a direction of a line that intersects with or substantially intersects with lines corresponding to the direction of the first and second light emitters. Lines that intersect share a common point. Lines that substantially intersect do not necessarily share a common point, but would intersect if offset by a small amount such as less than 5 micrometers, less than 10 micrometers, less than 25 micrometers, less than 50 micrometers, or less than 100 micrometers. In some implementations, the third light emitter emits light in a direction that is perpendicular to the first and second light emitters, such as toward the direction the patient is facing.
  • the third light emitter emits light in a direction that is offset from the direction of the first light emitter so as to be directed toward the same side of the patient as the direction of the first light emitter.
  • the third light emitter may be offset from the first light emitter by an offset angle that is an acute angle.
  • the third light emitter may be offset from the first light emitter by an offset angle that is less than 90 degrees such that the light emitted by both the first light emitter and the second light emitter intersect with the same screen (e.g., a planar screen having a rectangular shape and being disposed on a side of the patient).
  • the third light emitter may be offset from the first light emitter by an offset angle of between approximately 1 degree to 45 degrees.
  • the offset angle is between 3 degrees and 30 degrees.
  • the offset angle is between 5 degrees and 15 degrees.
  • the offset angle may be less than 10 degrees.
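The offset angle between two emitters is simply the angle between their direction vectors; a sketch of that computation (names illustrative):

```python
import math

def offset_angle_deg(d1, d2):
    """Angle in degrees between two emitter direction vectors."""
    dot = sum(a * b for a, b in zip(d1, d2))
    n1 = math.sqrt(sum(a * a for a in d1))
    n2 = math.sqrt(sum(b * b for b in d2))
    return math.degrees(math.acos(dot / (n1 * n2)))
```

Antiparallel first and second emitters give 180 degrees, while a third emitter angled toward the same side screen gives an acute value in the ranges discussed above.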
  • one or more compensation factors are determined to compensate for an offset from the first and second light emitters being collinear, or an offset from the third light emitter emitting light in a direction of a line that intersects with the directions of the first and second light sources.
  • a compensation factor may also be determined for the offset angle of the third light emitter with respect to the first and second light emitters.
  • an offset angle compensation factor may specify the angle between the direction of the third light emitter and a line defined by the first light emitter.
  • the offset angle compensation factor may be 90 degrees or approximately 90 degrees.
  • the offset angle compensation factor may, for example, be between approximately 5 and 45 degrees.
  • the compensation factors may be determined specifically for each position indicator system manufactured to account for minor variation in manufacturing and assembly.
  • the compensation factors may be stored in a datastore (such as on the motion determining device 206 or on a computer readable medium accessible by the motion determining device 206).
  • Each position indicator system may be associated with a unique identifier that can be used to retrieve the associated compensation factor.
  • the position indicator system 234 may include a label with the unique identifier or a barcode, QR code, etc. that specifies the unique identifier.
  • Some implementations of the light source assembly 404 include a single light source and use one or more beam splitters such as prisms or reflectors such as mirrors to cause that light source to function as multiple light emitters by splitting the light emitted by that light source into multiple beams.
  • the emitted light emanates from a common point.
  • some implementations of the light source assembly 404 may comprise apertures or tubes through which light from a common source is directed.
  • Some implementations may include separate light sources for each of the light emitters.
  • the light source assembly 404 includes light emitters 406a, 406b, and 406c (referred to collectively as light emitters 406) and a housing 410.
  • the light emitter 406a is emitting a light beam L1.
  • the light emitter 406b is emitting a light beam L2.
  • the light emitter 406c is emitting a light beam L3.
  • the light beams L1 and L2 are parallel to each other, but directed in opposite directions.
  • the light beam L3 is perpendicular to the light beams L1 and L2.
  • the housing 410 is configured to position the light emitters 406 so that the light beams L1, L2, and L3 are approximately coplanar with the occlusal plane of the patient’s dentition.
  • although the light beam L3 is perpendicular to the light beams L1 and L2 in the example of FIG. 4, alternatives are possible.
  • the housing 410 may be approximately cube shaped and include apertures through which the light emitters 406 extend. In other implementations, the light emitters do not extend through apertures in the housing 410 and instead light emitted by the light emitters 406 passes through apertures in the housing 410.
  • the dentition coupling device 402 is rigidly coupled to the light source assembly 404 by an extension member 408.
  • the extension member 408 extends from the dentition coupling device 402 and is configured to extend out of the patient’s mouth when the dentition coupling device 402 is worn on the patient’s dentition.
  • the extension member 408 is configured so as to be permanently attached to the light source assembly 404 such as by being formed integrally with the housing 410 or joined by welding or a permanent adhesive.
  • the extension member 408 is configured to removably attach to the light source assembly 404. Because the light source assembly 404 is rigidly coupled to the dentition coupling device 402, the position and orientation of the dentition coupling device 402 can be determined from the position and orientation of the light source assembly 404.
  • the housing 410 and the dentition coupling device 402 are integral (e.g., are formed from a single material or are coupled together in a manner that is not configured to be separated by a user).
  • the housing 410 includes a coupling structure configured to removably couple to the extension member 408 of the dentition coupling device 402.
  • the dentition coupling device 402 can be a disposable component that may be custom fabricated for each patient, while the light source assembly 404 may be reused with multiple dentition coupling devices.
  • the housing 410 includes a connector that is configured to mate with a connector on the dentition coupling device 402.
  • the housing 410 may couple to the dentition coupling device 402 with a magnetic clasp.
  • Some implementations include a registration structure that is configured to cause the housing 410 to join with the dentition coupling device 402 in a repeatable arrangement and orientation.
  • the registration structure comprises a plurality of pins and corresponding receivers.
  • the registration structure includes a plurality of pins disposed on the housing 410 and corresponding receivers (e.g., holes) in the dentition coupling device 402 (or vice versa).
  • the registration structure comprises a plurality of spherical attachments and a plurality of grooves.
  • the registration structure includes three or more spherical attachments disposed on the housing 410 and two or more v-shaped grooves disposed on the dentition coupling device 402 that are disposed such that the spherical attachments will only fit into the grooves when the housing 410 is in a specific orientation and position relative to the dentition coupling device 402.
  • the registration structure includes a spring-mounted pin or screw that serves as a detent to impede movement of the housing 410 with respect to the dentition coupling device 402.
  • FIGS. 5A-B are cross-sectional side views that illustrate the attachment of an implementation of a dentition coupling device 520 to a dental implant 522.
  • the dentition coupling device 520 is an example of the dentition coupling device 224 or the dentition coupling device 230.
  • the dentition coupling device 520 may include one or more fixed arms and one or more pivotable arms that can be positioned to align with the patient’s dentition.
  • FIG. 5A is a cross-sectional side view that illustrates an implant abutment 526 attached to a dental implant 522 that is disposed in the patient’s gingival tissue G.
  • the implant abutment 526 is held in place with an implant screw 524.
  • the implant screw 524 has threads that mate with corresponding threads in a receiver of the dental implant 522.
  • a patient receiving dentures may have several dental implants placed to support and secure the denture.
  • FIG. 5B is a cross-sectional side view of the dental implant 522 and gingival tissue G with the implant abutment 526 removed and the dentition coupling device 520 attached to the dental implant 522.
  • the implant screw 524 passes through a slot 592 of an arm 590 of the dentition coupling device 520, through an offset 528, and into the dental implant 522.
  • the offset 528 may be a cylindrical structure that includes an aperture through which a portion of the implant screw 524 may pass.
  • an aperture in the offset 528 may allow passage of the threaded portion of the implant screw 524 but not the head of the implant screw 524.
  • the offset 528 may be sized in the occlusal dimension (O) so as to offset the arm 590 from the gingival tissue G.
  • some implementations may include a washer to couple the implant screw 524 to the arm 590 (e.g., when an aperture in the arm 590 is larger than the head of the screw).
  • the washer may be formed from a flexible material such as rubber.
  • the arm 590 may be secured to the threads of the receiver of the dental implant 522 with a scanning abutment.
  • a scanning abutment may include a threaded region that is sized to fit into and mate with the threads of the receiver of the dental implant 522.
  • the scanning abutment may also include a fiducial structure that can be used to determine a location and orientation of the implant 522 when the scanning abutment is attached.
  • the scanning abutment may be imaged with a component of the image capture system (e.g., an intraoral scanner or a 2D or 3D camera) to determine the locations of the associated dental implant.
  • FIG. 6 illustrates an implementation of a motion capture system 600 for capturing jaw movement in which only two screens are used.
  • the motion capture system 600 is an example of the motion capture system 200.
  • the motion capture system 600 includes an imaging system 602 and a patient assembly 604.
  • the imaging system 602 includes a housing 610.
  • the imaging system also includes a screen 638a and a screen 638b (collectively referred to as screens 638), which are positioned so as to be on opposite sides of the patient’s face (e.g., screen 638b to the left of the patient’s face and screen 638a to the right of the patient’s face).
  • a screen framework is disposed within the housing 610 to position the screens 638 with respect to each other and the housing 610.
  • this implementation does not include a screen disposed in front of the patient’s face.
  • the motion capture system 600 may allow better access to the patient’s face by a caregiver. Also shown is patient assembly 604 of the motion capture system 600.
  • the patient assembly 604 includes a clutch 620 and a reference structure 622, each of which include a light source assembly having three light emitters.
  • the clutch 620 is an example of the clutch 220 and the reference structure 622 is an example of the reference structure 222.
  • the clutch 620 is attached to the patient’s mandible (i.e., lower dentition) and is emitting light beams L1, L2, and L3.
  • Light beams L1 and L3 are directed toward the screen 638a, intersecting at intersection points I1 and I3, respectively.
  • Light beam L2 is directed toward the screen 638b.
  • the light beams L1 and L3 are offset from each other by approximately 15 degrees.
  • the light beams L1 and L2 are collinear and directed in opposite directions (i.e., L2 is offset from L1 by 180 degrees).
  • the reference structure 622 is attached to the patient’s maxilla (i.e., upper dentition) and is emitting light beams L4, L5, and L6.
  • Light beams L4 and L6 are directed toward the screen 638b.
  • Light beam L5 is directed toward the screen 638a, intersecting at intersection point I5.
  • the light beams L4 and L6 are offset from each other by approximately 15 degrees.
  • the light beams L4 and L5 are collinear and directed in opposite directions (i.e., L4 is offset from L5 by 180 degrees).
  • An optical sensing assembly of the motion capture system 600 may capture images of the screens 638 so that the intersection points can be determined.
  • the location of a first axis associated with the clutch 620 may be identified based on the intersection points from the light beams L1 and L2.
  • An intersection coordinate between the light beams L1 and L3 may then be determined based on the distance between the intersection points I1 and I3 on the screen 638a.
  • the distance from the intersection point I1 along the first axis can be determined based on the distance between the points I1 and I3 and the angle between the light beams L1 and L3.
  • the angle between the light beams L1 and L3 is determined for the clutch 620 and may be stored in a data store, for example, on a non-transitory computer-readable storage medium.
  • an intersection coordinate can be found, which will have a known relationship to the clutch 620 and therefore the patient’s dentition.
  • a coordinate system for the clutch 620 can be determined based on the intersection points too (e.g., a second axis is defined by the cross product of the first axis and a vector between the intersection points I1 and I3, and a third axis is defined by the cross product of the first axis and the second axis).
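The axis construction and intersection-distance computation described above can be sketched as follows. This is a hypothetical Python illustration, not the patented implementation; the first-axis direction and the beam angle (e.g., the approximately 15-degree offset between L1 and L3) are assumed known inputs.

```python
import numpy as np

def clutch_coordinate_system(first_axis, i1, i3):
    """Build a right-handed coordinate frame for the clutch.

    first_axis: direction of the collinear beams L1/L2 (assumed known).
    i1, i3: 3-D intersection points of beams L1 and L3 on the screen.
    """
    first_axis = np.asarray(first_axis, dtype=float)
    first_axis /= np.linalg.norm(first_axis)
    v = np.asarray(i3, dtype=float) - np.asarray(i1, dtype=float)
    # Second axis: cross product of the first axis and the I1->I3 vector.
    second = np.cross(first_axis, v)
    second /= np.linalg.norm(second)
    # Third axis: cross product of the first and second axes.
    third = np.cross(first_axis, second)
    return first_axis, second, third

def intersection_distance(i1, i3, beam_angle_deg):
    """Distance from I1 along the first axis to the L1/L3 intersection
    coordinate, from the screen separation of the two intersection points
    and the known angle between the beams."""
    d = np.linalg.norm(np.asarray(i3, float) - np.asarray(i1, float))
    return d / np.tan(np.radians(beam_angle_deg))
```

With a 45-degree beam angle, a screen separation of 1 unit places the intersection coordinate 1 unit from I1 along the first axis.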
  • the position and orientation of the reference structure 622 can be determined based on the intersection points of the light beams L4, L5, and L6 with the screens 638a and 638b.
  • three-dimensional coordinate systems for the clutch and the reference structure are determined using only two screens.
  • the motion capture system includes only two screens and the motion capture system does not include a third screen.
  • the imaging system captures images of only two screens. Some implementations identify intersection points using images captured of only two screens. For example, two intersection points from light beams emitted by a reference structure are identified on an image of the same screen.
  • a light emitter being oriented to emit light in a first direction toward the screen means the light emitter is oriented to emit light in a first direction toward the screen when the light emitter is attached to a patient (or other structure) and positioned for motion tracking with respect to the imaging system.
  • FIG. 7 illustrates a top view of an implementation of a reference structure 730 and an implementation of an imaging system 732.
  • the reference structure 730 is an example of the reference structure 622.
  • the imaging system 732 is an example of the imaging system 602.
  • the reference structure 730 includes a dentition coupling device 734, an extension member 740, and a light source assembly 742.
  • the dentition coupling device 734 is an example of the dentition coupling device 230 and may be similar to the example dentition coupling devices previously described with respect to implementations of the clutch.
  • the light source assembly 742 is an example of the position indicator system 234. In this example, the light source assembly 742 includes light emitters 750a, 750b, and 750c (collectively referred to as light emitters 750).
  • the dentition coupling device 734 is configured to removably couple to the dentition of the patient.
  • the dentition coupling device 734 is coupled to the opposite arch of the patient’s dentition as the clutch (e.g., the dentition coupling device 734 of the reference structure 730 couples to the maxillary arch when a clutch is coupled to the mandibular arch).
  • the dentition coupling device 734 is coupled to the extension member 740 that is configured to extend out through the patient’s mouth when the dentition coupling device 734 is coupled to the patient’s dentition.
  • the extension member 740 may be similar to the extension member 408.
  • the imaging system 732 includes screens 738a and 738b (referred to collectively as screens 738), and cameras 720a and 720b (referred to collectively as cameras 720).
  • the screen 738a is oriented parallel to the screen 738b.
  • the imaging system 732 may also include a screen framework (not shown) that positions the screens 738 with respect to each other.
  • the screen framework may extend beneath the reference structure 730 and couple to the bottoms of the screens 738. Together, the screens 738 and the screen framework are an example of the screen assembly 212.
  • the cameras 720 are an example of the optical sensing assembly 210.
  • the screens 738 may be formed from a translucent material so that the points where the light beams emitted by the light source assembly 742 intersect with the screens 738 may be viewed from outside of the screens 738. Images that include these points of intersection may be recorded by the cameras 720. The motion determining device 206 may then analyze these captured images to determine the points of intersection of the light beams with the screens 738 to determine the location of the light source assembly 742. The position of the light source assembly of a clutch (not shown) may be determined in a similar manner.
  • the cameras 720 are positioned and oriented to capture images of the screens 738.
  • the camera 720a is positioned and oriented to capture images of the screen 738a.
  • the camera 720b is positioned and oriented to capture images of the screen 738b.
  • the cameras 720 are mounted to the screen framework so that the position and orientation of the cameras 720 are fixed with respect to the screens.
  • each of the cameras 720 may be coupled to the screen framework by a camera mounting assembly. In this manner, the position and orientation of the cameras 720 relative to the screens 738 does not change if the screen framework is moved.
  • the screen framework includes a housing (e.g., as shown at 610 in FIG. 6).
  • FIG. 8 illustrates a perspective view of the reference structure 730 disposed between the screens 738 of the imaging system 732.
  • the screens 738 are joined together by a screen framework 736 that positions and orients the screens 738 with respect to one another.
  • the light emitter 750a is emitting a light beam L4, which intersects with the screen 738b at intersection point I4;
  • the light emitter 750b is emitting a light beam L5, which intersects with the screen 738a at intersection point I5;
  • the light emitter 750c is emitting a light beam L6, which intersects with the screen 738a at intersection point I6.
  • the camera 720a captures images of the screen 738a, including the intersection point I5 of the light beam L5 emitted by the light emitter 750b.
  • the camera 720a may capture a video stream of these images.
  • the camera 720b captures images of the screen 738b and the intersection points I4 and I6.
  • the captured images from the cameras 720 are then transmitted to the motion determining device 206.
  • the motion determining device 206 may determine the location of the intersection points I4, I5, and I6, and from those points the location of the light source assembly 742.
  • a point of common intersection for the light beams L4, L5, and L6 is determined based on the location of the intersection points I4, I5, and I6 (e.g., the point at which the light beams intersect or would intersect if extended).
  • the location and orientation of the reference structure 730 relative to the screens 738 can be determined.
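One way to recover the point of common intersection of the beams from their screen intersection points is a least-squares closest point to the three lines. The sketch below is illustrative Python, not the specific method of the disclosure; it assumes each beam's direction is known from the screen geometry and calibration.

```python
import numpy as np

def closest_point_to_lines(origins, directions):
    """Least-squares point nearest to a set of 3-D lines.

    Each line i passes through origins[i] (e.g., a screen intersection
    point such as I4, I5, or I6) with direction directions[i]. Solves
    sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) o_i.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, float)
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the line
        A += P
        b += P @ np.asarray(o, float)
    return np.linalg.solve(A, b)
```

For beams that truly share a common point, the solution is that point; for noisy measurements it is the point minimizing the summed squared distances to the beams.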
  • FIG. 9 is a flowchart of an example process 900 for fabricating a denture based on at least motion data.
  • the process 900 is performed by the system 100 described herein, although one or more blocks in the process 900 can also be performed by any other computing system described throughout this disclosure.
  • the process 900 is described from the perspective of a computer system, which may include one or more of the components described in reference to the system 100 in FIGs. 1A-B.
  • In block 902, the computer system acquires digital patient data, including motion data and a digital dental model.
  • the digital patient data may include imaging data of the patient dentition.
  • the imaging data may be captured using various imaging modalities described herein.
  • the imaging data includes a three-dimensional digital dental model of the patient’s dentition.
  • the three-dimensional digital dental model may be captured using an intraoral scanner.
  • the three-dimensional digital dental model may be captured by scanning a physical impression, or a mold formed from a physical impression, using a three-dimensional scanner.
  • the digital dental model can be generated by the computer system and using at least the motion data that is acquired in block 902.
  • the acquired digital patient data may also include motion data of the patient’s jaw.
  • the motion data may be captured using the motion capture system 200.
  • the motion data may represent the patient’s jaw moving through various jaw movements including, for example, excursive movements and protrusive movements.
  • the motion data may also represent the patient’s jaw position and movement as the patient pronounces specific phonetic sounds such as the “F” sound and the “S” sound.
  • audio or video files may be captured as the patient pronounces the specific sounds.
  • the motion data may map to frames or positions in the video or audio data. Based on sound processing (e.g., audio signal processing) of the audio data or image processing of the video data, various positions in the patient’s speech may be identified and the corresponding frame of the motion data may be identified.
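Mapping a position detected in the audio stream to the corresponding frame of the motion data can be as simple as converting the audio timestamp to a frame index. A minimal Python sketch, assuming a hypothetical fixed motion-capture frame rate (`motion_fps` is an invented parameter):

```python
def motion_frame_for_audio_time(t_seconds, motion_fps=60.0):
    """Map a timestamp found in the audio data (e.g., where signal
    processing detected an 'S' sound) to the nearest motion-data frame
    index, assuming motion frames are sampled at a constant rate."""
    return int(round(t_seconds * motion_fps))
```

Real recordings would also need the clock offset between the audio and motion streams, which is omitted here.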
  • the acquired digital patient data may also include one or more anterior facial images of the patient.
  • the anterior facial images may include two-dimensional images or three-dimensional images.
  • the anterior facial images include an image of the patient smiling and an image of the patient with lips in repose (e.g., relaxed).
  • the anterior facial images may also include videos.
  • the videos may include video of the patient performing various jaw movements such as excursive movements and protrusive movements.
  • the videos may also include video of the patient pronouncing specific phonetic sounds such as sibilants (e.g., the “S” sound) or fricatives (e.g., the “F” sound).
  • the acquired digital patient data may also include other types of patient images captured using imaging modalities such as computed tomography (CT), including cone beam computed tomography (CBCT), ultrasound, and magnetic resonance imaging (MRI).
  • the computer system integrates the digital patient data.
  • the digital patient data may be integrated to a common coordinate system (e.g., positioned relative to the same XYZ axes).
  • Different types of digital patient data may be integrated using different techniques.
  • three-dimensional data sets may be integrated using, for example, an iterative alignment process such as an iterative closest point technique.
  • multiple types of the digital patient data include fiducial markers. The positions of the fiducial markers may be determined from the digital patient data and used to align one set of digital patient data with another.
  • the digital patient data includes two-dimensional images captured with a camera of the image capture system 107.
  • a polygon may be generated within the common coordinate system.
  • the two-dimensional images may be mapped to the polygon.
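When fiducial-marker correspondences between two data sets are known, the rigid transform aligning them into a common coordinate system can be computed in closed form with the Kabsch/Procrustes method, which is also the inner step of a typical iterative-closest-point alignment. An illustrative Python sketch, not the specific technique of the disclosure:

```python
import numpy as np

def rigid_align(src, dst):
    """Kabsch/Procrustes rigid alignment of matched marker positions.

    Returns a rotation R and translation t such that R @ p + t maps a
    source marker p onto its corresponding destination marker.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

At least three non-collinear markers are required for a unique rotation; fiducials visible across modalities (e.g., in a CBCT scan and an intraoral scan) would supply the correspondences.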
  • a vertical dimension of occlusion and an occlusal plane position and orientation is determined for the patient.
  • the determined vertical dimension of occlusion indicates the desired position of the patient’s mandible and maxilla when the patient’s jaw is at rest.
  • the vertical dimension of occlusion may correspond to a total distance between edentulous ridges to accommodate dentures with a desired amount of occlusal open space when the patient is at rest.
  • the vertical dimension of occlusion influences the function, comfort, and aesthetics of dentures.
  • the determined occlusal plane may correspond to a plane disposed between the patient’s maxilla and mandible that approximately corresponds to where the occlusal surfaces of the patient’s teeth meet.
  • the occlusal plane may, for example, be positioned at a desired location of the incisal edge of the patient’s upper central incisors, which may be determined from photos of the patient or using a gothic arch tracer.
  • the occlusal plane may be oriented based on the motion data. Although often referred to as an occlusal plane in the denture and dental fields, the occlusal plane need not be precisely planar and may vary from a plane to follow the curve of the patient’s lips.
  • the vertical dimension of occlusion may be specified by a care provider such as a dentist or physician.
  • the vertical dimension of occlusion may also be determined based, at least in part, on motion data of the digital patient data. For example, motion data captured while the patient pronounces specific sounds such as sibilants (e.g., the “S” sound) or fricatives (e.g., the “F” sound) may be used.
  • a desired vertical dimension of occlusion may be determined from the relative positions of the maxilla and mandible as the sounds are pronounced.
  • the vertical dimension of occlusion may also be determined from a two-dimensional facial image of the digital patient data.
  • the occlusal plane may, for example, be determined based on applying a ratio to the vertical dimension of occlusion. In some implementations, the occlusal plane may be determined based on the two-dimensional facial image of the digital patient data. For example, the occlusal plane may be positioned so as to align the incisal edges of the upper central incisors with respect to the patient’s lips.
  • the digital dental model of the digital patient data is positioned based on the position and orientation of the occlusal plane.
  • a portion of the digital dental model representing the mandibular dental arch may be positioned based on the motion data so as to be positioned at the determined vertical dimension with respect to the maxillary dental arch and so that the denture teeth on the mandibular arch align with the occlusal plane.
  • a frame of the motion data that positions the mandibular dental arch at the determined vertical dimension is identified by the computer system.
  • the mandibular dental arch is rotated about a hinge axis to open to the determined vertical dimension of occlusion.
  • the denture design system 116 includes a user interface that displays the digital dental model, the occlusal plane, or both.
  • the user interface may be configured to receive user input to adjust the vertical dimension of occlusion or the position of the occlusal plane.
  • the user interface may be configured to receive a drag (e.g., click-and-drag or touch-and-drag) input to interactively move the mandibular arch of the digital dental model up or down along an arch defined by the motion data or a hinge axis inferred from the motion data.
  • the user interface may be configured to interactively move the occlusal plane along the arch between the mandibular arch and maxillary arch of the digital dental model.
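Two of the steps above can be sketched in Python: selecting the motion-data frame whose recorded inter-arch distance is closest to the determined vertical dimension, and rotating the mandibular arch about a hinge axis (Rodrigues' rotation formula). The `(index, vertical_dimension_mm)` frame representation is a hypothetical simplification of the motion data:

```python
import numpy as np

def frame_at_vertical_dimension(frames, target_vdo):
    """Pick the motion-data frame whose vertical dimension is closest to
    the determined vertical dimension of occlusion. `frames` is an
    assumed list of (frame_index, vertical_dimension_mm) tuples."""
    return min(frames, key=lambda f: abs(f[1] - target_vdo))[0]

def open_about_hinge(points, hinge_origin, hinge_axis, angle_rad):
    """Rotate mandibular-arch points about a hinge axis (Rodrigues'
    formula) to open the jaw toward the determined vertical dimension."""
    k = np.asarray(hinge_axis, float)
    k /= np.linalg.norm(k)
    p = np.asarray(points, float) - hinge_origin
    return (p * np.cos(angle_rad)
            + np.cross(k, p) * np.sin(angle_rad)
            + np.outer(p @ k, k) * (1 - np.cos(angle_rad))) + hinge_origin
```

In practice the hinge axis would itself be inferred from the captured motion data rather than supplied directly.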
  • the computer system generates an occlusal guidance surface based on the motion data.
  • the occlusal guidance surface may be used to guide the positioning of denture teeth on one of the dental arches.
  • the occlusal guidance surface may be generated for one or both of the mandibular arch and the maxillary arch.
  • the occlusal guidance surface is generated for a dental arch by sweeping (or moving) at least a portion of the opposing dental arch according to the motion data. For example, a portion of the opposing dental arch may be swept through one or more of excursive and protrusive movements based on the motion data. In some implementations, the portion of the opposing dental arch may be swept through all of the movements represented in the motion data.
  • a midline polyline segment may be swept according to the motion data.
  • the midline polyline segment may be a cross-section of the opposing dentition at the midline (e.g., middle of the dental arch).
  • the cross-section may be generated by slicing or intersecting a vertically oriented plane through the opposing dentition.
  • the midline polyline segment is not directly based on the opposing dentition.
  • the midline polyline segment may be a line segment on the occlusal plane that extends in the anterior-posterior direction at the midline. As the portion of the opposing dentition is swept according to the motion data, the occlusal guidance surface is generated by the computer system.
  • a midline polyline segment may be duplicated in multiple locations according to the motion data (e.g., the midline polyline segment may be duplicated every 25 microns, every 50 microns, every 100 microns, or another distance).
  • the adjacent midline polyline segments may then be joined to form a surface.
  • a polygonal surface may be deformed based on the swept midline polyline segment.
  • the polygonal surface may initially be a flat surface that is positioned at the determined occlusal plane location. As the midline polyline segment is swept through different locations, the polygonal surface may be deformed vertically to conform to the midline polyline segment.
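The duplicate-and-join variant of the sweep can be sketched as follows: the midline polyline is copied at each motion-data position and adjacent copies are stitched into triangles. This is illustrative Python with simplified inputs (per-frame translations only; real motion data would include rotation as well):

```python
import numpy as np

def sweep_guidance_surface(polyline, offsets):
    """Duplicate a midline polyline at each motion-data offset and join
    adjacent copies into triangles, producing a swept surface mesh.

    polyline: (N, 3) points of the midline cross-section.
    offsets:  (M, 3) translations taken from the motion data.
    Returns (vertices, triangles) where triangles index into vertices.
    """
    polyline = np.asarray(polyline, float)
    rows = [polyline + np.asarray(o, float) for o in offsets]
    verts = np.vstack(rows)
    n = len(polyline)
    tris = []
    for r in range(len(offsets) - 1):
        for i in range(n - 1):
            # Quad between copy r and copy r+1, split into two triangles.
            a, b = r * n + i, r * n + i + 1
            c, d = (r + 1) * n + i, (r + 1) * n + i + 1
            tris += [(a, b, c), (b, d, c)]
    return verts, np.array(tris)
```

An M-copy sweep of an N-point polyline yields M*N vertices and 2*(M-1)*(N-1) triangles.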
  • the computer system can position digital denture teeth based on the occlusal guidance surface in block 912.
  • the digital denture teeth may be loaded from a library of digital denture teeth.
  • Some implementations include multiple libraries of denture teeth.
  • the digital denture teeth libraries may vary functionally, aesthetically, or based on manufacturer.
  • the digital denture teeth may include labels for anatomical landmarks such as cusps, marginal ridges, incisal edges, fossa, grooves, base boundaries, or other anatomical landmarks. These labels may be used to automatically position the digital denture teeth with respect to one another and digital denture teeth on the opposing dentition.
  • a digital representation of a denture base is generated by the computer system.
  • a soft-tissue boundary curve can be generated based on the digital dental model.
  • the soft-tissue boundary curve represents the edge of the denture base.
  • the soft-tissue boundary curve may surround the edentulous ridge.
  • the soft-tissue boundary curve may be represented by a spline curve.
  • Some implementations include a user interface through which a user may adjust the spline curve.
  • a soft-tissue interface surface may be generated based on the soft-tissue boundary curve and the digital dental model.
  • a portion of the digital dental model that is enclosed by the soft-tissue boundary curve may be offset (e.g., by 10 microns, 25 microns, 50 microns, 100 microns, or another amount) to form the soft-tissue interface surface.
  • the soft-tissue interface surface may be an intaglio surface (i.e., the surface of the denture that touches the gum tissue). On upper dentures, the intaglio surface may be a posterior palatal seal.
  • the offset may provide space for a dental adhesive that can secure the denture, when fabricated, to the patient’s edentulous ridge. Some implementations are configured to fit to the patient’s edentulous ridge via suction or friction. In these embodiments, the soft tissue interface surface may not be offset from the digital dental model.
  • Tooth boundary curves may also be identified for each of the positioned digital denture teeth.
  • the tooth boundary curves may be identified based, for example, on labels stored with each of the digital denture teeth that identify the portion of the tooth that should be embedded in the denture base.
  • a surface may be formed to join the outer edges of the tooth boundary curves to the soft-tissue interface surface.
  • Sockets may be generated within the boundary curves. The sockets may be shaped to receive the denture teeth.
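The offset used to form the soft-tissue interface (intaglio) surface amounts to displacing the enclosed portion of the model along its vertex normals. A minimal Python sketch, assuming per-vertex normals are available and working in millimeters (e.g., 0.05 mm = 50 microns):

```python
import numpy as np

def offset_surface(vertices, normals, offset_mm=0.05):
    """Offset the tissue-facing portion of the digital dental model along
    its vertex normals to form the intaglio surface, leaving room for a
    dental adhesive as described above. Normals need not be pre-normalized."""
    n = np.asarray(normals, float)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    return np.asarray(vertices, float) + offset_mm * n
```

For suction- or friction-fit dentures the offset would simply be zero, matching the implementations noted above that do not offset the surface.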
  • the computer system determines color and/or texture data for (i) the digital denture teeth and (ii) the denture base based on at least the digital patient data. Refer to FIGs. 10-13 for further discussion about determining the color and/or texture data for (i)-(ii).
  • dentures for the particular patient can be fabricated.
  • the denture base may be fabricated based on the digital representation of the denture base and the color and/or texture data for the denture base.
  • the denture base may be fabricated using a rapid fabrication technology such as three-dimensional printing or computer numerically controlled (CNC) milling, or any other fabricating machines described in reference to FIGs. 1A-B.
  • the denture base may be fabricated from acrylic or another biocompatible material.
  • the denture base may also be made from a material that has aesthetic properties that substantially match gum tissue.
  • pre-manufactured denture teeth that match the digital denture teeth library may be placed and bonded into the sockets of the denture base.
  • the denture teeth may also be manufactured using rapid fabrication technology.
  • the denture teeth may be fabricated using a three-dimensional printer or a CNC mill, as described in reference to FIGs. 1A-B.
  • the denture teeth may be formed from a biocompatible material that has aesthetic properties that are similar to the aesthetic properties of teeth.
  • the digital denture teeth and the denture base can be printed as a single unit by a mixed material three-dimensional printer.
  • one or both of the denture base and the denture teeth may be cast using a wax casting process with a pattern fabricated by a three-dimensional printer or CNC mill.
  • interferences between the digital denture teeth can be identified by moving, by the computer system, the dental arches according to the motion data.
  • the digital models of the denture teeth may be adjusted, by the computer system, to remove portions of the digital denture teeth models that interfere before the denture teeth are fabricated.
  • a CNC mill may be used to remove interfering regions of the pre-manufactured denture teeth after they are placed in the denture base.
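A simplified way to flag interfering regions is to compare the opposing occlusal surfaces on a shared XY sample grid at each jaw position taken from the motion data; positive penetration depths mark material to remove from the digital denture teeth. This height-field check is an illustrative simplification, not the disclosed method:

```python
import numpy as np

def interference_depths(lower_heights, upper_heights):
    """Occlusal interference check on a shared XY sample grid.

    lower_heights: occlusal-surface heights of the lower arch, positioned
                   per a frame of the motion data.
    upper_heights: opposing-surface heights of the upper arch.
    Returns per-sample penetration depth (0 where there is no contact).
    """
    lower = np.asarray(lower_heights, float)
    upper = np.asarray(upper_heights, float)
    return np.maximum(lower - upper, 0.0)
```

Running this over every frame of the motion data and taking the per-sample maximum would give the total depth to relieve before (or after) fabrication.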
  • FIGs. 10A-B illustrate a flowchart of a process 1000 for determining teeth and gingiva color data for a digital dental model of a patient.
  • Output from the process 1000 can be used to model, design, and manufacture dentures for the patient.
  • the teeth color data and the gingiva color data can be used to fabricate realistic dentures that closely match or otherwise resemble color of the patient’s actual teeth and/or gingiva.
  • Although the process 1000 is described from the perspective of designing and manufacturing dentures, the disclosed technology may also be used to design and manufacture crowns, bridges, and other types of dental appliances.
  • the process 1000 can also be performed to determine color data more particularly for one or more of the following: anterior teeth versus posterior teeth, front or external-facing side of teeth versus back or internal-facing side of teeth, and front or external-facing side of gingiva versus back or internal-facing side of gingiva.
  • the process 1000 can be performed by the computer system 152 or any other components described in reference to FIGs. 1A-B, including but not limited to the denture design system 116 and/or the rapid fabrication machine 119.
  • the process 1000 can also be performed by one or more other computing systems, devices, computers, networks, cloud-based systems, and/or cloud-based services. For illustrative purposes, the process 1000 is described from the perspective of a computer system.
  • the computer system can receive patient oral scan data in block 1002.
  • the oral scan data can include any of the image and/or motion data described throughout this disclosure (refer to FIGs. 1A-B).
  • the oral scan data can include a three-dimensional image or a two-dimensional image of the patient’s mouth.
  • the oral scan data can also include color information or other color data/values corresponding to different portions or regions of an oral scan of the patient’s mouth.
  • Image or other processing techniques can be performed, as described below, to extract one or more of the color information from the data and use the extracted color information to determine color values for designing dentures for the patient.
  • the computer system can perform a color calibration process on the received data. For example, the computer system can adjust or otherwise correct white balance in image data of the patient’s mouth.
  • the white balance can be adjusted using a greyscale card, which may typically be performed as part of a color correction process.
  • One or more other color calibration techniques can be performed.
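Grey-card white balancing of the kind described above can be approximated by per-channel gains that make the sampled card patch neutral (a von Kries-style correction). Illustrative Python; the image format and the patch sampling are assumptions:

```python
import numpy as np

def white_balance(image, grey_patch):
    """Scale each RGB channel so the sampled greyscale-card patch becomes
    neutral. `image` is an HxWx3 float array in [0, 1]; `grey_patch` is
    the mean RGB measured on the card region of the same capture."""
    patch = np.asarray(grey_patch, float)
    gains = patch.mean() / patch          # per-channel correction gains
    return np.clip(np.asarray(image, float) * gains, 0.0, 1.0)
```

Applying the same gains to the whole capture corrects the color cast of the scanner's illuminant before zone colors are sampled.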
  • the computer system can identify at least one zone in the calibrated data for at least one of a tooth, a set of teeth, an upper gingiva, and a lower gingiva (block 1006).
  • the computer system can identify zones associated with a particular tooth, zones associated with the set of teeth, zones associated with the upper gingiva, and/or zones associated with the lower gingiva.
  • Each of the abovementioned parts of the patient’s mouth can be associated with multiple zones.
  • the zone associations can be predetermined and stored in a data store as mapping data.
  • the computer system can then retrieve this mapping data, process the calibrated data to identify the particular tooth, set of teeth, upper gingiva, and/or lower gingiva, then map the predetermined zones to the identified part(s) of the patient’s mouth using the mapping data.
  • the computer system can apply a model (e.g., machine learning-trained model) that is configured to identify the abovementioned parts of the patient’s mouth and identify or otherwise map the predetermined zones to those identified parts of the patient’s mouth.
  • the computer system can additionally or alternatively use object detection techniques to identify the abovementioned parts of the patient’s mouth, optionally generate a bounding box around each of the identified parts, and then map the zones onto the identified part in the bounding box that correspond to the identified part.
  • a tooth, set of teeth, upper gingiva, and lower gingiva can each have a different quantity of zones.
  • the quantity of zones can vary based on colors, textures, and other quality characteristics that may be needed or preferred to make the particular part of the patient’s mouth appear realistic.
  • a single tooth may have fewer zones than a set of teeth because additional color effects may be required across a greater surface area of visible teeth (e.g., regarding the set of teeth when the patient smiles) versus a single tooth (e.g., a molar at the back of the patient’s mouth).
  • the zones can be predefined as portions of the particular part of the patient’s mouth having nonphysical boundaries.
  • each tooth can be assigned 3 zones.
  • a 3-zone gradient of color can be applied to the tooth, where each color is assigned to each zone of the tooth to make the tooth appear realistic.
  • the gingiva can have 3 zones, where a color is assigned to each zone and a gradient or other blending effect is applied to make the colors in the gingiva appear more realistic.
  • the computer system can identify layers for each zone in block 1008. Similarly as described above in reference to block 1006, the computer system can apply mapping data, one or more rules, object detection techniques, and/or machine learning-trained models to the calibrated data to then identify layers per zone. Sometimes, the layers can be identified as part of identifying the zones in block 1006. In some implementations, the layers can be the same as the zones that are identified in block 1006. Refer to FIGs. 15-18 for further discussion about identifying layers for each zone and/or for each tooth more generally.
  • multiple layers can be identified in each zone for a tooth (or across multiple zones for the tooth).
  • layers of color having varying characteristics (e.g., brightness, opacity, translucency) can make the tooth appear more realistic. For example, at some angles light may pass through the tooth, at other angles light may reflect back off the tooth, and at still other angles some light may be absorbed or otherwise trapped in the tooth.
  • Layering of colors and their respective characteristics on the tooth can therefore compensate for effects of light transmission and light reflection on the tooth. Layering of colors and their respective characteristics can also make the tooth appear to have more depth, thereby making the tooth more realistic.
  • the computer system can identify a statistical color value for each layer in each zone.
  • the computer system can sample a portion of the calibrated oral scan data that already has color values associated with it.
  • the oral scan data can include a three-dimensional or two-dimensional image of the patient’s mouth.
  • the computer system can use object detection techniques and/or image processing techniques to identify a portion of the image of the patient’s mouth that corresponds to a particular part of the patient’s mouth having a particular zone, such as an enamel layer of a tooth.
  • the computer system can then identify and extract color values in the identified portion of the image and determine a statistical color value for the particular zone based on the extracted color values in the identified portion of the image.
  • the computer system can use horizontal and/or vertical slicing techniques of an entire arch to identify color values across a slice.
  • the identified color values across the slice can be averaged to determine a statistical color value for a desired zone of the patient’s teeth associated with that dental arch.
  • the computer system can use horizontal and/or vertical slicing techniques on a particular tooth to identify the statistical color values for the zones of that tooth.
  • the statistical color value can be an average value of colors in each layer in each zone (e.g., or, in some implementations, an average color value in each zone).
  • the statistical color value can be some other distribution of color values in which outlier color values that are identified in a particular layer and/or in a particular zone may be removed or otherwise ignored.
  • the statistical color value can also be a median color value for each layer in each zone.
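A minimal sketch of an outlier-tolerant statistic like the one described above might look as follows; the z-score threshold is a hypothetical choice, not a value specified by the disclosure:

```python
import numpy as np

def robust_zone_color(samples: np.ndarray, z_thresh: float = 2.0) -> np.ndarray:
    """Compute a per-channel statistical color for a zone, discarding
    samples more than z_thresh standard deviations from the mean.

    samples: N x 3 array of RGB color samples taken from one zone/layer.
    """
    mean = samples.mean(axis=0)
    std = samples.std(axis=0) + 1e-9  # guard against zero variance
    z = np.abs((samples - mean) / std)
    keep = (z < z_thresh).all(axis=1)  # drop rows that are outliers in any channel
    return samples[keep].mean(axis=0)
```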
  • the statistical color value can correspond to more than one layer in a zone for a particular part of the patient’s mouth, such as a particular tooth, set of teeth, upper gingiva, lower gingiva, exterior-facing side of teeth, interior-facing side of teeth, etc. Additionally or alternatively, the statistical color value can correspond to more than one zone in the calibrated data.
  • each canine tooth in the patient’s mouth can have the same zones. When the statistical color values are determined for the zones in one canine tooth, those values can also be assigned to the same zones in the other canine teeth.
  • the computer system may identify the statistical color value for one zone or one layer in one zone, and then use that statistical color value to derive color values for other zones and/or other layers in other zones.
  • the derived color values can then be blended with the identified statistical color value such that resulting dentures appear more realistic.
  • each identified statistical color value can be associated with a common dental color or shade. That association can be saved and used for manufacturing dentures that match the identified statistical color values for the particular patient’s mouth. Accordingly, each zone for each tooth, set of teeth, and/or gingiva can be assigned a specific predetermined, prescribed dental color.
  • the predetermined dental color can include a variety of dental colors that are typically used when generating dentures.
  • the dental colors can include, but are not limited to, A1, A2, A3, A4, B1, B2, B3, B4, C1, C2, C3, C4, D2, D3, and D4. These dental colors indicate what color or colors should be used for printing or manufacturing the dentures.
  • the computer system may assign this statistical color value to the predetermined dental color defined as A2. This assignment can be made by following one or more rules and/or mapping data that identify statistical color values (and/or ranges of statistical color values) that correspond to the predetermined dental colors.
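The rule/mapping step could be approximated by a nearest-neighbor lookup in color space. The shade anchor RGB values below are illustrative placeholders, not calibrated shade coordinates:

```python
import math

# Hypothetical sRGB anchor values for a few shades; real mapping data would
# come from calibrated measurements, not these illustrative numbers.
SHADE_ANCHORS = {
    "A1": (230, 220, 205),
    "A2": (222, 208, 188),
    "A3": (214, 196, 170),
    "B1": (235, 228, 212),
}

def nearest_shade(color, anchors=SHADE_ANCHORS):
    """Assign a statistical color value to the closest predetermined
    dental shade by Euclidean distance in RGB space."""
    return min(anchors, key=lambda shade: math.dist(color, anchors[shade]))
```

A production mapping might instead use per-shade ranges, or a perceptual color space such as CIELAB, rather than raw RGB distance.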
  • the computer system can identify and assign at least one texture for each layer in each zone (block 1014).
  • teeth and gingiva have known or expected texture(s).
  • These known or expected textures can be stored in a library (e.g., texture library), such as a data store or other type of data repository described herein.
  • the computer system can retrieve, from the library, at least one texture value or other texture data that is associated with the identified zone and/or identified layer.
  • the computer system can map the texture value to the identified zone and/or identified layer.
  • any of the techniques described herein to identify and assign color values may also apply to identifying and assigning texture values.
  • the computer system generates a digital model of the patient’s mouth with at least the predetermined dental color(s) mapped to the corresponding layers in zones in the digital model.
  • the model can be generated earlier in the process, such as after the patient oral scan data is received in block 1002.
  • the blocks 1004-1014 can be performed using the digital model, which can have the color information from the oral scan data mapped thereto.
  • the model can already be generated for the particular patient in some implementations, and the computer system may simply retrieve the model from a data store, and then map the predetermined dental color(s) to corresponding layers in zones in the digital model for the patient.
  • the digital model of the patient’s mouth can be a three-dimensional or two-dimensional representation of the patient’s teeth and gingiva.
  • the digital model can be generated using any of the techniques described in reference to FIGs. 1B-9.
  • the computer system can adjust the predetermined dental color(s) in one or more layers in one or more zones in the digital model (block 1018). Adjusting the colors in the digital model can make the resulting dentures appear more realistic with an appearance that is more lifelike and smooth transition between colors in the layers and/or zones. For example, the computer system can blend one or more of the colors across one or more layers and/or zones in the digital model. As another example, the computer system can adjust a gradient of one or more of the colors in one or more layers and/or zones and/or across one or more layers and/or zones. As yet another example, the computer system can adjust brightness of one or more of the colors. The computer system may also adjust opacity, transparency, and/or translucency of one or more of the colors. Any of these adjustments can be determined and made based on one or more rulesets.
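One common way to smooth transitions between zone colors, consistent with the blending and gradient adjustments described above, is a linear ramp between two adjacent colors; this is a generic sketch, not the disclosed ruleset:

```python
def blend_zone_boundary(color_a, color_b, steps=4):
    """Produce a linear ramp of colors from color_a to color_b so the
    transition between two adjacent zones reads as a gradient rather
    than a hard edge. Requires steps >= 2."""
    ramp = []
    for i in range(steps):
        t = i / (steps - 1)  # 0.0 at color_a, 1.0 at color_b
        ramp.append(tuple(round(a + t * (b - a)) for a, b in zip(color_a, color_b)))
    return ramp
```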
  • the computer system may modify and/or create various shadow effects for any of the layers, zones, teeth, and/or gingiva represented in the digital model.
  • the computer system can assign an additional zone on the fly to one or more teeth represented in the digital model based on how proximate a first tooth is to a second tooth or some other landmark represented in the digital model (where the landmark can be defined according to a library of annotations and/or landmarks).
  • the first tooth can be identified as having 3 zones, where each zone is assigned a different color having unique characteristics (e.g., brightness, gradient, opacity).
  • the second tooth immediately adjacent one side of the first tooth and a third tooth immediately adjacent the other side of the first tooth can be identified as positioned within a threshold distance from each other and/or the first tooth, and therefore may be assigned the same colors identified in the 3 zones of the first tooth.
  • the second and third teeth can also be identified as having 3 zones, each with the same colors and corresponding characteristics as identified for the first tooth.
  • any other teeth that exceed the threshold distance from the first tooth can be identified as having different zones, different quantities of zones, and/or different colors for each of the zones.
  • any of these adjustments described in block 1018 can be made earlier in the process 1000, such as in block 1010 (identifying color values) and/or block 1012 (assigning the color values to predetermined dental colors). Additionally or alternatively, once the predetermined dental colors are mapped to the teeth in the digital dental model in block 1016, the computer system can make additional adjustments to ensure that the teeth appear realistic and/or have color and/or texture values that closely resemble the actual color and/or texture of the patient’s teeth in the oral scan data.
  • the computer system can simulate ray-tracing of the digital model with the predetermined dental color(s) to identify one or more deviations in color that may exceed threshold color values (block 1020).
  • the computer system can optionally adjust at least one of the predetermined dental color(s) in the digital model based on the identified deviation(s) in color (block 1022).
  • Simulating ray-tracing in a CAD environment can be beneficial to automatically identify how light reflects off of or otherwise appears in the patient’s mouth. Such simulating can be used to identify areas in the patient’s mouth that may appear more shaded than others. This type of information can be useful to determine brightness adjustments, for example, to one or more colors in the identified areas so that when similar light illuminates the patient’s dentures, the dentures appear realistic.
  • the simulated ray-tracing may identify dark triangles or other geometric shapes that represent dark or shaded zones in the patient’s mouth under certain lighting conditions.
  • the computer system can then determine adjustments to characteristics of colors identified in those dark or shaded zones to counteract the shading that would appear when the patient wears the resulting dentures in the same or similar lighting conditions.
  • the simulated ray-tracing may indicate that the patient’s upper canines appear more shaded when light shines directly on the front of the patient’s mouth. Therefore, the computer system can determine adjustments to enhance (e.g., brightness of) the color of the upper canines (or at least one of the colors in the layers and/or zones of the canines) so that such shading does not appear when the resulting dentures are worn by the patient in the same or similar lighting conditions.
  • the computer system can iteratively simulate the ray-tracing once such adjustments are determined and made in order to identify whether the adjustments are sufficient to reduce or otherwise eliminate the effects identified from the previously simulated ray-tracing. In other words, the ray-tracing can be simulated and adjustments can be made until deviations in the colors no longer exceed the threshold color values (and/or the deviations in colors are within expected or normal threshold ranges of color deviations).
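The iterate-until-within-threshold loop described above can be sketched as follows, with the ray-tracing step stubbed out as a caller-supplied `simulate` function; the step size and tolerance are hypothetical:

```python
def converge_brightness(initial, simulate, target, tol=4.0, step=0.05, max_iter=50):
    """Iteratively brighten (or dim) a zone color until the simulated
    shaded appearance is within `tol` of the target luminance.

    `simulate` stands in for the ray-tracing step: it takes a color and
    returns the luminance observed under the modeled lighting.
    """
    color = list(initial)
    for _ in range(max_iter):
        deviation = target - simulate(color)
        if abs(deviation) <= tol:
            break
        # Scale all channels up when too dark, down when too bright.
        factor = 1 + step if deviation > 0 else 1 - step
        color = [min(255.0, c * factor) for c in color]
    return color
```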
  • results from the simulated ray-tracing may also be used to add one or more colors to layers and/or zones, especially based on perspective of viewing the teeth.
  • the results from the ray-tracing may also be used to remove one or more colors from layers and/or zones.
  • the results may be used to change an intensity of one or more colors in one or more layers and/or zones.
  • the computer system returns output including the digital model and at least color data corresponding to the mapped predetermined dental color(s).
  • the output can indicate what colors are mapped to what layers and/or zones for which teeth and/or gingiva.
  • the output can also indicate characteristics of each color, including but not limited to brightness, opacity, gradient, intensity, translucency, blend with other colors, etc.
  • the output may also include texture data that was determined and applied in the optional block 1014.
  • the output can be a single file containing the digital model in addition to data about each zone, each layer per zone, associated color data, and/or associated texture data. Sometimes, the output may also include instructions for manufacturing/fabricating dentures based on the digital model.
  • the file can be an STL file, as described herein.
  • OBJ, PLY, 3MF, and/or other similar file types can be used, which allow for color and other complex data to be stored, retrieved, and accessed.
  • file types such as OBJ, PLY, and 3MF can include not only color data but also library data such as annotations, splines, color gradient, and landmark data.
  • the color data can be assigned to particular triangles, vertices, and other geometric shapes in the digital model, which is analogous to how computer graphics are rendered.
  • the output can include multiple files (e.g., a first file containing the digital model, a second file containing the color data, a third file containing the texture data).
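As an example of a color-carrying mesh format mentioned above, a minimal ASCII PLY file can store an RGB color per vertex alongside the geometry; this is a generic PLY sketch, not the file layout used by the disclosure:

```python
def write_ply_with_vertex_colors(path, vertices, faces):
    """Write a minimal ASCII PLY file in which each vertex carries an
    RGB color, one generic way per-zone color data can travel with the
    mesh geometry in a single file.

    vertices: sequence of (x, y, z, r, g, b) tuples.
    faces: sequence of vertex-index triples.
    """
    lines = [
        "ply",
        "format ascii 1.0",
        f"element vertex {len(vertices)}",
        "property float x", "property float y", "property float z",
        "property uchar red", "property uchar green", "property uchar blue",
        f"element face {len(faces)}",
        "property list uchar int vertex_indices",
        "end_header",
    ]
    for x, y, z, r, g, b in vertices:
        lines.append(f"{x} {y} {z} {r} {g} {b}")
    for face in faces:
        lines.append("3 " + " ".join(str(i) for i in face))
    with open(path, "w") as fh:
        fh.write("\n".join(lines) + "\n")
```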
  • the computer system can store the output in a library in block 1026.
  • the output can then be retrieved at a later time for manufacturing and printing dentures for the particular patient.
  • the output can be added to a library for determining color/texture for dentures of other patients.
  • the computer system accesses and/or generates a denture model for the patient.
  • the denture model can be generated using any of the techniques described herein.
  • the denture model can represent dentures to be manufactured for the particular patient.
  • the denture model can also indicate, for the dentures, color data and/or texture data (as determined in the process 1000) that is then used for manufacturing the dentures.
  • the denture model may be different than the patient digital model.
  • the digital model can model all the patient’s teeth and mouth based on motion, images, and other data collected during an oral scan, as part of the oral scan data.
  • the denture model can be a three-dimensional representation of dentures to be designed and manufactured for the particular patient and based on the patient’s teeth and mouth represented in the digital model of the patient’s mouth. Refer to FIGs. 1B-9 for further discussion about generating the denture model.
  • the computer system applies the mapped predetermined dental color(s) in layers to the denture model for the particular patient in block 1030.
  • the dentures can be manufactured using the color data, texture data, and any adjustments made thereto that were determined in the above process 1000 to make the teeth appear realistic.
  • block 1030 can be performed as part of block 1028.
  • the computer system can optionally fabricate a pair of dentures for the patient based on the denture model.
  • the computer system can generate instructions for manufacturing the dentures.
  • the instructions can be stored, as described herein, then later retrieved when the dentures are to be manufactured.
  • the instructions can also be transmitted to a rapid fabrication machine as described throughout this disclosure, or any other denture-printing or manufacturing device described herein.
  • the instructions when received at the machine, can cause the machine to automatically print and manufacture the dentures according to the instructions, the output, and any other manufacturing information provided in the instructions.
  • the instructions or the output can be transmitted to a computing device of a relevant user, such as a dentist, and outputted in a graphical user interface (GUI) display at the computing device.
  • FIG. 11A is a flowchart of a process 1100 for determining color data (and optionally texture data) for gingiva in a digital dental model of a patient.
  • Although the process 1100 is described generally for determining color data of gingiva, the process 1100 can also be performed more specifically to determine color data for anterior gingiva and color data for posterior gingiva.
  • the anterior gingiva may require more detail in color and/or texture than the posterior gingiva, especially since the anterior gingiva is front-facing and therefore more visible than the posterior gingiva.
  • a color and/or texture library may include designs, annotations, landmarks, color data, and/or texture data that are specific for anterior gingiva and other designs, annotations, landmarks, color data, and/or texture data that are specific for posterior gingiva. Refer to FIGs. 15-18 for further discussion about determining color data for the gingiva, such as determining the color data for different layers or zones of the gingiva.
  • the process 1100 can be performed by the computer system 152 or any other components described in reference to FIGs. 1A-B, including but not limited to the denture design system 116 and/or the rapid fabrication machine 119.
  • the process 1100 can also be performed by one or more other computing systems, devices, computers, networks, cloud-based systems, and/or cloud-based services. For illustrative purposes, the process 1100 is described from the perspective of a computer system.
  • the computer system can receive a digital dental model of a patient’s mouth in block 1102. Refer to FIGs. 1B-10 for further discussion about the digital dental model.
  • the computer system can identify annotations and/or landmarks for gingiva in the model. Coloring gingiva can be achieved by relating colors to annotations and landmarks identified in the digital dental model.
  • the annotations and landmarks can be assigned and stored in a library.
  • the annotations and landmarks can be specific to the particular patient’s mouth. Sometimes, the annotations and landmarks can be generic and therefore associated with different patients’ mouths.
  • the computer system can map a first color and/or texture to an annotated portion of the gingiva that corresponds to socket halos (block 1106).
  • the computer system may map the first color and/or texture as a top surface color and/or texture (block 1108).
  • When mapping color for the anterior gingiva, the first color can be a lightest pink tone.
  • When mapping color for the posterior gingiva, the first color can be a light pink tone.
  • the computer system can map a second color and/or texture to an annotated portion of the gingiva that corresponds to each root eminence (block 1110).
  • the computer system can map the second color and/or texture as a middle surface color and/or texture (block 1112).
  • This mapping may appear best for central incisors and canines, which may include teeth numbers 6, 8, 9, and 11.
  • the second color can be a pink tone that is darker than the first color.
  • When mapping color for the posterior gingiva, the second color can be a pink tone that is darker than the first color and that extends towards the annotation points identified on the posterior gingiva.
  • the computer system can map a third color and/or texture to an annotated portion of the gingiva that corresponds to portions of the gingiva between one or more annotations (block 1114).
  • the computer system may map the third color and/or texture as a bottom surface color and/or texture (block 1116).
  • the third color can be darkest pinks, purples, and/or blues that may appear in a stamped texture between the annotation points of teeth numbers that include, but are not limited to, 6, 8, 9, and 11.
  • For the posterior gingiva, the third color can be a darkest pink that is mapped in between the annotation points identified on the posterior gingiva.
  • the blocks 1106-1116 can be performed in any order. Sometimes, one or more of the blocks 1106-1116 may be performed in parallel.
  • Although the process 1100 is described in reference to mapping 3 colors and/or textures to the digital model for the patient’s gingiva, the process 1100 can also be performed to map any other quantity of colors and/or textures to the digital model.
  • the quantity of colors and/or textures may vary depending on how many zones and/or layers are identified in the digital model of the patient’s mouth for the gingiva. For example, the example process 1100 maps 3 colors, where each color corresponds to a different zone identified for the patient’s gingiva.
  • 4 zones can be identified for the gingiva in the digital model, and then 4 colors and/or textures can be mapped to the zones using the techniques described in the process 1100.
  • the colors that are mapped in blocks 1106-1116 can obey various fundamental rules of color dominance, opacity, and/or order preference such that the gingiva has a realistic appearance when manufactured as part of dentures.
  • the lightest pink color, or the first color can always be used as a surface color (e.g., outermost surface) for the anterior gingiva.
  • the darker pink color, or the second color can always be layered beneath the lightest pink color as a middle layer.
  • the darkest pink color, or the third color may always be layered beneath the darker pink color.
  • Any additional colors such as purples or blues (which can be identified as fourth and fifth colors) may always be layered beneath the abovementioned colors unless the additional colors represent veins or vessels in the anterior gingiva. If the additional colors represent veins or vessels in the anterior gingiva, then such additional colors may be mapped to one or more different layers of the anterior gingiva between one or more of the abovementioned first, second, and third colors.
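The light-to-dark layering rules above can be approximated by sorting candidate gingiva colors by luminance so the lightest pink always surfaces; this sketch uses the standard Rec. 709 luminance weights and ignores the vein/vessel exception, which would need separate handling:

```python
def order_gingiva_layers(colors):
    """Order gingiva colors outermost-first by luminance so that the
    lightest pink is the surface layer and darker tones sit beneath it."""
    def luminance(rgb):
        r, g, b = rgb
        # Rec. 709 weights approximate perceived brightness.
        return 0.2126 * r + 0.7152 * g + 0.0722 * b
    return sorted(colors, key=luminance, reverse=True)
```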
  • FIG. 11B illustrates an example digital dental model 1150 of the patient in FIG. 11A for determining the gingiva color data.
  • the digital dental model 1150 includes anterior gingiva 1152 and posterior gingiva 1154.
  • Annotation points 1156A-N can be identified and applied to both the anterior gingiva 1152 and the posterior gingiva 1154 in the digital dental model 1150.
  • Vertices 1158A-N can also be defined between one or more of the annotation points 1156A-N and used to assign one or more colors to the anterior gingiva 1152 and/or the posterior gingiva 1154 in the model 1150.
  • colors of the gingiva 1152 and 1154 may be non-uniform and determined, at least in part, based on location of denture library teeth in the model 1150.
  • denture library tooth 1160 may include various annotation points or markers (such as annotation point 1156C) that are used to determine color of the overlaying anterior gingiva 1152.
  • the anterior gingiva 1152 surrounding or near the tooth 1160 may be assigned one or more colors based on distance from an annotation point, such as the annotation point 1156C.
  • a lightest shade of pink can be assigned to the anterior gingiva 1152 that is closest to and/or surrounding the tooth 1160, and darker shades of pink can be assigned to portions of the anterior gingiva 1152 that appear farther away in distance from the tooth 1160 and/or the annotation point 1156C for the tooth 1160.
  • the vertices 1158A-N can be used to define boundaries indicating the distances between certain landmarks, such as the tooth 1160 or the annotation point 1156C, and other landmarks, such as annotation point 1156A (where the annotation point 1156A is used to define a portion of the anterior gingiva 1152 that may be farthest away from one or more teeth).
  • a lightest shade of pink can be assigned to the anterior gingiva 1152 between the vertex 1158B and the tooth 1160 (e.g., having a smallest distance from the tooth).
  • a darker shade of pink can be assigned to a portion of the anterior gingiva 1152 that is defined by the vertices 1158A and 1158C, above the vertex 1158B and below the annotation point 1156A (e.g., having a threshold distance from the tooth).
  • a darkest shade of pink can be assigned to a portion of the anterior gingiva 1152 that is farthest away in distance from the annotation point 1156C and/or the tooth 1160, and defined by space to the right of the vertex 1158C and/or to the left of the vertex 1158A (e.g., having a greatest distance or some distance exceeding a threshold distance value from the tooth). Any of the colors assigned to portions of the anterior gingiva 1152 can then be blended or adjusted in some way to provide more realistic characterization of the gingiva 1152 than would be possible with a single color or a uniform gradient.
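A distance-based shade assignment like the one just described could be sketched as follows; the shade RGB values and distance thresholds are hypothetical:

```python
import math

# Hypothetical shade values and distance thresholds (model units).
LIGHT_PINK = (246, 200, 200)
MID_PINK = (226, 160, 165)
DARK_PINK = (200, 120, 130)

def gingiva_shade(point, tooth_annotation, near=2.0, far=5.0):
    """Pick a pink shade for a gingiva vertex based on its distance from
    a tooth's annotation point: lightest nearby, darkest farthest away."""
    d = math.dist(point, tooth_annotation)
    if d <= near:
        return LIGHT_PINK
    if d <= far:
        return MID_PINK
    return DARK_PINK
```

Blending between the three bands, rather than hard thresholds, would give the smoother transitions the text describes.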
  • FIG. 11B also depicts a denture model 1162 having teeth 1170 and gingiva 1172.
  • color is assigned to the gingiva 1172 using horizontal slicing techniques.
  • a computer system as described herein can determine 3 horizontal slices: a first slice 1164 (corresponding to a cervical layer), a second slice 1166 (corresponding to a middle layer), and a third slice 1168 (corresponding to a base layer).
  • one or more of the slices 1164-1168 can at least partially overlap such that color assigned to each layer may blend together to provide a realistic representation of the gingiva 1172.
  • Each of the slices 1164-1168 can be assigned a different value, the value being at least one of a frequency, wavelength, numerical value, or other shade value, using the disclosed techniques. Averaged values across the slices 1164-1168 can create a unique signature for the gingiva 1172. In some implementations, this unique signature can then be assigned to a commonly perceived color, shade, dental value, or other dental pattern.
  • In some implementations, instead of or in addition to performing horizontal slicing techniques as shown with the digital model 1162, the computer system can perform a similar vertical slicing technique.
  • Vertical slicing can be beneficial to determine shade color and apply such color(s) more effectively when a data size is small (e.g., applying the color(s) to just 1 or 2 teeth or a small portion of the gingiva rather than more teeth or a larger section of the gingiva).
  • Vertical slices of color values may be taken at many intervals across a particular portion of the model 1162 (e.g., across a single tooth or across a particular portion of the gingiva). Then the color values identified at these intervals can be averaged and/or summated to determine a color value that can be consistently applied across the teeth or gingiva (and thus applied to multiple of the vertical slices).
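Matching an averaged slice signature to a stock pattern, as described above, could be sketched as a nearest-signature lookup; the library entries and per-slice values are hypothetical:

```python
# Hypothetical library of (cervical, middle, base) slice-value signatures.
SIGNATURE_LIBRARY = {
    "gum-light": (0.90, 0.80, 0.70),
    "gum-medium": (0.75, 0.62, 0.50),
}

def match_signature(slices, library=SIGNATURE_LIBRARY):
    """Match a per-slice signature to the closest library pattern by
    summed absolute per-slice difference."""
    def distance(sig):
        return sum(abs(a - b) for a, b in zip(slices, sig))
    return min(library, key=lambda name: distance(library[name]))
```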
  • FIGs. 12A-B show a flowchart of a process 1200 for determining color data for teeth in a digital dental model of a patient.
  • the process 1200 can be performed to map color data to each individual tooth of the patient.
  • the color data can then be stored in association with the particular tooth in a library as described throughout this disclosure. Determining the color data per tooth can beneficially make each tooth appear more realistic and lifelike, especially when this data is used to design and manufacture dentures.
  • the process 1200 may be performed to determine and map color data to more than one tooth (e.g., each canine can be mapped with the same color data, a threshold quantity of teeth that are next to each other or within some threshold distance from each other can be mapped with the same color data).
  • the process 1200 can be performed by the computer system 152 or any other components described in reference to FIGs. 1A-B, including but not limited to the denture design system 116 and/or the rapid fabrication machine 119.
  • the process 1200 can also be performed by one or more other computing systems, devices, computers, networks, cloud-based systems, and/or cloud-based services. For illustrative purposes, the process 1200 is described from the perspective of a computer system.
  • the computer system receives a digital dental model of a patient’s mouth in block 1202. Refer to block 1102 in the process 1100 of FIG. 11A for further discussion.
  • the computer system identifies annotations and/or landmarks for teeth in the model.
  • the annotations and/or landmarks may identify or otherwise represent, for each tooth (or more than one tooth), a cervical third, a middle third, and an incisal third.
  • one or more of these zones may overlap such that color values assigned to each zone can be blended at the points of overlap to make the teeth appear realistic.
  • the zones may not overlap, but color values assigned to each zone may still be blended between the zones to make the teeth appear realistic.
  • the annotations and/or landmarks can identify other portions or zones of the tooth or more than one tooth, such as a set of teeth along a dental arch. As shown in FIG. 12C, for example, the annotations and/or landmarks can identify horizontal slices for the teeth. As shown in FIG. 12D, as another example, the annotations and/or landmarks can be used to identify vertical slices for the teeth. Refer to block 1104 in the process 1100 in FIG. 11A for further discussion about identifying annotations and/or landmarks in the model.
  • In block 1206, the computer system selects at least one tooth.
  • the computer system can map a first color value representing a band of chroma to an annotated cervical third of the tooth.
  • the computer system can map the first color value using annotations indicating halo sockets as reference points on the tooth (block 1210).
  • the computer system can additionally or alternatively adjust a dominance level of the first color value to exceed a threshold level of dominance (block 1212).
  • the cervical third of the tooth typically may contain the most chroma compared to other parts of the tooth.
  • most of the area of the tooth that contacts the halo sockets will have the band of chroma mapped thereto.
  • the chroma can be represented in the first color value that may include, but is not limited to, A (reddish brown shades), B (reddish yellow shades), C (gray shades), D (reddish gray shades), and bleach.
  • the computer system can also map a second color value and optionally a texture value to an annotated middle third of the tooth in block 1214. For example, the computer system can adjust a level of opacity of the mapped second color value to exceed a threshold level of opacity (block 1216). The computer system can also identify the texture value based on data indicating a type of the tooth having the second color value and the texture value mapped thereto (block 1218). Each type of tooth may have a different type of texture. Therefore, the computer system can utilize mapping data or rules to identify in a library of texture data, which texture values correspond to which teeth. The computer system can then select the texture value(s) that is associated with the type of the selected tooth to map onto the annotated middle third of the tooth.
  • the middle third of the tooth typically has the most value or opacity.
  • the second color value may include, but is not limited to, shades of white and/or yellow. Therefore, the level of opacity of the second color value can be increased to a value within a range of 90- 100% opacity so that the second color value is most apparent in comparison to other color values that are mapped to the tooth in one or more different zones.
  • the computer system can map a third color value to an annotated incisal third of the tooth.
  • the computer system may, for example, adjust a level of translucency of the third color value to exceed a threshold level of translucency (block 1222).
  • the computer system may map the third color value as a color band across the annotated incisal third of the tooth based on a type of the tooth (block 1224).
  • the computer system may map the third color value on the annotated incisal third of the tooth based on landmarks indicating one or more interproximal (IP) contacts (block 1226).
  • the incisal third can be a thinnest and most translucent layer or region of the tooth.
  • the third color value mapped to the tooth can be adjusted to have a highest level of transparency (or at least a threshold level of transparency, such as at least 80% transparent) in comparison to the first and second color values.
  • the third color value may include, but is not limited to, shades of blue, gray, and/or violet. Sometimes, the third color value can appear like highlights on the tooth.
  • a level of transparency/translucency for the third color value can vary based on the type of the tooth. For example, central, lateral, canine, and molar teeth can each have a different placement of the color band and/or a different threshold level of translucency.
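A per-tooth-type translucency lookup consistent with the description above might be sketched as follows; the numeric levels are placeholders, not values from the disclosure:

```python
# Hypothetical incisal translucency per tooth type; actual values would
# come from a color/texture library, not this illustrative table.
INCISAL_TRANSLUCENCY = {
    "central": 0.85,
    "lateral": 0.82,
    "canine": 0.80,
    "molar": 0.75,
}

def incisal_translucency(tooth_type):
    """Look up the incisal-third translucency for a tooth type, falling
    back to the least translucent known value for unlisted types."""
    return INCISAL_TRANSLUCENCY.get(tooth_type, min(INCISAL_TRANSLUCENCY.values()))
```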
  • Blocks 1208-1226 can be performed in any order. For example, one or more of the blocks 1208-1226 can be performed in series. One or more of the blocks 1208-1226 can be performed in parallel.
  • Although the process 1200 is described with respect to mapping three color values to the tooth, the process 1200 can also be performed to map any other quantity of color values (and texture values) to the tooth.
  • the quantity of color values that are mapped to a tooth can vary based on how many zones are identified for the particular tooth, for a set of teeth, and/or for the particular patient. Sometimes, multiple color values can be mapped to one zone, where each color can be mapped to a different layer in the zone.
  • the process 1200 can be performed for each zone of each tooth (or other part of the patient’s mouth) for which more than one color value is being applied.
  • the computer system can optionally adjust at least a gradient between the first, second, and third color values (block 1228).
  • one or more of the color values can be adjusted to create a gradient or blend effect between the color values (e.g., across the zones identified or defined by the annotations and/or landmarks). This can make the resulting teeth (e.g., dentures) appear lifelike and realistic.
  • the computer system can simulate ray-tracing on the tooth to identify and apply at least one color adjustment to the mapped first, second, and/or third color values (block 1230). Refer to blocks 1020-1022 in the process 1000 in FIGs. 10A-B for further discussion about simulating ray-tracing.
  • the computer system can determine whether there are more teeth to map color values to (block 1232). If there are more teeth, the computer system can return to block 1206 and repeat blocks 1206-1230 for each remaining tooth (or set of teeth). If there are no more teeth to map color values to, the computer system can return the digital dental model with data indicating the mapped color values (block 1234). This returned data can be used, as described herein, to generate a denture design for the patient and/or manufacture (e.g., print, fabricate) dentures for the patient. Refer to FIG. 1A and the processes 900 in FIG. 9 and 1000 in FIGs. 10A-B for further discussion about how the returned data can be used and/or outputted.
  • the computer system can optionally generate and return a patient-specific color library based on the mapped color values (block 1236).
  • block 1236 can be performed before, during, or after one or more other blocks in the process 1200.
  • the block 1236 can be performed after block 1202, in response to receiving a digital dental model of a patient’s mouth.
  • the block 1236 can be performed in response to receiving an oral scan of the patient’s mouth, which can occur even before receiving the digital dental model of the patient’s mouth in block 1202.
  • the computer system can perform image or scan data processing techniques described herein to extract color data from the oral scan. Then, the computer system can add the extracted color data to a patient-specific color library.
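As a minimal sketch of such a patient-specific library (the class and method names are our own illustration, not part of the disclosure), extracted per-tooth colors can be archived per scan and averaged on retrieval:

```python
from collections import defaultdict
from statistics import mean

class PatientColorLibrary:
    """Archives per-tooth color observations extracted from successive oral
    scans of one patient, so later designs can reuse exact matches."""

    def __init__(self):
        self._observations = defaultdict(list)  # tooth id -> [(r, g, b), ...]

    def add_scan_colors(self, tooth_colors):
        """tooth_colors: mapping of tooth id to an extracted (r, g, b) value."""
        for tooth_id, rgb in tooth_colors.items():
            self._observations[tooth_id].append(rgb)

    def color_for(self, tooth_id):
        """Average of all archived observations for the given tooth."""
        return tuple(mean(channel) for channel in zip(*self._observations[tooth_id]))
```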
  • the patient-specific color library can be populated with colors and/or textures that are identified for the specific patient over time (e.g., whenever oral scan data or image data of the patient are received).
  • the patient-specific color library can therefore provide an archive of exact color and/or texture matches for each of the particular patient’s teeth and/or gingiva.
  • the patient-specific color library can be stored in a data repository or other type of data store described herein.
  • the patient-specific color library can be accessed, retrieved, and/or referenced with any of the disclosed processes whenever designing, adjusting, and/or generating digital dental models for the particular patient.
  • maintaining and updating the patient-specific color library over time can allow for efficient, quick, and easy completion of tasks such as designing digital dental models for the patient and generating print instructions.
  • the patient-specific library can also advantageously provide for consistency in color and/or texture applied to teeth models that are generated for the particular patient over time.
  • the patient-specific library can be used by the computer system to generate one or more genus color and/or texture libraries, which can then be used to design and generate digital dental models for one or more other patients.
  • FIG. 12C illustrates an example digital dental model 1250 of the patient in FIGs. 12A-B for determining the teeth color data with horizontal slicing techniques.
  • zones have been identified, using the techniques described in the process 1200 of FIGs. 12A-B, for chroma 1254, a middle portion 1256 of each tooth 1252A-N, and an incisal portion 1258 of each tooth 1252A-N.
  • Each tooth can be considered a triangular mesh having color values assigned per zone in the triangular mesh.
  • the color value assignments per zone can be stored in association with the tooth in a library, as described herein, then used to design and manufacture dentures for the patient.
  • the chroma 1254 color value can be applied as a band along a top edge/portion of each tooth 1252A-N. Between a lower boundary of the chroma 1254 and an upper boundary of the incisal portion 1258, the middle portion 1256 of each tooth 1252A-N can be assigned another color value, such as yellow and/or white shades.
  • the middle portion 1256 of each tooth 1252A- N can also have some texture applied thereto in order to make the teeth 1252A-N appear lifelike.
  • the color and/or texture can be applied differently to each tooth 1252A-N in the respective middle portion 1256 based on the type of tooth, as shown by the vertical lines in the middle portions 1256 of the teeth 1252A-N in FIG. 12C.
  • the incisal portion 1258 color value can be applied from the upper boundary of the incisal portion 1258 separating the incisal portion 1258 from the middle portion 1256 to a bottom or tip of each tooth 1252A-N.
  • the color value can be applied to the incisal portion 1258 in a pattern or design that varies based on the type of tooth, as shown in FIG. 12C.
  • the upper boundary can have a more prominent wave design (e.g., fewer but larger valleys defining the upper boundary) for canines than for central teeth.
  • a gradient of the color value assigned to the incisal portion 1258 may also be adjusted according to one or more rules described herein that provide for more realistic-looking teeth design.
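The type-dependent wave design of the upper incisal boundary can be sketched as a sampled cosine curve. The valley counts and depths below are illustrative assumptions only, chosen so that canines get fewer but larger valleys than centrals, as described above:

```python
import math

# Assumed (valley count, valley depth in mm) per tooth type -- illustrative.
WAVE_BY_TOOTH_TYPE = {"central": (3, 0.3), "canine": (1, 0.8)}

def incisal_boundary(tooth_type: str, samples: int = 50):
    """Sample the boundary's downward offset (mm) across the tooth width."""
    n_valleys, depth = WAVE_BY_TOOTH_TYPE[tooth_type]
    return [
        depth * 0.5 * (1 - math.cos(2 * math.pi * n_valleys * i / (samples - 1)))
        for i in range(samples)
    ]
```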
  • horizontal slicing may be performed along a dental arch.
  • how the oral scan data 1260 is sliced can vary based on what type of coloring, texture, and/or appearance is desired for the particular patient.
  • three horizontal slices 1262, 1264, and 1266 can be made using the techniques described herein (the slice 1262 can correspond to an incisal third, the slice 1264 can correspond to a middle third, and the slice 1266 can correspond to a cervical third of a particular tooth and/or a set of teeth).
  • Each of the slices 1262, 1264, and 1266 can be assigned colors having different values, including but not limited to a frequency, wavelength, or other numerical values.
  • the color values assigned to each of the slices 1262, 1264, and 1266 can make up a unique signature or fingerprint for the particular tooth (or set of teeth), which can then be associated with most commonly assigned or perceived dental colors. The associated dental colors can then be used for designing and manufacturing dentures that match the patient’s actual teeth and appear lifelike.
  • Although the slices 1262, 1264, and 1266 are shown as applying to multiple teeth, the same techniques described herein can be applied to smaller areas in the oral scan data 1260, such as one tooth, 2-4 teeth, 6 anterior teeth, or any other combination of teeth desired by a relevant user, such as a dentist.
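The slice-signature idea above can be sketched as follows: average the pixels in each horizontal third to form a fingerprint, then match it to the nearest reference shade. The shade names and RGB values are made-up placeholders, not actual dental shade data:

```python
# Illustrative reference shades (NOT real shade-guide values). Each signature
# is the mean RGB of the (cervical, middle, incisal) slices.
REFERENCE_SHADES = {
    "A1": ((235, 225, 205), (230, 218, 195), (215, 212, 200)),
    "A3": ((220, 200, 170), (212, 192, 160), (200, 190, 175)),
}

def slice_signature(cervical_px, middle_px, incisal_px):
    """Mean RGB per horizontal slice forms the tooth's color fingerprint."""
    def mean_rgb(pixels):
        n = len(pixels)
        return tuple(sum(p[i] for p in pixels) / n for i in range(3))
    return (mean_rgb(cervical_px), mean_rgb(middle_px), mean_rgb(incisal_px))

def closest_shade(signature):
    """Match the fingerprint to the nearest reference dental shade."""
    def dist(sig_a, sig_b):
        return sum((a - b) ** 2
                   for sa, sb in zip(sig_a, sig_b)
                   for a, b in zip(sa, sb))
    return min(REFERENCE_SHADES, key=lambda name: dist(signature, REFERENCE_SHADES[name]))
```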
  • FIG. 12D illustrates another example digital dental model 1270 of the patient in FIGs. 12A-B for determining the teeth color data with vertical slicing techniques.
  • the techniques described in reference to FIGs. 12A-B are used to determine vertically-sliced zones for the patient’s teeth.
  • Vertical slicing can beneficially be used to color a single tooth to appear more lifelike.
  • Vertical slicing can be used to identify a lightest zone 1272, a middle zone 1274, and a darkest zone 1276 for the particular tooth. Averaging color values from each of the zones 1272, 1274, and 1276 can provide for determining an appropriate and accurate transition of color from a top portion to a bottom portion of the particular tooth.
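One way to turn the three zone averages into a smooth top-to-bottom transition is simple piecewise interpolation. This is a sketch under our own naming; the zone colors in the test are invented:

```python
def vertical_color_profile(light, mid, dark, rows=9):
    """Interpolate a per-row color column through the three zone averages,
    approximating the tooth's transition from lightest to darkest zone."""
    def lerp(c0, c1, t):
        return tuple(round(a + (b - a) * t) for a, b in zip(c0, c1))
    half = rows // 2
    top = [lerp(light, mid, i / half) for i in range(half)]
    bottom = [lerp(mid, dark, i / half) for i in range(half + 1)]
    return top + bottom
```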
  • FIG. 13A is an example digital dental model 1300 having annotations 1302A-N for defining teeth 1304A-N and corresponding gingiva 1306 color data.
  • the digital dental model 1300 can be stored in a library and used for printing/manufacturing corresponding dentures.
  • the digital dental model 1300 of FIG. 13A can be generic and used for denture design and manufacturing for many patients. Sometimes, the model 1300 can be specific to a particular patient’s mouth.
  • the annotations 1302A-N can be defined in the library as reference points for identifying landmarks in the model 1300 that are then used to assign particular color values to the teeth 1304A-N and the gingiva 1306.
  • the annotations 1302A-N can be used to determine layering of one or more colors and gradient effects to apply to one or more of the layered colors applied to the teeth 1304A-N (and/or the gingiva 1306).
  • Annotations 1308A-N over the gingiva 1306 can be used to identify sockets or boundaries 1310 of the gingiva 1306 and the teeth 1304A-N.
  • the annotations 1308A-N can also be used to identify and assign gingiva halo color data and boundaries 1312 in order to make the gingiva 1306 appear realistic. Refer to FIGs. 11-12 for further discussion about using the annotations 1302A-N and 1308A-N to identify and assign color values to both the teeth 1304A-N and the gingiva 1306.
  • FIG. 13B is an example digital dental model 1320 having cutback layers 1328 of teeth color data.
  • the cutback layers 1328 can be assigned different colors and/or color values (e.g., opacity, gradient, brightness, translucency) to allow for more depth of color and more realistic-looking teeth.
  • the digital dental model 1320 can include the cutback layer 1328 that has an opaqueness level that exceeds some threshold level and a matching layer 1322 that is more transparent than the cutback layer 1328.
  • the matching layer 1322 can overlay at least a portion of the cutback layer 1328.
  • the matching layer 1322 can be more apparent (e.g., more opaque) near a top portion of a tooth, but as shown in the side view 1326, the matching layer 1322 can be less apparent (e.g., more transparent) as it covers an entire front portion of the tooth and extends down the tooth towards a tip or bottom portion of the tooth.
  • the cutback layer 1328 can be assigned color shades that include, but are not limited to, whites, yellows, greys, oranges, and/or browns.
  • the transparent matching layer 1322 can be assigned color shades that include, but are not limited to, more transparent blues and/or greys. As mentioned above, this combination of layering colors can cause resulting teeth to appear more realistic and lifelike.
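The visual effect of a translucent matching layer over an opaque cutback layer can be approximated with standard "over" alpha compositing. This is a sketch; the RGB values are illustrative, not shades from the disclosure:

```python
def composite(over_rgba, under_rgb):
    """Blend a translucent top layer onto an opaque bottom layer using the
    "over" operator: result = a * top + (1 - a) * bottom, per channel."""
    r, g, b, a = over_rgba
    return tuple(round(a * top + (1 - a) * bottom)
                 for top, bottom in zip((r, g, b), under_rgb))

# A bluish-grey matching layer at 25% opacity over a warm opaque cutback layer:
perceived = composite((180, 195, 210, 0.25), (235, 220, 190))  # -> (221, 214, 195)
```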
  • FIG. 13C is an example digital dental model 1330 having color and texture data for teeth 1332A-N and gingiva 1334.
  • various layers and/or zones can be identified per tooth 1332A-N and gingiva 1334.
  • Colors and other color values (e.g., gradient, transparency, brightness) can be assigned to each of the identified layers and/or zones.
  • colors A2 and A3 can be assigned to two layers on the tooth 1332A.
  • Refer to FIGs. 12A-D for further discussion about assigning color to the teeth 1332A-N.
  • a lightest pink color can be assigned to a zone 1336A of the gingiva 1334, as defined by annotation points and landmarks described in reference to FIGs. 11A-B.
  • a middle-level pink color can be assigned to a zone 1336B of the gingiva 1334, and a darkest pink and one or more textures can be assigned to a zone 1336C of the gingiva 1334.
  • Any of the color data assigned and described herein can be stored in association with each layer of the teeth 1332A-N and/or each layer of the gingiva 1334 in a library described herein. Although each layer can be assigned color and/or texture data, each layer may also be further divided into additional or smaller color zones so as to increase an amount of detail and/or characteristics that can be applied to the model 1330 to resemble realistic teeth and gingiva.
  • the computing device 1400 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers.
  • the mobile computing device is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart-phones, and other similar computing devices.
  • the components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document.
  • the computing device 1400 includes a processor 1402, a memory 1404, a storage device 1406, a high-speed interface 1408 connecting to the memory 1404 and multiple high-speed expansion ports 1410, and a low-speed interface 1412 connecting to a low-speed expansion port 1414 and the storage device 1406.
  • Each of the processor 1402, the memory 1404, the storage device 1406, the high-speed interface 1408, the high-speed expansion ports 1410, and the low-speed interface 1412 are interconnected using various busses, and can be mounted on a common motherboard or in other manners as appropriate.
  • the processor 1402 can process instructions for execution within the computing device 1400, including instructions stored in the memory 1404 or on the storage device 1406 to display graphical information for a GUI on an external input/output device, such as a display 1416 coupled to the high-speed interface 1408.
  • multiple processors and/or multiple buses can be used, as appropriate, along with multiple memories and types of memory.
  • multiple computing devices can be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi -processor system).
  • the memory 1404 stores information within the computing device 1400.
  • the memory 1404 is a volatile memory unit or units.
  • the memory 1404 is a non-volatile memory unit or units.
  • the memory 1404 can also be another form of computer-readable medium, such as a magnetic or optical disk.
  • the storage device 1406 is capable of providing mass storage for the computing device 1400.
  • the storage device 1406 can be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations.
  • a computer program product can be tangibly embodied in an information carrier.
  • the computer program product can also contain instructions that, when executed, perform one or more methods, such as those described above.
  • the computer program product can also be tangibly embodied in a computer- or machine-readable medium, such as the memory 1404, the storage device 1406, or memory on the processor 1402.
  • the high-speed interface 1408 manages bandwidth-intensive operations for the computing device 1400, while the low-speed interface 1412 manages lower bandwidth-intensive operations.
  • the high-speed interface 1408 is coupled to the memory 1404, the display 1416 (e.g., through a graphics processor or accelerator), and to the high-speed expansion ports 1410, which can accept various expansion cards (not shown).
  • the low-speed interface 1412 is coupled to the storage device 1406 and the low-speed expansion port 1414.
  • the low-speed expansion port 1414, which can include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), can be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.
  • the computing device 1400 can be implemented in a number of different forms, as shown in the figure. For example, it can be implemented as a standard server 1420, or multiple times in a group of such servers. In addition, it can be implemented in a personal computer such as a laptop computer 1422. It can also be implemented as part of a rack server system 1424. Alternatively, components from the computing device 1400 can be combined with other components in a mobile device (not shown), such as a mobile computing device 1450. Each of such devices can contain one or more of the computing device 1400 and the mobile computing device 1450, and an entire system can be made up of multiple computing devices communicating with each other.
  • the mobile computing device 1450 includes a processor 1452, a memory 1464, an input/output device such as a display 1454, a communication interface 1466, and a transceiver 1468, among other components.
  • the mobile computing device 1450 can also be provided with a storage device, such as a micro-drive or other device, to provide additional storage.
  • Each of the processor 1452, the memory 1464, the display 1454, the communication interface 1466, and the transceiver 1468, are interconnected using various buses, and several of the components can be mounted on a common motherboard or in other manners as appropriate.
  • the processor 1452 can execute instructions within the mobile computing device 1450, including instructions stored in the memory 1464.
  • the processor 1452 can be implemented as a chipset of chips that include separate and multiple analog and digital processors.
  • the processor 1452 can provide, for example, for coordination of the other components of the mobile computing device 1450, such as control of user interfaces, applications run by the mobile computing device 1450, and wireless communication by the mobile computing device 1450.
  • the processor 1452 can communicate with a user through a control interface 1458 and a display interface 1456 coupled to the display 1454.
  • the display 1454 can be, for example, a TFT (Thin-Film-Transistor Liquid Crystal Display) display or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology.
  • the display interface 1456 can comprise appropriate circuitry for driving the display 1454 to present graphical and other information to a user.
  • the control interface 1458 can receive commands from a user and convert them for submission to the processor 1452.
  • an external interface 1462 can provide communication with the processor 1452, so as to enable near area communication of the mobile computing device 1450 with other devices.
  • the external interface 1462 can provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces can also be used.
  • the memory 1464 stores information within the mobile computing device 1450.
  • the memory 1464 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units.
  • An expansion memory 1474 can also be provided and connected to the mobile computing device 1450 through an expansion interface 1472, which can include, for example, a SIMM (Single In Line Memory Module) card interface.
  • the expansion memory 1474 can provide extra storage space for the mobile computing device 1450, or can also store applications or other information for the mobile computing device 1450.
  • the expansion memory 1474 can include instructions to carry out or supplement the processes described above, and can include secure information also.
  • the expansion memory 1474 can be provided as a security module for the mobile computing device 1450, and can be programmed with instructions that permit secure use of the mobile computing device 1450.
  • secure applications can be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.
  • the memory can include, for example, flash memory and/or NVRAM memory (nonvolatile random access memory), as discussed below.
  • a computer program product is tangibly embodied in an information carrier.
  • the computer program product contains instructions that, when executed, perform one or more methods, such as those described above.
  • the computer program product can be a computer- or machine-readable medium, such as the memory 1464, the expansion memory 1474, or memory on the processor 1452.
  • the computer program product can be received in a propagated signal, for example, over the transceiver 1468 or the external interface 1462.
  • the mobile computing device 1450 can communicate wirelessly through the communication interface 1466, which can include digital signal processing circuitry where necessary.
  • the communication interface 1466 can provide for communications under various modes or protocols, such as GSM voice calls (Global System for Mobile communications), SMS (Short Message Service), EMS (Enhanced Messaging Service), or MMS messaging (Multimedia Messaging Service), CDMA (code division multiple access), TDMA (time division multiple access), PDC (Personal Digital Cellular), WCDMA (Wideband Code Division Multiple Access), CDMA2000, or GPRS (General Packet Radio Service), among others.
  • a GPS (Global Positioning System) receiver module 1470 can provide additional navigation- and location-related wireless data to the mobile computing device 1450, which can be used as appropriate by applications running on the mobile computing device 1450.
  • the mobile computing device 1450 can also communicate audibly using an audio codec 1460, which can receive spoken information from a user and convert it to usable digital information.
  • the audio codec 1460 can likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of the mobile computing device 1450.
  • Such sound can include sound from voice telephone calls, can include recorded sound (e.g., voice messages, music files, etc.) and can also include sound generated by applications operating on the mobile computing device 1450.
  • the mobile computing device 1450 can be implemented in a number of different forms, as shown in the figure. For example, it can be implemented as a cellular telephone 1480. It can also be implemented as part of a smart-phone 1482, personal digital assistant, or other similar mobile device.
  • Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof.
  • These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which can be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
  • The terms machine-readable medium and computer-readable medium refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal.
  • The term machine-readable signal refers to any signal used to provide machine instructions and/or data to a programmable processor.
  • the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer.
  • Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
  • the systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components.
  • the components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), and the Internet.
  • the computing system can include clients and servers.
  • a client and server are generally remote from each other and typically interact through a communication network.
  • the relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
  • FIG. 15A illustrates an example system 1500 for storing digital tooth libraries that are augmented with data such as color, layers, texture, and/or transparency data.
  • the system 1500 can include an augmented digital tooth libraries data store 1502.
  • the data store 1502 can be configured to store a plurality of augmented tooth libraries 1504A-N.
  • Each library can include information, data, and/or properties/physical characteristics for individual teeth and/or groups/sets of teeth (e.g., blocks of teeth, an arch form of teeth, anterior teeth, posterior teeth, molars).
  • the data store 1502 can be configured to store color and/or texture libraries for one or more different teeth and/or gingivae.
  • the color and/or texture libraries can be archived in the data store 1502, then later retrieved for use in designing teeth for one or more patients. Sometimes, the color and/or texture libraries can be stored in association with particular teeth and/or gingivae. Sometimes, the color and/or texture libraries can be stored in association with a particular patient(s). The color and/or texture libraries can be used to design teeth for any patient. In some implementations, the color and/or texture libraries may only be used to design teeth for patients having particular conditions and/or design needs. In yet some implementations, one or more of the libraries 1504A-N can be patient-specific color and/or texture libraries. The data store 1502 can maintain and update one or more libraries for each patient, as described in reference to block 1236 in the process 1200 of FIGs. 12A-B.
  • Each of the libraries 1504A-N can include a plurality of sub-libraries.
  • the sub-libraries can have a same set of teeth but with different color properties (e.g., different general colors, which can be applicable to a variety of different use cases) or other physical properties (e.g., texture, transparency). Therefore, when designing a dental appliance, any one of the sub-libraries can be selected. Using the sub-libraries can advantageously reduce an amount of time needed to design the dental appliance since a relevant user would not have to determine and apply a specific set of colors to the selected tooth library for the dental appliance design.
  • the dental appliance can be designed with the specific color(s) associated with the selected sub-library.
  • the color(s) of the selected sub-library can also be further customized.
  • a sub-library can include a plurality of layers, zones, or meshes. Each of the layers, zones, or meshes can have different color, texture, and/or transparency properties. Any of these properties can be adjusted for a layer, zone, or mesh, then automatically applied to any other portion of the selected tooth library having the same layer, zone, or mesh.
  • each of the meshes can be further divided into zones. Sometimes, instead of zones, each mesh can have RGBA values/properties.
  • the RGB can correspond to color data for the mesh and the A can correspond to a translucency for the mesh (e.g., a percentage of translucency, a percentage of opaqueness).
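A per-mesh RGBA record of this kind could be represented as follows. The field names are our own illustration; translucency is stored as a percentage and converted to an opacity value for the alpha channel:

```python
from dataclasses import dataclass

@dataclass
class MeshRecord:
    """One mesh (layer/zone) entry in an augmented tooth library. RGB carries
    the mesh's color data; the translucency percentage determines its alpha
    (0% translucency = fully opaque)."""
    name: str
    rgb: tuple
    translucency_pct: float

    @property
    def rgba(self):
        """RGBA with alpha expressed as opacity in [0, 1]."""
        return (*self.rgb, 1.0 - self.translucency_pct / 100.0)
```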
  • the library 1504A contains sub-libraries for different colors A1, A2, and D2. Any of the sub-libraries can be selected and applied to a dental appliance design so that the properties associated with the selected sub-library are automatically applied to the corresponding tooth/teeth in the dental appliance design.
  • Each of the libraries 1504A-N can contain additional or fewer sub-libraries.
  • Each of the sub-libraries can include a plurality of meshes associated with the tooth. Sometimes, a mesh can have multiple nested meshes. The meshes can also be considered layers and/or zones.
  • mesh N corresponds to a mesh for tooth gums. Sometimes, the gums can have multiple meshes or sub-meshes, like a tooth.
  • the Color A1 sub-library for the tooth library 1504A can include 4 meshes (e.g., layers). In some implementations, one or more of the same or different teeth can have additional or fewer meshes.
  • each of the sub-libraries in the library 1504A can include a plurality of meshes for the tooth in different colors.
  • the Color A1 sub-library, for example, can include a plurality of meshes for Tooth A, where each mesh includes values, identifiers, labels, or other indicators for mesh properties.
  • the mesh properties can include color, transparency, and/or texture.
  • Each of the sub-libraries can also include at least one texture map.
  • the texture map can be pinned to or otherwise associated with different parts of the tooth.
  • the texture map can be applied to one or more different layers/meshes of the tooth.
  • the texture map can include stippling or other types of texture features to make the tooth (or gingiva) appear more realistic.
  • Each of the sub-libraries for the library 1504A can be stored in the data store 1502 with color values or identifiers for each mesh, translucency or transparency values for each mesh, and/or texture values for each mesh.
  • different parts of each mesh can include different property values, which may also be customizable depending on the patient and/or dental appliance being designed.
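Putting the pieces together, one plausible storage shape for a sub-library entry with per-mesh color, translucency, and texture values is a nested mapping. Every name, value, and path below is a made-up illustration, not contents of the actual libraries:

```python
# Hypothetical entry for one augmented tooth library, keyed by sub-library
# (shade) and then by mesh; all values are illustrative assumptions.
library_1504A = {
    "tooth": "Tooth A",
    "sub_libraries": {
        "Color A1": {
            "meshes": {
                "mesh 1": {"color": (225, 210, 180), "translucency": 0.10, "texture": "stipple_fine"},
                "mesh 2": {"color": (232, 222, 198), "translucency": 0.35, "texture": None},
                "mesh 3": {"color": (238, 232, 215), "translucency": 0.60, "texture": None},
                "mesh 4": {"color": (245, 245, 248), "translucency": 0.95, "texture": None},
            },
            "texture_map": "texture_maps/a1_default.png",  # hypothetical path
        },
    },
}

def mesh_properties(library, shade, mesh):
    """Look up the stored color/translucency/texture values for one mesh."""
    return library["sub_libraries"][shade]["meshes"][mesh]
```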
  • FIG. 15A also illustrates a front view 1506 and a side view 1514 of a tooth 1512 and gingiva 1510, which correspond to the library 1504A.
  • Each of the meshes 1, 2, 3, 4, and N 1508A, B, C, D, and N, respectively, can include different shapes and different colors, transparency, and/or texture properties.
  • the mesh 1 1508A can be an innermost layer that encompasses a portion of the tooth 1512 and a portion of the gingiva 1510.
  • the mesh 1 1508A can include a root portion of the tooth 1512 that can be at least partly visible through the gingiva 1510.
  • Coloring and adding texture to the root portion with the mesh 1 1508A can advantageously provide for making the tooth 1512 appear realistic.
  • the gums or gingiva 1510 surrounding the tooth’s root can appear whiter or lighter in color (e.g., lighter pink) than other portions of the gums because the gums can have some degree of translucency.
  • the portion of the tooth 1512 that includes the root may be partially visible underneath the gums.
  • the root of the tooth 1512 can be fabricated (e.g., 3D printed) underneath the gingiva 1510 with one or more color values and the gingiva 1510 can be fabricated with some degree or percentage of translucency so that the root portion can be partially visible through the gingiva 1510.
  • the gingiva 1510 can have different levels of thickness in each layer/mesh around the root portion of the tooth 1512 as well as different levels of transparency in each layer/mesh around the root portion of the tooth 1512.
  • the different levels of thickness and/or transparency per layer/mesh can be stored in the augmented tooth libraries 1504A-N described herein.
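The root-through-gingiva effect described above is, in essence, alpha compositing: the perceived surface color is the gingiva color blended over the root color in proportion to the gingiva's opacity. A minimal sketch; all color and opacity values below are illustrative, not taken from the source:

```python
# Model the "root partially visible through translucent gingiva" effect
# as simple alpha compositing (the "over" operator).

def over(fg_rgb, fg_opacity, bg_rgb):
    """Blend a partially transparent foreground color over a background color."""
    return tuple(round(f * fg_opacity + b * (1 - fg_opacity))
                 for f, b in zip(fg_rgb, bg_rgb))

root = (245, 240, 225)      # whitish root color beneath the gingiva
gingiva = (210, 120, 130)   # pink gingiva color

# 70% opaque gingiva lets some of the white root show through, which is
# why the gingiva over the root appears lighter pink than elsewhere.
seen = over(gingiva, 0.7, root)
```

With several gingiva layers of different thickness and transparency, this blend would be applied per layer, which is why those per-layer values are worth storing in the libraries.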
  • a second, next-larger layer, mesh 2 1508B, can be more transparent than the mesh 1 1508A and can include a larger portion of the visible tooth 1512 than the mesh 1 1508A.
  • a third, next-larger layer, mesh 3 1508C, can be more transparent than both the meshes 1 and 2, 1508A and 1508B, respectively, and can include a larger portion of the visible tooth 1512.
  • the mesh 4 1508D can be an outermost layer of the tooth 1512, and can correspond to a fully transparent (and/or mostly transparent) layer that may be designated to provide a protective coating on an outer surface of the tooth.
  • the mesh N 1508N can be a mesh for the gingiva 1510, which can overlay or otherwise be an outermost layer for the gingiva 1510 (e.g., overlaying the mesh 1 1508A).
  • the meshes 1, 2, 3, 4, and N 1508A, B, C, D, and N, respectively, can be shaped differently based on the type of tooth, a shape of the tooth itself, a desired dental appearance for the particular patient, and/or a dental appliance to be designed with the augmented tooth libraries described herein.
  • FIG. 15B illustrates an example system 1520 for generating augmented digital tooth libraries from new and/or preexisting digital tooth libraries.
  • each tooth library can be generated for a specific patient.
  • tooth libraries can be generated for a genus or population of patients having similar conditions, teeth, gingiva, and/or other characteristics (e.g., demographics, age, geographic region).
  • An augmentation computer system 1522 can communicate (e.g., wired and/or wirelessly via networks) with the augmented tooth libraries data store 1502, a non-augmented digital tooth libraries data store 1524, a layer models and rulesets data store 1526, and a gingiva models and rulesets data store 1528.
  • the data stores 1502, 1524, 1526, and 1528 can be a same data store and/or network of data stores.
  • the augmentation computer system 1522 can be any type of computer system, cloud-based system, and/or network of computing devices configured to augment digital tooth libraries and/or digital tooth files with additional data and/or properties, including but not limited to color, texture, and/or transparency properties described throughout this disclosure.
  • the augmented digital tooth libraries data store 1502 can be configured to store digital tooth libraries and/or digital tooth files that have been augmented using the disclosed techniques by the augmentation computer system 1522.
  • the augmented digital tooth libraries can be retrieved by one or more computing systems during runtime for use in designing realistic dental appliances for patients. Refer to description throughout this disclosure for additional details.
  • the non-augmented digital tooth libraries data store 1524 can be configured to store a variety of non-augmented, standard digital tooth libraries that have been generated by a variety of different sources and/or users.
  • the non-augmented digital tooth libraries may not contain additional data or properties such as color, texture, and/or transparency properties.
  • a non-augmented digital tooth library can be grey-scale or may have no color, texture, or transparency.
  • the non-augmented digital tooth libraries and/or files may only contain a digital tooth.
  • the non-augmented digital tooth libraries and/or files may not contain any gingiva component.
  • one or more of the non-augmented digital tooth libraries and/or files may contain at least a portion of the gingiva component for the tooth.
  • the non-augmented digital tooth libraries and/or files can include more than one tooth, such as adjacent teeth, a bridge or arch form of teeth, a block of teeth, or another group of teeth.
  • the layer models and rulesets data store 1526 can be configured to store information that can be used by the augmentation computer system 1522 to generate layers (e.g., meshes, zones) for each tooth in the non-augmented digital tooth libraries and/or digital tooth files according to morphologies that have been generated for each type of tooth (e.g., refer to FIG. 17).
  • the data store 1526 can also be accessed by the computer system 1522 to retrieve one or more rules that can be used to apply color, texture, and/or transparency properties to each of the layers of each of the digital tooth libraries and/or digital tooth files.
  • the rules can be associated with different types of teeth, groups of teeth, and/or dental appliances that the digital libraries are intended to be used for.
  • the gingiva models and rulesets data store 1528 can be configured to store information that can be used by the augmentation computer system 1522 to generate layers (e.g., meshes, zones) for each gingiva component in the non-augmented digital tooth libraries and/or digital tooth files.
  • the data store 1528 can also be accessed by the computer system 1522 to retrieve one or more rules that can be used to apply color, texture, and/or transparency properties to each of the layers of each gingiva component in the digital tooth libraries and/or digital tooth files.
  • the rules can be associated with different types of teeth, groups of teeth, dental appliances that the digital libraries are intended to be used for, types of gingiva, locations of the gingiva, groups of gingiva, etc.
  • the data store 1528 can also include rules and/or models for filling in gaps or missing parts of gingiva that may not be part of individual digital tooth libraries.
  • each digital tooth library can include a single tooth and a gingiva component that extends like a cone from a root of the tooth to a predetermined endpoint.
  • gaps may exist between the gingiva components of the teeth once the teeth are placed adjacent to each other. This is because the digital tooth libraries may not include gingiva components that extend beyond reference points of the corresponding tooth.
  • the rules in the data store 1528 can therefore be used to fill in the gaps with gingiva having color, texture, and/or transparency properties to create a uniform, realistic grouping of teeth and gingiva in the digital dental model.
  • the augmentation computer system 1522 can receive, access, or retrieve non-augmented digital tooth libraries from the non-augmented digital tooth libraries data store 1524 in block A (1530).
  • the non-augmented digital tooth libraries can be preexisting.
  • the computer system 1522 can additionally or alternatively receive, access, or retrieve non-augmented digital tooth files from one or more other sources in block B (1532).
  • the other sources can include, but are not limited to, user and/or computing devices of relevant users (e.g., dentists, orthodontists, technicians), teeth designer computing systems, dental appliance designer computing systems, fabrication devices/machines, etc.
  • the non-augmented digital tooth files can be new tooth libraries.
  • the computer system 1522 can identify and retrieve one or more models and/or rulesets for tooth layers and/or gingiva components in block C (1534). For example, the computer system 1522 can receive a non-augmented digital tooth library for a central incisor. The computer system 1522 can then access the layer models and rulesets data store 1526 and request/retrieve layer models that correspond to central incisors and/or rules for applying different color, texture, and/or transparency properties to central incisors. The computer system 1522 can also retrieve a morphology (refer to FIG. 17) from the data store 1526, which can be used to determine the number and arrangement of layers (e.g., meshes, zones) for the central incisor.
  • the computer system 1522 can also access the gingiva models and rulesets data store 1528 to request/retrieve layer models that correspond to a gingiva component of central incisors and/or rules for applying different color, texture, and/or transparency properties to the gingiva component of central incisors.
  • each type of tooth can have different layer models and/or rulesets.
  • the computer system 1522 can access the data stores 1526 and 1528 at a same time. Sometimes, the computer system 1522 can access the data stores 1526 and 1528 at different times (e.g., the data store 1526 can be accessed when the computer system 1522 is augmenting the digital tooth library for the tooth and the data store 1528 can be accessed when the computer system 1522 is augmenting the digital tooth library for the gingiva component of the tooth).
  • the computer system 1522 can apply the models and/or rulesets to the digital tooth libraries and/or digital tooth files to generate 3D layers for the tooth. In other words, the computer system 1522 can generate a 3D model of where one or more layers may exist in the tooth.
  • the computer system 1522 can also apply the models and/or rulesets to generate 3D layers for the gingiva component (if the non-augmented digital tooth library and/or file has a gingiva component or portion thereof). If, for example, the digital tooth library only has an outer surface layer, the computer system 1522 can apply the retrieved rulesets to determine how to make one or more interior layers (e.g., meshes, surfaces, zones) and shape of the interior layers. Refer to at least the process 1000 in FIGs. 10A and 10B for further discussion about generating the 3D layers for the tooth.
  • the computer system 1522 can generate gingiva for the corresponding digital tooth (block E, 1538). If the non-augmented digital tooth library does not contain any portion of the gingiva component for the tooth, then the computer system 1522 can generate the gingiva component for the tooth using the retrieved models and/or rules from the gingiva models and rulesets data store 1528. The retrieved models and/or rules can be specific to the particular type of tooth in the digital tooth library/file. Generating the gingiva can include generating 3D layers (e.g., meshes, zones, surfaces) of the gingiva component for the tooth in the digital tooth library/file. Refer to at least the process 1100 in FIGs. 11A and 11B for further discussion about generating the gingiva.
  • the computer system 1522 can generate coloring, transparency, and/or texture data for each layer of the digital tooth and/or gingiva (block F, 1540). Refer to FIGs. 15A, 16, 17, and 18 for further discussion.
  • the computer system 1522 can apply one or more of the retrieved rules to each layer of the digital tooth to assign a color, transparency, and/or texture value/identifier to the layer.
  • the rules can indicate, for each layer, a preferred color, transparency, and/or texture that the tooth can be for a particular type of dental appliance.
  • Each layer of the tooth and/or gingiva can include a unique, different RGBA value, where RGB can be a color value and A can be a transparency value.
  • generating the data for each layer can also include assigning each layer to one or more zones.
  • changes can be made to color, texture, and/or transparency in a particular zone, and those changes can be implemented across all layers and/or portions of the layers that are assigned to the particular zone. This can advantageously reduce an amount of time in the dental appliance design process while also using less compute power and fewer resources to effect changes across multiple parts of the tooth.
  • the computer system 1522 can perform same or similar operations for the gingiva layers.
  • the computer system 1522 can also generate a plurality of sub-libraries for the digital tooth library, where each sub-library contains different color, transparency, and/or texture values/identifiers for the same layers of the tooth and/or gingiva.
  • Each of the sub-libraries can provide tooth and/or gingiva properties that are unique to different dental appliances or other use cases. Refer to at least the processes 1000 in FIGs. 10A and 10B and 1100 in FIGs. 11A and 11B for further discussion about applying color, texture, and/or transparency data to layers and/or zones of a tooth and/or gingiva.
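The zone mechanism above can be sketched as a mapping from zones to the layer regions assigned to them, so a single change to a zone's RGBA value (RGB being the color, A the transparency) propagates to every assigned region. The zone and region names here are hypothetical:

```python
# Zones map to assigned layer regions; one RGBA update per zone is
# applied to every region assigned to that zone.

zones = {
    "cervical": ["mesh_1.root", "mesh_2.neck"],
    "incisal": ["mesh_2.edge", "mesh_3.edge", "mesh_4.edge"],
}

# Default RGBA per layer region.
region_rgba = {r: (255, 255, 255, 255) for regions in zones.values() for r in regions}

def set_zone_rgba(zone, rgba):
    """Apply one RGBA change across all layer regions assigned to the zone."""
    for region in zones[zone]:
        region_rgba[region] = rgba

# One call updates three regions spanning three layers at once.
set_zone_rgba("incisal", (250, 250, 245, 128))
```

A single zone-level call replacing many per-layer edits is what makes the claimed reduction in design time and compute plausible.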
  • the computer system 1522 can generate augmented digital tooth libraries.
  • the augmented digital tooth libraries can include the nesting of layers for each of the tooth and the gingiva component.
  • the augmented digital tooth libraries can include color, texture, and/or transparency values/identifiers that have been assigned, by the computer system 1522, to each of the nested layers for the tooth and the gingiva component.
  • the augmented digital tooth libraries can include a plurality of sub-libraries, where the sub-libraries include the same layers per tooth and gingiva, but may have different color, texture, and/or transparency values/identifiers assigned thereto. Storing this variety of information in the augmented digital tooth libraries can allow for dental appliances to be more quickly and efficiently designed.
  • the color, texture, and transparency properties can be stored and loaded with the digital tooth libraries, which can reduce an amount of processing time and use of compute power needed to design dental appliances using the digital tooth libraries.
  • the augmented digital tooth libraries can then be universally used for designing any type of dental appliance, including but not limited to dentures, crowns, bridges, and/or caps.
  • when designing a particular type of dental appliance, a computer system (and/or user) can select predefined color, texture, and/or transparency properties for a selected augmented digital tooth library that is intended for use in designing the particular type of dental appliance.
  • the augmented digital tooth library can therefore be stored with a plurality of different predefined color, texture, and/or transparency properties so that the augmented digital tooth library can be easily and efficiently stored and then retrieved for designing any type of dental appliance.
  • the computer system 1522 can return the augmented digital tooth libraries by storing them in the augmented digital tooth libraries data store 1502 (block H, 1544).
  • each augmented digital tooth library can have multiple possible zones where different color, texture, and/or transparency schemes can be applied.
  • Each library may also have preset colors, textures, and/or transparency levels for each zone.
  • Each library can be stored with default color, texture, and/or transparency values that can be modified later during runtime generation of dental appliances using digital tooth libraries. That way, the digital tooth libraries can be preloaded and preprocessed without having to process the libraries and add color, texture, and/or transparency values at runtime generation of dental appliances.
  • the augmented digital tooth libraries can be stored individually (e.g., per tooth) and/or in blocks or quadrants. Storing the tooth libraries in blocks or quadrants can advantageously speed up placement and positioning of the teeth in a digital dental model for designing dental appliances.
  • the digital tooth libraries described herein can be stored as 3MF files.
  • the digital tooth libraries can be STL files that can be converted and outputted as 3MF files.
  • the 3MF files can advantageously store color, texture, and transparency data in relation with digital tooth libraries and/or digital dental models.
  • One or more other file formats can also be realized and used, as described herein.
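At the container level, a 3MF file is a ZIP archive whose main part is an XML model document, which is how color, texture, and transparency data can travel with the geometry. The sketch below shows only that packaging, using the standard part names from the 3MF core specification; it omits the real mesh geometry and color/material resources a production writer would emit:

```python
import io
import zipfile

# Standard 3MF part names and content types per the 3MF core specification.
CONTENT_TYPES = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<Types xmlns="http://schemas.openxmlformats.org/package/2006/content-types">'
    '<Default Extension="rels" ContentType="application/vnd.openxmlformats-package.relationships+xml"/>'
    '<Default Extension="model" ContentType="application/vnd.ms-package.3dmanufacturing-3dmodel+xml"/>'
    '</Types>'
)
RELS = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">'
    '<Relationship Id="rel0" Target="/3D/3dmodel.model" '
    'Type="http://schemas.microsoft.com/3dmanufacturing/2013/01/3dmodel"/>'
    '</Relationships>'
)

def write_3mf(path_or_file, model_xml):
    """Package a 3D model XML document into a minimal 3MF-style ZIP container."""
    with zipfile.ZipFile(path_or_file, "w", zipfile.ZIP_DEFLATED) as z:
        z.writestr("[Content_Types].xml", CONTENT_TYPES)
        z.writestr("_rels/.rels", RELS)
        z.writestr("3D/3dmodel.model", model_xml)

buf = io.BytesIO()
write_3mf(buf, '<model unit="millimeter" '
               'xmlns="http://schemas.microsoft.com/3dmanufacturing/core/2015/02"/>')
```

In practice a library such as `lib3mf` or `trimesh` would be used rather than hand-writing the parts; the sketch only makes the container layout concrete.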
  • FIG. 15C illustrates an example system 1550 for generating fabrication instructions and/or digital teeth graphics using augmented digital tooth libraries.
  • An augmented digital tooth design and fabrication system 1554 can communicate (e.g., wired, wirelessly) with the augmented digital tooth libraries data store 1502, the gingiva models and rulesets data store 1528, a fabrication device modeling and rulesets data store 1552, a dental appliance fabrication device 1558, a display device 1556, and/or a digital dental appliance design system 1560.
  • the augmented digital tooth design and fabrication system 1554 can be any type of computing system described herein.
  • the computer system 1554 can be configured to generate fabrication instructions and/or graphical representation of digital teeth setups using augmented digital tooth libraries.
  • the fabrication device modeling and rulesets data store 1552 can be configured to store rules for generating dental appliance fabrication instructions.
  • Each fabrication device 1558 can have different rules for processing and interpreting digital dental models and digital teeth setups.
  • Each fabrication device 1558 can produce dental appliances with different types of equipment, machinery, inks/dyes, and/or file formats.
  • the rules in the data store 1552 can be used by the computer system 1554 to generate instructions for a particular fabrication device to manufacture a particular dental appliance that is designed using the disclosed techniques.
  • the dental appliance fabrication device 1558 can be any type of manufacturing, printing, and/or 3D printing machine or system. Refer to the rapid fabrication machine 119 described in at least FIG. 1A for further discussion.
  • the display device 1556 can be any type of user device, computing device, LCD screen, other type of screen, touch screen, etc. that can be configured to present information to a relevant user.
  • the display device 1556 can be part of any one or more computing systems described herein.
  • the display device 1556 can be part of the computer system 152 described in reference to at least FIG. 1A.
  • the digital dental appliance design system 1560 can be configured to design a dental appliance using a digital dental model and tooth scan data for a particular patient.
  • the computer system 1560 can be the same as or similar to the denture design system 116 described in reference to at least FIG. 1A. In some implementations, the computer system 1560 can be the same as or similar to the computer system 152 described in reference to at least FIG. 1A.
  • the augmented digital tooth design and fabrication system 1554 can receive a tooth library selection in block A (1562) from the digital dental appliance design system 1560. Additionally, the computer system 1554 can receive a dental appliance setup from the design system 1560 (block B, 1564). The tooth library selection and the setup can indicate a desired setup for the dental appliance design for a particular patient.
  • the computer system 1554 can retrieve the selected library from the augmented digital tooth libraries data store 1502 in block C (1566).
  • the selected library can include individual augmented digital tooth libraries (e.g., each library is for an individual tooth).
  • the selected library can include a block, group, set, and/or arch of teeth.
  • the computer system 1554 can retrieve any of the augmented tooth libraries that have been generated as described in reference to FIG. 15B.
  • the computer system 1554 can retrieve one or more rulesets from the gingiva models and rulesets data store 1528 and/or the fabrication device modeling and rulesets data store 1552 in block D (1568). Sometimes, the computer system 1554 can retrieve rulesets from the data store 1528 and the data store 1552 at different times. Refer to FIG. 15B for further discussion about the different rulesets and data stores.
  • the computer system 1554 can apply the dental appliance setup to the selected library (block E, 1570).
  • the computer system 1554 can generate a digital dental model using any of the techniques described herein. Refer to at least FIGs. 1A, 1B, 9, 10A, and 10B for further discussion.
  • the computer system 1554 can adjust relative positioning of meshes of the teeth in the digital dental model based on mesh reference points in block F (1572).
  • the teeth can be adjusted (e.g., moved, enlarged, shrunken) in order to better fit the digital dental model and/or achieve a desired appearance for a resulting dental appliance.
  • As the teeth are adjusted, their corresponding meshes may be automatically moved within each other and based on the reference points.
  • the reference points can be identified for particular types of teeth, such as shown in FIG. 17 with teeth morphologies.
  • the reference points can be retrieved from the gingiva models and rulesets data store 1528.
  • the reference points can indicate relative distances and/or relationships between different colored/textured shapes, layers, and/or zones that are applied to the teeth.
  • the reference points can be used as guides to move and align the teeth to permit for the teeth to be properly arranged and sized relative to each other in the teeth setup/digital dental model.
  • Each mesh, as shown and described in reference to FIGs. 16 and 18, has relative positioning next to the others, which can be used to determine how the meshes are moved as the tooth or teeth are also moved. Therefore, gingiva and respective festooning can travel with the teeth as the teeth are adjusted. Refer to at least FIG. 9 for further discussion about adjusting positioning of the meshes of the teeth.
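The nested-mesh behavior above amounts to applying one rigid transform to every layer mesh and its reference points so that all of them travel together. A minimal, translation-only sketch; the mesh names and vertex data are hypothetical:

```python
# When a tooth is repositioned during setup, the same transform is applied
# to every nested layer mesh and to the reference points, so the layers
# (and attached gingiva/festooning) move with the tooth.

def translate(points, dx, dy, dz):
    return [(x + dx, y + dy, z + dz) for (x, y, z) in points]

tooth = {
    "mesh_1": [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],  # innermost layer vertices
    "mesh_4": [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0)],  # outermost layer vertices
    "reference_points": [(0.5, 0.0, 0.5)],          # alignment guides
}

def move_tooth(tooth, dx, dy, dz):
    """Move every layer mesh and reference point by the same offset."""
    return {name: translate(pts, dx, dy, dz) for name, pts in tooth.items()}

moved = move_tooth(tooth, 2.0, 0.0, 0.0)
```

A full implementation would also handle rotation and scaling, but the invariant is the same: one transform per tooth, applied uniformly to all of its meshes.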
  • the computer system 1554 can also generate portions of gingiva and/or archways that may be missing from the teeth libraries in the digital dental model (block G, 1576). When the teeth are arranged adjacent/next to each other in the digital dental model, gaps or missing portions of the gingiva and/or archway may appear in the model.
  • the computer system 1554 can apply rules, algorithms, and/or machine learning models to fill in the gaps with portions of gingiva and/or archway to make the teeth appear cohesive and part of one arch form, bridge, or gingiva.
  • the computer system 1554 can apply one or more rules to make an appearance of the gingiva more realistic, as described further in reference to at least FIGs. 11A, 11B, and 15B.
  • the computer system 1554 may generate specific fabrication material, color, and/or printing instructions based on the teeth setup and the retrieved rules for the particular fabrication device 1558 (block H, 1578). Based on specific colors, textures, and/or transparencies of the teeth and gingiva in the teeth setup, the computer system 1554 can generate instructions to set up the particular fabrication device 1558 used to manufacture the dental appliance according to the teeth setup.
  • the computer system 1554 may generate a graphical representation of the teeth setup (block I, 1580).
  • the graphical representation can indicate how the teeth setup may appear as the dental appliance.
  • the graphical representation can indicate how the teeth setup may visually look in the patient’s mouth and relative to the rest of their physical appearance and attributes.
  • the computer system 1554 can return the completed augmented teeth setup in block J (1582).
  • the computer system 1554 can optionally return fabrication instructions 1584 to the dental appliance fabrication device 1558 (with or without the augmented teeth setup and/or the graphical representation of the teeth setup).
  • the instructions can indicate what colors, textures, and/or transparency properties are mapped to which portions of the teeth setup.
  • the instructions can indicate what types of material to use for printing and the color values that can be applied to each of the materials.
  • the instructions can beneficially provide for ensuring consistency across fabrication of similar dental appliances by the particular fabrication device 1558.
  • the fabrication device 1558 as described herein (refer to the rapid fabrication machine 119 in at least FIG. 1A) can be configured to manufacture, fabricate, produce, and/or 3D print different types of dental appliances using different types of materials, inks, dyes, etc.
  • Some fabrication devices 1558 can be configured to use light features to produce evenly-colored 3D printed objects at a quality level of traditional 2D printers. For example, some 3D printers can use a binding agent to fuse plastic powder locally. Color can then be applied at a level of 3D printing called a ‘voxel,’ which can be a 3D pixel.
  • color 3D printing can be achieved for complex objects, including but not limited to dental appliances such as dentures, crowns, bridges, and/or caps.
  • Some fabrication devices 1558 can use filaments that already contain color for 3D printing while other fabrication devices can apply color from one or more external sources for 3D printing.
  • the fabrication devices 1558 with direct color 3D printing can use colorful filaments to 3D print dental appliances, which can work with fused deposition modeling (FDM) technology (a process for making physical objects by building up successive layers of material).
  • FDM printing can include multi-jet and/or poly-jet printing. Other printing techniques may also be used with the disclosed techniques.
  • 3D color printing can be achieved if the FDM printer has a single extruder.
  • the fabrication instructions can set multiple tasks and a g-code with instructions for the 3D printer to stop at certain levels of printing.
  • the filament can be switched out for a different or other color and then the printing job can be restarted/continued.
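The single-extruder workflow above can be scripted by inserting a pause command into the g-code at the layers where the color should change. The sketch below assumes the slicer emits `; layer N` comments (an assumption, not stated in the source) and uses `M600`, a common but firmware-specific filament-change command:

```python
# Insert a pause/filament-change command into single-extruder g-code at
# chosen layers so the filament color can be switched before continuing.

def insert_color_changes(gcode_lines, change_layers, layer_marker="; layer "):
    out = []
    for line in gcode_lines:
        out.append(line)
        # Assumes the slicer marks layer starts with "; layer N" comments.
        if line.startswith(layer_marker):
            layer = int(line[len(layer_marker):])
            if layer in change_layers:
                out.append("M600 ; pause so the filament color can be switched")
    return out

gcode = ["; layer 0", "G1 X10 Y10", "; layer 1", "G1 X20 Y20"]
result = insert_color_changes(gcode, change_layers={1})
```

Whether the pause is `M600`, `M0`, or a vendor-specific code would depend on the particular fabrication device 1558, which is exactly the kind of difference the fabrication device modeling and rulesets data store 1552 is meant to capture.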
  • a dual extruder can be used, which can be used to print with 2 colors of filaments or more.
  • Some 3D printers can have additional extruders (e.g., 3, 4, 5, 8, etc.).
  • the fabrication devices 1558 can perform indirect color 3D printing by applying color from an external source during the printing process. This technology can be more precise and can allow for a more realistic appearance of the 3D prints.
  • the printing instructions can include color and texture information. When a layer of material is spread during printing, the printer can be instructed to apply color that adheres to the layer. A next layer can be spread and then the process can repeat. In some implementations, between layers, color can be applied and cured with UV light, which allows for adjusting transparency of the colors or other properties of each layer in the dental appliance that is being 3D printed.
  • the computer system 1554 can optionally return a graphical representation 1586 of the augmented teeth setup to the display device 1556 (with or without the augmented teeth setup and/or the fabrication instructions).
  • the display device 1556 can output or otherwise present the graphical representation in one or more graphical user interface (GUI) displays.
  • the outputted graphical representation can be used to show the patient and/or dentist or other relevant user what the dental appliance may look like and/or how the teeth setup may look with the patient’s physical features.
  • the graphical representation can provide a realistic view of colors, textures, and/or transparency of the teeth setup.
  • the relevant user can make adjustments to the outputted teeth setup, which can be transmitted to any of the computing systems described herein (e.g., the denture design system 116 in FIG. 1A).
  • the user adjustments can be used by the receiving computing system to modify the teeth setup for the particular patient’s dental appliance design.
  • the adjusted teeth setup can be presented back at the display device 1556 and/or used by the computer system 1554 to generate fabrication instructions for the fabrication device 1558.
  • FIG. 16 illustrates example digital teeth 1600, 1602, 1604, 1606, and 1608 with layers having different color, texture, and/or transparency properties.
  • the central incisor 1600 represents how the tooth 1600 with corresponding gingiva may appear in comparison to teeth 1610A-N from other tooth libraries that do not contain properties such as color, texture, and/or transparency.
  • the teeth 1610A-N do not have roots, nor do they have gingiva structures or layers.
  • the teeth 1610A-N each have a single mesh.
  • all of the teeth 1610A-N can be modified like the tooth 1600 to include various layers having different color, texture, and/or transparency properties.
  • the tooth 1600 can be generated using the disclosed techniques, which can include attaching a tissue-side scan to a bottom of a digital tooth library corresponding to the tooth 1600.
  • a gingiva component of the tooth 1600 could be present with or without a tooth root behind it.
  • For example, for some fabrication devices, such as printers or 3D printing machines, and for some fabrication strategies, it can be preferred to not have the root behind the gingiva. On the other hand, for some fabrication devices and/or fabrication strategies, it can be preferred to have a white color of the root reflecting at least partially through a pink color of the gingiva component.
  • the tooth and gingiva components may each have several layers (that is, each of the tooth and gingiva can be made up of multiple meshes on top of or inside each other to provide a realistic appearance and coloring when fabricating (e.g., 3D printing) a resulting dental appliance).
  • each of the teeth 1602, 1604, 1606, and 1608 have different layers of both tooth and gingiva components.
  • the tooth 1602 is depicted as having only outside layers to the tooth and gingiva components.
  • the tooth 1604 is depicted as having a root and a simple reduced layer of the tooth 1604 inside the gingival mesh.
  • the tooth 1606 is depicted as having all layers of each of the tooth and gingiva components with full transparency for each layer.
  • the tooth 1608 provides an illustrative example of the tooth having 2 layers and the gingiva component having only 1 layer.
  • a second tooth layer can be approximately 95% the size of the first layer for the tooth 1608.
  • additional layers may be required to produce an appropriate color depth and/or detail for a desired outcome, dental appearance, dental standards, etc. Refer to FIG. 18 for additional discussion about the various layers that can be determined and applied to a tooth and/or gingiva component.
  • FIG. 17 illustrates example morphology that can be sculpted on inside layers of different types of digital teeth.
  • An MX central incisor can have a morphology 1700.
  • An MX lateral incisor can have an example morphology 1702.
  • An MX canine can have an example morphology 1704.
  • an MD central incisor, an MD lateral incisor, an MD canine, premolars, and molars may all have different morphology that can be custom sculpted on an inside layer of the tooth.
  • the morphologies can be custom sculpted using one or more techniques described herein.
  • FIG. 18 illustrates an example tooth 1800 with layers having different amounts of opacity, translucency, and/or transparency.
  • any tooth can have one or more layers.
  • a tooth can have 1 layer.
  • a tooth can also have 2 layers.
  • a tooth can have 4 layers.
  • in the example of FIG. 18, the tooth 1800 has 5 layers. Each of the 5 layers can be in direct proportion to an outermost layer of the tooth 1800.
  • Each layer can also be a percentage of the outermost layer (e.g., 99.9-97.5%, 95%, 90%, 80%, etc.).
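Deriving an inner layer as a percentage of the outermost layer can be modeled as scaling the outer mesh's vertices about their centroid. A minimal sketch; the vertex data is illustrative, and the percentages mirror the examples in the text:

```python
# Derive inner layers as percentages of the outermost layer by scaling
# the outer mesh's vertices about their centroid.

def scale_about_centroid(verts, factor):
    n = len(verts)
    cx = sum(v[0] for v in verts) / n
    cy = sum(v[1] for v in verts) / n
    cz = sum(v[2] for v in verts) / n
    return [(cx + (x - cx) * factor,
             cy + (y - cy) * factor,
             cz + (z - cz) * factor) for (x, y, z) in verts]

outer = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 0.0), (0.0, 0.0, 2.0)]
# Inner layers at 95%, 90%, and 80% of the outermost layer.
layers = {pct: scale_about_centroid(outer, pct / 100.0) for pct in (95, 90, 80)}
```

Uniform scaling like this keeps each inner layer in direct proportion to the outermost layer; layers that instead carry custom morphology (as described below) would be sculpted rather than scaled.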
  • An innermost layer may be called a stump shade layer.
  • the innermost layer can represent a root layer and can sometimes, as shown in FIG. 15A, move up into a portion of gingiva for the tooth.
  • a layer that is next, or one larger than the innermost layer, can be considered a morphology layer.
  • the next larger layer can be a dentin layer, then the next can be an enamel layer, and the largest layer can be a gloss/effects layer.
  • Each of the layers can have different degrees of transparency — in other words, each layer may not just be translucent or just opaque.
  • the dentin layer for example, can be partially translucent, as defined according to one or more dentin layer rules and/or dentistry standards.
  • a 6th layer can be added to the tooth 1800, which can be a gloss/glazing layer that can eliminate the need for polishing and/or aid in providing a sheen to the outermost surface of the tooth 1800.
  • the gloss/glazing layer can advantageously reduce a polishing time and can provide a desirable polish.
  • the 6th layer can also have an option to incorporate a text box where the patient’s name or identification number can be applied during design of the patient’s dental appliance.
  • the gloss/glazing layer can be approximately 25-800 microns thick.
  • One or more inner layers of the library tooth 1800 may have custom shaping to represent a correct tooth morphology for that specific tooth in the arch.
  • the shaping may provide background texture and coloring.
  • the shaping can also aid in reflecting light back out of the tooth 1800 in different directions.
  • the diffusion of light can be essential or otherwise desired to make realistic dental objects during design and then produce a realistic 3D printed object. Accordingly, some teeth may have more than one morphology layer. Sometimes, the shapes may not be uniform between and amongst each of the morphology layers.
  • the example tooth 1800 shows differing amounts of translucency per one or more layers. Each layer and/or group of layers can have a different amount of opacity, translucency, and/or transparency.
  • the outermost gloss/glazing layer can be considered a transparent outer layer 1802.
  • the transparent outer layer 1802 can be completely transparent. In some implementations, the transparent outer layer 1802 can have a predetermined amount of transparency that is a little less than completely transparent (e.g., 99.5% transparent, 98.7%, 98.5%, 97%, 95%, etc.).
  • a second layer can be a transparent enamel.
  • a third layer can be a translucent enamel.
  • the second and/or third layers can be considered translucent layers 1804.
  • a fourth layer can be a more opaque dentin layer.
  • a fifth layer can be the opaque stump, as described herein, which can be considered an opaque inner layer 1806. Varying percentages of translucency or opaqueness can be used for each of the layers described herein. The percentages of translucency or opaqueness can be defined by a relevant user, such as a dentist preparing/designing dental appliances for the patient, and/or according to rules, standards, or patient expectations/preferences for designing the particular dental appliances. In addition, additional or fewer layers can be added for more or less detail.
  • outermost layers can be transparent (e.g., completely or near-completely transparent), moving through varying states, degrees, levels, and/or percentages of translucency until an innermost layer.
  • the innermost layer can be completely opaque or near-completely opaque.
  • changes in scaling, brushing, and/or sculpting surface layers in a tooth design can be carried proportionally to other layers of the tooth. Therefore, an intended effect of the tooth library can remain, despite being altered for designing a particular patient’s dental appliances.
  • a bottom of a library mesh may be open to aid in connecting the library mesh with an object it is intended to be joined to.
  • the object to connect to can be a tissue side scan, which can then become a tissue side of the dentures.
  • the object to connect to can be a tooth preparation scan.
  • the layered library techniques described herein can be relevant and applicable for designing all different types of dental appliances, including but not limited to dentures, implants, crowns, bridges, and other types of dental prostheses.
  • FIG. 19 illustrates a tooth cross section 1900 having mesh layers 1902A-N that can be exported to a 3D color printer.
  • Triangle vertices of the mesh layers 1902A-N can be assigned color and/or opacity values using the disclosed techniques.
  • the mesh layers 1902A-N can be exported, using a computer system described herein, to a 3D color printer in one or more different file formats, including but not limited to a 3MF file.
  • the tooth cross section 1900 is represented in a voxel volume 1904.
  • the voxel volume 1904 can be a 3D voxel volume.
  • the voxel volume 1904 includes a layer of voxels 1906A-N.
  • the layer of voxels 1906A-N can be a single layer.
  • the layer of voxels 1906A-N can be a 2D layer of voxels within the voxel volume 1904.
  • Some 3D printers may accept voxel volume files, which can allow for the disclosed technology to set each voxel in the voxel volume 1904 independently.
  • setting each voxel in the volume 1904 independently can provide graduated color and/or opacity changes in addition to one or more textures (e.g., speckling, other realistic details) to the 3D-printed tooth. Therefore, the color and/or opacity values that are assigned to the mesh layers 1902A-N of the tooth can be exported and applied in 3D printing of that tooth.
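As an illustrative sketch only (the patent does not specify an implementation; the function name, NumPy array layout, and concentric-shell layering are assumptions), the per-voxel assignment described above could be modeled by grading color and opacity from an opaque core to a transparent outer shell:

```python
import numpy as np

def fill_voxel_volume(shape, layer_colors, layer_opacities):
    """Fill a voxel volume with concentric layers, assigning each voxel an
    RGBA value so color and opacity grade from an opaque innermost layer
    to a transparent outermost layer (illustrative only)."""
    depth, height, width = shape
    volume = np.zeros((depth, height, width, 4), dtype=np.float32)
    center = np.array([depth, height, width]) / 2.0
    max_r = center.min()
    n_layers = len(layer_colors)
    zi, yi, xi = np.indices(shape)
    # Normalized radial distance of each voxel from the volume center;
    # voxels outside the unit sphere stay zero (fully transparent).
    dist = np.sqrt((zi - center[0]) ** 2
                   + (yi - center[1]) ** 2
                   + (xi - center[2]) ** 2) / max_r
    for i in range(n_layers):
        # Layer i occupies the radial band [i/n, (i+1)/n); i = 0 is innermost.
        mask = (dist >= i / n_layers) & (dist < (i + 1) / n_layers)
        volume[mask, :3] = layer_colors[i]
        volume[mask, 3] = layer_opacities[i]
    return volume
```

For example, a three-layer tooth cross section might use an opaque dentin-like core, a semi-translucent enamel band, and a near-transparent gloss shell: `fill_voxel_volume((20, 20, 20), [(0.9, 0.85, 0.7), (0.95, 0.9, 0.8), (1.0, 1.0, 1.0)], [1.0, 0.5, 0.05])`.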


Abstract

Disclosed are systems, methods, and techniques for modeling a patient's mouth with color data. A method can include receiving, by a computer system, oral scan data for a patient including (i) a dental impression generated by a dental impression station, (ii) image data of the patient's mouth generated by an image capture system, and/or (iii) motion data of the patient's jaw movement generated by a motion capture system, identifying, for at least one tooth represented in the oral scan data, a first zone, a second zone, and a third zone, which can be non-overlapping zones, identifying a statistical color value for each identified zone for the tooth, and generating a digital dental model for the patient based on mapping the identified statistical color value for each identified zone for the tooth to corresponding zones for a tooth represented in the digital dental model.

Description

SYSTEMS FOR GENERATING, STORING, AND USING AUGMENTED DIGITAL TOOTH
LIBRARIES DESIGN
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of priority to U.S. Provisional Application No. 63/506,517, filed June 6, 2023, and U.S. Provisional Application No. 63/421,565, filed November 1, 2022, both of which are incorporated herein by reference in their entirety.
TECHNICAL FIELD
[0002] This document generally describes devices, systems, and methods related to the generation, storage, and use of augmented digital tooth libraries, for example, which can include computer-based modeling a patient’s teeth and gingiva to generate a digital denture design for the patient with augmented features, such as different layers of the patient’s teeth and gingiva along with corresponding color, transparency, and/or texture data.
BACKGROUND
[0003] A denture is a dental prosthesis that is made to replace missing teeth. Dentures are often supported by surrounding soft and hard tissue of a patient’s oral cavity. For example, a denture may be designed to fit over and be supported by a patient’s gum tissue. Dentures may include a denture base region that is formed from an acrylic material and colored to appear similar to gum tissue. Denture teeth formed from acrylic or other materials may be secured to the denture base. The denture teeth may also be colored to appear similar to real teeth.
[0004] There are a variety of types of dentures. For example, dentures may be fixed or removable, implant-supported or non-implant supported. Additionally, dentures may be complete (e.g., replacing teeth of an entire dental arch) or partial (e.g., replacing less than all of the teeth of the dental arch). A fixed denture is not intended to be removed by a patient during ordinary use. Typically, a fixed denture is placed by a care provider, such as a dentist or prosthodontist, and is removed, if necessary, by the care provider. A fixed denture may, for example, be secured to one or more dental implants. A removable denture is made such that a patient may (and usually should) remove the denture during ordinary use. For example, the patient may remove the denture on a daily basis for overnight cleaning.
[0005] Non-implant supported, removable dentures are often held in place by a suction fit between a bottom of the denture and the patient’s gum tissue. The bases of removable dentures are generally fabricated to closely follow a shape of the patient’s gum tissue. When the base is pressed against the patient’s gum tissue, air may be forced out, creating a low-pressure suction seal between the denture base and the patient’s tissue. Partial removable dentures may include clasps that mechanically secure the denture to the patient’s remaining teeth. Implant-supported dentures are designed to couple to dental implants that have been implanted in the patient’s mouth. Implant-supported dentures may be fixed or removable. Some implant-supported dentures may be removable by the patient to allow for cleaning.
[0006] When properly made and fit, dentures may provide numerous benefits to the patient. These benefits include improved mastication (chewing) as the denture replaces edentulous (gum tissue) regions with denture teeth. Additional benefits include improved aesthetics when the patient’s mouth is open due to the presence of denture teeth and when the patient’s mouth is closed due to cheek and lip support provided by the denture structure. Another benefit of dentures is improved pronunciation as the presence of properly sized front teeth is important for making several speech sounds.
SUMMARY
[0007] This document generally describes technology for augmenting digital tooth libraries with color, texture, transparency/translucency, and/or layers data. The augmented digital tooth libraries can then be retrieved during runtime for designing dental appliances for patients, such as dentures, and fabricating the dental appliances. The disclosed technology can provide for anatomical tooth libraries where teeth are already in bridges and/or 2 or more teeth (or blocks), which can save time in design and use of compute processing power and resources. Such tooth libraries can also be used to prevent unnecessary alterations, steps, and/or customization operations during the design process of dental appliances. The disclosed technology can provide any anatomical libraries having a tooth and gingiva together in the library, which can also be used to improve an amount of time in designing and operating of compute processing power and other processing resources. The disclosed technology can also be used to generate any standalone gingival library. The disclosed technology can be used to generate any anatomical library where a tooth or teeth have multiple layers as well as internal segments, which can reflect individual tooth and/or root morphology to provide color depth and accuracy in 3D printed or other fabricated dental appliances. As a result, the fabricated dental appliances can reflect a natural dentition, thereby improving patient experience and appearance. Using the disclosed technology, any color dental library can also be generated and maintained, such as libraries for teeth and/or gingiva. In some implementations, the disclosed technology can provide for generation of dental libraries where teeth (e.g., groups of teeth and/or individual teeth) have layers of color, texture, and/or transparency properties to provide realistic color depth that also travels with the gingiva, which may also be all layered.
As a result, the components of the library (e.g., tooth and gingiva) as well as their corresponding layers can be moved and adjusted in parametric fashion and/or by ratios in a coordinated fashion in order to preserve an overall appearance of the library, regardless of how the library may be scaled and/or altered (e.g., to fit different arch forms). As yet another example, the disclosed technology can be used to generate a full color library with layered teeth and gingiva where upper and lower tooth arrangements may already be completed and desired or preferred occlusion with each other. The completed unit/arrangement of teeth and gingiva can be scaled without compromising the occlusion or function. This completed unit/arrangement can also be quickly, easily, and efficiently adapted to a tissue surface to complete a dental appliance design in a single step. As a result, the disclosed techniques can provide for reducing an amount of time needed to design dental appliances and more efficient use of processing power and other compute resources. Similarly, the disclosed technology can be used to generate and maintain any type of tooth library described herein that is in grey-scale or otherwise does not include color data (e.g., the tooth library has no color or otherwise is colorless).
[0008] The document also describes technology for generating a digital dental model for a patient with color data corresponding to teeth and gingiva in the patient’s mouth. More specifically, the disclosed technology provides for receiving a scan of the patient’s mouth, identifying color data for teeth and gingiva in the scan, and assigning the color data in layers and/or grades to the patient’s teeth and gingiva represented in a digital dental model of the patient’s mouth. The digital dental model with the assigned color data can then be used to fabricate dentures for the patient. Although the disclosed technology is described for assigning color data to the patient’s teeth and gingiva in the digital dental model, the disclosed technology may similarly be used to assign texture data to the patient’s teeth and gingiva in the digital dental model.
[0009] Transferring accurate color from the scan of the patient’s mouth to the dentures (e.g., dental prosthetic) is an important aspect of data collection. Traditionally, a care provider, such as a dentist or dental assistant, uses a shade guide and their own perception of color to choose best match color(s) for denture design. The resulting color of manufactured dentures may not match colors of other teeth and/or gingiva in the patient’s mouth. Sometimes, the manufactured dentures may look fake or unrealistic, especially in certain lighting conditions. The disclosed techniques, therefore, provide for accurately identifying various colors of the patient’s teeth and gingiva from the scan of the patient’s mouth and blending or otherwise adjusting those colors across one or more layers that are defined for the teeth and gingiva to generate a realistic-looking digital denture design. Such color data can be provided, with the digital denture design, to a rapid fabrication machine, 3D printer, or other multi-layer manufacturing system/device to fabricate the dentures for the patient.
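As a simple illustration of the per-zone color identification described above (a sketch with hypothetical names; the patent leaves the choice of statistic and zone annotation method open), a per-zone mean of sampled scan pixels could be computed as:

```python
def zone_statistical_colors(pixels, zone_labels):
    """Compute a statistical (here: mean) RGB color per annotated zone.
    `pixels` is a list of (r, g, b) tuples sampled from the oral scan;
    `zone_labels` gives each pixel's zone, e.g. "cervical", "middle",
    or "incisal". Names and structure are illustrative only."""
    sums, counts = {}, {}
    for (r, g, b), zone in zip(pixels, zone_labels):
        sr, sg, sb = sums.get(zone, (0.0, 0.0, 0.0))
        sums[zone] = (sr + r, sg + g, sb + b)
        counts[zone] = counts.get(zone, 0) + 1
    # Average each channel sum by the number of samples in that zone.
    return {z: tuple(c / counts[z] for c in sums[z]) for z in sums}
```

The resulting per-zone values could then be mapped onto the corresponding zones of the digital dental model, or matched against a shade guide, as described elsewhere in this document.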
[0010] One or more implementations described herein can include a method for modeling a patient’s mouth with color data, the method including: receiving, by a computer system, oral scan data for a patient, the oral scan data including at least one of (i) a dental impression generated by a dental impression station, (ii) image data of the patient’s mouth generated by an image capture system, and (iii) motion data of the patient’s jaw movement generated by a motion capture system, identifying, by the computer system and for at least one tooth represented in the oral scan data for the patient, a first zone, a second zone, and a third zone, in which the first, second, and third zones may be non-overlapping zones, identifying, by the computer system, a statistical color value for each identified zone for the at least one tooth, and generating, by the computer system, a digital dental model for the patient based on mapping the identified statistical color value for each identified zone for the at least one tooth to corresponding zones for at least one tooth represented in the digital dental model.
[0011] In some implementations, the implementations described herein can optionally include one or more of the following features. For example, the method can also include generating, by the computer system, a digital denture model for the patient based on the digital dental model having the mapped color values. The method can also include transmitting, by the computer system to a rapid fabrication machine, instructions that, when executed by the rapid fabrication machine, cause the rapid fabrication machine to manufacture dentures based on the digital denture model for the patient. The instructions may include at least one data file having information about the mapped color values for printing the dentures in corresponding dental colors.
[0012] As another example, the method can include identifying, by the computer system and for each identified zone for the at least one tooth, at least one layer, and identifying, by the computer system, a statistical color value for each identified zone for the at least one tooth can include identifying a statistical color value for the at least one layer for the identified zone. The method can include assigning, by the computer system, the statistical color value for each identified zone to a predetermined dental color. The method can also include adjusting, by the computer system, the statistical color value for at least one of the identified zones for the at least one tooth. Adjusting, by the computer system, the statistical color value may include blending the statistical color value across at least two of the identified zones. The method can also include simulating, by the computer system, ray-tracing of the digital dental model for the patient to identify deviations in at least one of the statistical color values that exceeds a threshold color value. The method can also include adjusting, by the computer system, the at least one of the statistical color values based on the identified deviation.
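The blending and deviation checks in this example could be sketched as follows. This is illustrative only: linear interpolation stands in for whatever blending the system uses, and a per-channel difference threshold stands in for the ray-tracing-based deviation check, which the text does not specify in detail:

```python
def blend_zone_colors(color_a, color_b, t):
    """Linearly blend two zone colors; t = 0 gives color_a, t = 1 gives color_b."""
    return tuple(a + (b - a) * t for a, b in zip(color_a, color_b))

def deviation_exceeds_threshold(color_a, color_b, threshold):
    """True when the per-channel difference between two statistical color
    values exceeds the threshold, flagging a value that may need adjustment."""
    return any(abs(a - b) > threshold for a, b in zip(color_a, color_b))
```

For instance, a harsh transition between the cervical and middle zones could be softened by inserting blended values (`blend_zone_colors(cervical, middle, 0.5)`) along the zone boundary.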
[0013] As another example, the method can include identifying, by the computer system and for the at least one tooth, a tooth type, retrieving, by the computer system from a data store, texture data associated with the identified tooth type, and assigning, by the computer system, the texture data to at least one zone for the at least one tooth. The at least one zone for the at least one tooth can be the second zone. Identifying, by the computer system and for at least one tooth represented in the oral scan data for the patient, a first zone, a second zone, and a third zone can include identifying annotations for library teeth in the oral scan data for the patient. Identifying, by the computer system, a statistical color value for each identified zone for the at least one tooth can include: mapping a first color value representing a band of chroma to the first zone. The first zone can represent a cervical third of the at least one tooth and the first color value can be mapped to the first zone using annotations in the oral scan data that indicate halo sockets as reference points. The method can also include adjusting a dominance level of the first color value to exceed a threshold level of dominance.
[0014] As yet another example, identifying, by the computer system, a statistical color value for each identified zone for the at least one tooth may include mapping a second color value to the second zone, the second zone representing a middle third of the at least one tooth. The method can also include adjusting a level of opacity of the mapped second color value to exceed a threshold level of opacity. The second color value can be at least one of a yellow shade and a white shade.
[0015] Sometimes, identifying, by the computer system, a statistical color value for each identified zone for the at least one tooth may include: mapping a third color value to the third zone, the third zone representing an incisal third of the at least one tooth. The method can also include adjusting a level of translucency of the third color value to exceed a threshold level of translucency. Mapping a third color value to the third zone may include mapping the third color value as a color band across an annotated incisal third of the at least one tooth in the oral scan data. The incisal third of the at least one tooth can be annotated differently based on a tooth type. Mapping a third color value to the third zone may include mapping the third color value onto an annotated incisal third of the at least one tooth based on landmarks in the oral scan data that indicate IP contacts.
[0016] As another example, the method can include adjusting, by the computer system, a gradient of the statistical color values across the first, second, and third zones. The method can include performing, by the computer system, a color calibration process on the oral scan data. The method can also include identifying, by the computer system, at least one zone across multiple teeth in the oral scan data, the multiple teeth in the oral scan data being a threshold distance from each other along a dental arch of the patient. Sometimes, the statistical color value, for each zone, can be an average of color values identified for the zone. The statistical color value, for each zone, can be a mean color value determined from a group of color values identified for the zone. The statistical color value, for each zone, can be a summation of a group of color values identified for the zone.
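Adjusting a gradient of statistical color values across the cervical, middle, and incisal zones could, for illustration, be a piecewise-linear interpolation between the three zone colors. The function name and interpolation scheme below are assumptions, not the claimed method:

```python
def zone_gradient(cervical, middle, incisal, steps):
    """Return `steps` colors grading from the cervical zone color through
    the middle zone color to the incisal zone color (piecewise linear)."""
    def lerp(a, b, t):
        return tuple(x + (y - x) * t for x, y in zip(a, b))
    out = []
    for i in range(steps):
        t = i / (steps - 1) if steps > 1 else 0.0
        if t <= 0.5:
            # First half of the tooth: cervical toward middle.
            out.append(lerp(cervical, middle, t * 2))
        else:
            # Second half: middle toward incisal.
            out.append(lerp(middle, incisal, (t - 0.5) * 2))
    return out
```

Each returned color could be assigned to a band of mesh vertices running from the gingival margin to the incisal edge.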
[0017] Sometimes, the method may also include identifying, by the computer system and for a gingiva represented in the oral scan data for the patient, at least one zone, and identifying, by the computer system, a statistical color value for the at least one zone for the gingiva. The method can include identifying, by the computer system, a texture value for the at least one zone for the gingiva. The at least one zone may include a first zone, a second zone, and a third zone. The first, second, and third zones for the gingiva may be non-overlapping zones. The method can also include mapping, by the computer system, at least one of a first color and a first texture to an annotated portion of the gingiva in the oral scan data that corresponds to socket halos. The first color can be a top surface color for the gingiva and the first texture can be a top surface texture for the gingiva. The method may include mapping, by the computer system, at least one of a second color and a second texture to an annotated portion of the gingiva in the oral scan data that corresponds to each root eminence in the gingiva. The second color can be a middle surface color for the gingiva and the second texture can be a middle surface texture for the gingiva. The method may include mapping, by the computer system, at least one of a third color and a third texture to an annotated portion of the gingiva in the oral scan data that corresponds to portions of the gingiva between annotations. The third color can be a bottom surface color for the gingiva and the third texture can be a bottom surface texture for the gingiva.
[0018] One or more embodiments described herein can include a method for modeling a patient’s mouth with color data, the method including: receiving, by a computer system, oral scan data for a patient, the oral scan data including at least one of (i) a dental impression generated by a dental impression station, (ii) image data of the patient’s mouth generated by an image capture system, and (iii) motion data of the patient’s jaw movement generated by a motion capture system, identifying, by the computer system and for at least a portion of teeth represented in the oral scan data for the patient, at least one zone, identifying, by the computer system, a statistical color value for the at least one zone for the portion of teeth, mapping, by the computer system, the statistical color value to a predetermined dental color value for teeth, generating, by the computer system, a digital dental model for the patient based at least in part on the oral scan data for the patient and the mapped predetermined dental color value for the teeth, and transmitting, by the computer system to a rapid fabrication machine, data representative of the digital dental model with the mapped predetermined dental color value for the teeth. The rapid fabrication machine can be configured to manufacture at least a portion of dentures for the patient based on the transmitted data.
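Mapping a statistical color value to a predetermined dental color, as in the method above, could amount to a nearest-neighbor lookup against a shade guide. The sketch below uses squared RGB distance and illustrative shade entries; the patent's actual library entries and matching metric are not specified, and a production system might compare in a perceptual color space instead:

```python
def nearest_dental_shade(measured_rgb, shade_guide):
    """Return the name of the shade-guide entry closest to the measured
    statistical color, by squared RGB distance (illustrative only)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(shade_guide, key=lambda name: dist2(measured_rgb, shade_guide[name]))
```

For example, with a hypothetical guide `{"A1": (235, 225, 210), "A2": (225, 210, 190), "B1": (240, 235, 220)}`, a measured zone color of `(226, 212, 192)` would map to shade `"A2"`.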
[0019] The method can optionally include any one or more of the abovementioned features.
[0020] One or more embodiments described herein can include a method for modeling a patient’s mouth with color data, the method including: receiving, by a computer system, oral scan data for a patient, the oral scan data including at least one of (i) a dental impression generated by a dental impression station, (ii) image data of the patient’s mouth generated by an image capture system, and (iii) motion data of the patient’s jaw movement generated by a motion capture system, identifying, by the computer system and for at least a portion of gingiva represented in the oral scan data for the patient, at least one zone, identifying, by the computer system, a statistical color value for the at least one zone for the portion of gingiva, mapping, by the computer system, the statistical color value to a predetermined dental color value for gingiva, generating, by the computer system, a digital dental model for the patient based at least in part on the oral scan data for the patient and the mapped predetermined dental color value for the gingiva, and transmitting, by the computer system to a rapid fabrication machine, data representative of the digital dental model with the mapped predetermined dental color value for the gingiva. The rapid fabrication machine can be configured to manufacture at least a portion of dentures for the patient based on the transmitted data.
[0021] The method can optionally include one or more of the abovementioned features.
[0022] One or more embodiments described herein include a method for modeling a patient’s mouth with color data, the method including: receiving, by a computer system, oral scan data for a patient, the oral scan data including at least one of (i) a dental impression generated by a dental impression station, (ii) image data of the patient’s mouth generated by an image capture system, and (iii) motion data of the patient’s jaw movement generated by a motion capture system, identifying, by the computer system, (i) at least one zone for at least a portion of teeth represented in the oral scan data for the patient and (ii) at least one zone for at least a portion of gingiva represented in the oral scan data for the patient, identifying, by the computer system, (i) at least one statistical color value for the at least one zone for the portion of teeth and (ii) at least one statistical color value for the at least one zone for the portion of gingiva, generating, by the computer system, a digital dental model for the patient based at least in part on the oral scan data for the patient and the identified statistical color values, and transmitting, by the computer system to a rapid fabrication machine, data representative of the digital dental model with the identified statistical color values. The rapid fabrication machine can be configured to manufacture at least a portion of dentures for the patient based on the transmitted data.
[0023] One or more of the methods described throughout this document can optionally include the statistical color value being selected from among a plurality of predetermined standard dental colors that are part of a color tooth library. The statistical color can be selected for each of the identified zones for the at least one tooth. The statistical color can be selected as a predetermined standard dental color from among the plurality of predetermined standard dental colors that most closely matches one or more color values derived from the oral scan data. The color tooth library can be specific to one or more individual teeth. The color tooth library can be for a full denture library. In some implementations, the color tooth library can be archived in a data store and retrieved, by the computer system, for generating digital dental models for one or more patients.
[0024] The method can optionally include one or more of the abovementioned features.
[0025] One or more of the embodiments described herein can include a method for augmenting a digital tooth library with one or more properties, the method including: accessing, by an augmentation computer system from a non-augmented digital tooth libraries data store, a non-augmented digital tooth library, retrieving, by the augmentation computer system from at least one models and rulesets data store, models and rulesets for tooth layers and gingiva, generating, by the augmentation computer system, 3D layers for the non-augmented digital tooth library based on applying a first portion of the retrieved models and rulesets to the non-augmented digital tooth library, the first portion corresponding to a first subset of the models and rulesets for teeth, generating, by the augmentation computer system, 3D layers for gingiva for the non-augmented digital tooth library based on applying a second portion of the retrieved models and rulesets to the non-augmented digital tooth library, the second portion corresponding to a second subset of the models and rulesets for gingiva, generating, by the augmentation computer system, values for one or more properties for each of the 3D layers of the non-augmented digital tooth library and the gingiva, the one or more properties including at least one of color, texture, and transparency properties, generating, by the augmentation computer system, an augmented digital tooth library that can include the 3D layers for the non-augmented digital tooth library and the gingiva and the generated values for the one or more properties for each of the 3D layers, and returning, by the augmentation computer system, the augmented digital tooth library.
[0026] The method can optionally include one or more of the abovementioned features. In some implementations, the method can optionally include one or more of the following features. For example, generating, by the augmentation computer system, 3D layers for the non-augmented digital tooth library can include generating: an opaque innermost layer, at least one translucent layer that can be larger than the opaque innermost layer and can overlay at least a portion of the opaque innermost layer, and at least one transparent outermost layer that can be larger than the at least one translucent layer and can overlay at least a portion of the opaque innermost layer and the at least one translucent layer. The at least one translucent layer can include a translucency value that may be greater than a translucency value of the opaque innermost layer but may be less than a translucency value of the at least one transparent outermost layer.
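The ordering constraint in this example (an opaque innermost layer, increasingly translucent middle layers, and a transparent outermost layer) could be generated and validated as follows. This is a sketch under assumed names; the actual rulesets are not specified:

```python
def layer_translucencies(n_layers, inner=0.0, outer=1.0):
    """Generate translucency values for n concentric layers, increasing
    monotonically from an opaque innermost layer (0.0) to a transparent
    outermost layer (1.0), as the layering rules require."""
    if n_layers == 1:
        return [inner]
    step = (outer - inner) / (n_layers - 1)
    return [inner + i * step for i in range(n_layers)]

def is_valid_layer_order(translucencies):
    """Check the rule: each layer is strictly more translucent than the
    layer it overlays (values listed innermost to outermost)."""
    return all(a < b for a, b in zip(translucencies, translucencies[1:]))
```

For the 5-layer tooth described earlier (stump, morphology, dentin, enamel, gloss/effects), `layer_translucencies(5)` yields evenly spaced values from fully opaque to fully transparent; non-uniform spacing per dentistry standards could be substituted while still passing `is_valid_layer_order`.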
[0027] One or more embodiments described herein can include a method for generating an augmented teeth setup using augmented digital tooth libraries, the method including: receiving, by a computer system, a tooth library selection and a dental appliance setup from a design computing system, retrieving, by the computer system, the selected tooth library from an augmented digital tooth libraries data store, the selected tooth library including 3D layers for a tooth, 3D layers for corresponding gingiva, and predefined values for the one or more properties for each of the 3D layers, the 3D layers including meshes and the one or more properties including at least one of color, texture, and transparency properties, retrieving, by the computer system and from one or more data stores, one or more rulesets for generating the dental appliance setup, applying, by the computer system, the dental appliance setup to the selected tooth library, adjusting, by the computer system, relative positioning of the meshes of the selected tooth library based on mesh reference points corresponding to the selected tooth library, generating, by the computer system, portions of the gingiva that may be missing from the selected tooth library based on applying a portion of the retrieved rulesets that correspond to the gingiva, and returning, by the computer system, an augmented teeth setup that may include the dental appliance setup with the adjusted selected tooth library, the generated portions of the gingiva, and the one or more properties for each of the 3D layers that corresponds to the selected tooth library.
[0028] The method can optionally include one or more of the abovementioned features. In some implementations, the method can optionally include one or more of the following features. For example, the method can also include generating specific fabrication material, color, and printing instructions based on the augmented teeth setup and a fabrication device. The fabrication device can be a 3D printer. The method can include transmitting, by the computer system to the fabrication device, fabrication instructions for execution by the fabrication device in printing a dental appliance based on the augmented teeth setup. The method can include generating, by the computer system, a graphical representation of the augmented teeth setup. The method can include transmitting, by the computer system, the graphical representation to a display device. The display device can be configured to output the graphical representation in a graphical user interface (GUI) at the display device.
[0029] The devices, systems, and techniques described herein may provide one or more of the following advantages. For example, the disclosed technology improves design automation by reducing an amount of time needed to design dentures for a patient and increasing accuracy in color assignment for the dentures design. The disclosed technology can automatically analyze patient oral scan data to identify different colors and/or textures in the patient’s teeth and gingiva, then map the identified colors and/or textures to a digital dental model for the patient. This eliminates potential human error and time needed to visually inspect the patient’s mouth and guess what colors match the patient’s teeth and gingiva. The disclosed technology can determine color data for the patient’s teeth and gingiva with high accuracy and in little time, thereby resulting in realistic-looking dentures and rapid design-to-manufacturing of the dentures.

[0030] Similarly, the disclosed technology provides for generation of realistic-colored and realistic-textured dentures that blend in with other teeth and gingiva in the patient’s mouth. Computer-simulated ray-tracing techniques may be used to identify shading and/or brightness of assigned colors in the digital dental model and accordingly adjust those colors to appear more realistic in similar lighting conditions when the dentures are worn by the patient. Furthermore, the disclosed technology provides for manufacturing realistic-looking dentures by layering and blending the color data in the digital dental model. Colors can be identified for various zones on the patient’s teeth and gingiva. Some zones may be overlapping while other zones may not. The disclosed technology can provide for adjusting various characteristics of colors assigned to overlapping zones such that the colors blend together and make the resulting dentures appear more realistic.
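As a non-limiting illustration, blending colors where two zones overlap can be sketched as a simple linear blend. The zone names, RGB values, and the equal-weight blend are illustrative assumptions, not details from the disclosure:

```python
def blend_rgb(color_a, color_b, weight=0.5):
    """Linearly blend two RGB colors; weight is the share of color_a."""
    return tuple(
        round(weight * a + (1.0 - weight) * b)
        for a, b in zip(color_a, color_b)
    )


# Example: blending a hypothetical enamel-zone color with a cutback-zone
# color at their boundary so the transition appears gradual, not abrupt.
enamel = (245, 240, 230)
cutback = (230, 215, 190)
boundary = blend_rgb(enamel, cutback, weight=0.5)
```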
[0031] The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features and advantages will be apparent from the description and drawings, and from the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0032] FIG. 1A is a conceptual diagram of a system for generating a digital dental model for a patient with color data.
[0033] FIG. 1B is a schematic block diagram of the system of FIG. 1A for fabricating a motion-based denture based on the digital dental model.
[0034] FIG. 2 is a schematic block diagram illustrating an example motion capture system for capturing jaw movement.
[0035] FIG. 3 illustrates a block diagram of an example patient assembly of FIG. 2.
[0036] FIG. 4 illustrates an example implementation of the clutch of FIG. 3.
[0037] FIGs. 5A-B are cross-sectional side views that illustrate attachment of a dentition coupling device of the clutch or reference structure of FIG. 2 to a dental implant.
[0038] FIG. 6 is an example of the motion capture system of FIGs. 1A-B in which two screens are used.
[0039] FIG. 7 illustrates a top view of a reference structure of FIG. 3 and the imaging system of FIGs. 1A-B.
[0040] FIG. 8 illustrates a perspective view of the reference structure of FIG. 7 disposed between the screens of the imaging system of FIG. 7.
[0041] FIG. 9 is a flowchart of an example process for fabricating a denture for a patient.

[0042] FIGs. 10A-B are a flowchart of a process for determining teeth and gingiva color data for a digital dental model of a patient.
[0043] FIG. 11A is a flowchart of a process for determining color data for gingiva in a digital dental model of a patient.
[0044] FIG. 11B illustrates an example digital dental model of the patient in FIG. 11A for determining the gingiva color data.
[0045] FIGs. 12A-B are a flowchart of a process for determining color data for teeth in a digital dental model of a patient.
[0046] FIG. 12C illustrates an example digital dental model of the patient in FIGs. 12A-B for determining the teeth color data with horizontal slicing techniques.
[0047] FIG. 12D illustrates another example digital dental model of the patient in FIGs. 12A-B for determining the teeth color data with vertical slicing techniques.
[0048] FIG. 13A is an example digital dental model having annotations for defining teeth and gingiva color data.
[0049] FIG. 13B is an example digital dental model having cutback layers of teeth color data.
[0050] FIG. 13C is an example digital dental model having color and texture data for teeth and gingiva.
[0051] FIG. 14 is a schematic diagram that shows an example of a computing device and a mobile computing device.
[0052] FIG. 15A illustrates an example system for storing digital tooth libraries that are augmented with data such as color, layers, texture, and/or transparency data.
[0053] FIG. 15B illustrates an example system for generating augmented digital tooth libraries from new and/or preexisting digital tooth libraries.
[0054] FIG. 15C illustrates an example system for generating fabrication instructions and/or digital teeth graphics using augmented digital tooth libraries.
[0055] FIG. 16 illustrates example digital teeth with layers having different color, texture, and/or transparency properties.
[0056] FIG. 17 illustrates example morphology that can be sculpted on inside layers of different types of digital teeth.

[0057] FIG. 18 illustrates an example tooth with layers having different amounts of opacity, translucency, and/or transparency.
[0058] FIG. 19 illustrates a tooth cross section having mesh layers that can be exported to a 3D color printer.
[0059] Like reference symbols in the various drawings indicate like elements.
DETAILED DESCRIPTION OF ILLUSTRATIVE IMPLEMENTATIONS
[0060] This document generally relates to generating digital dental models with color data to manufacture realistic-looking dentures. More specifically, a digital dental model can be generated for a patient using a variety of data, such as an oral scan. The disclosed technology can process the oral scan to identify one or more colors corresponding to different zones and/or layers of the patient’s teeth and/or gingiva. The identified colors can be mapped in layers to teeth and/or gingiva in the digital dental model for the patient and automatically adjusted to appear more realistic in various lighting or other conditions. The digital dental model with the color mappings can then be used to manufacture dentures for the patient that look realistic and match teeth and/or gingiva already in the patient’s mouth.
[0061] As described herein, a motion-based digital denture design system may be used to capture actual motion data from the patient to aid in the design of dentures, dental prosthetics, or other dental appliances (e.g., crowns). The motion data may provide for dentures that fit the patient better than conventionally-designed dentures that may not use actual motion data. For example, teeth of the dentures may be positioned so as to avoid interfering with opposing teeth (e.g., opposing actual teeth or denture teeth) during patient biting motion. As a result, the disclosed technology may reduce chair time and number of visits required to fit dentures to the patient. The disclosed technology may also provide for designing dentures that have balanced occlusal support throughout functional movements (e.g., excursive movements).
[0062] Referring to the figures, FIG. 1A is a conceptual diagram of a system 100 for generating a digital dental model for a patient with color data. In the system 100, a computer system 152 may communicate (e.g., wired and/or wireless) with a denture design system 116, rapid fabrication machine 119, and denture library 154 via network(s) 110. The computer system 152 can be any type of computing device, network of computing devices and/or systems, and/or cloud-based system described herein. The computer system 152 can be configured to collect and/or capture data about a patient 150 in a dental office 102. Sometimes, the computer system 152 can be remote from the patient 150 and/or the dental office 102.
[0063] The computer system 152 can communicate via the network(s) 110 with a dental impression station 106, an image capture system 107, and a motion capture system 200. Sometimes, the computer system 152 can be integrated into or otherwise part of at least one of the dental impression station 106, the image capture system 107, and the motion capture system 200. The computer system 152 can generate instructions that cause the dental impression station 106, the image capture system 107, and/or the motion capture system 200 to capture data of the patient 150. Refer to FIGs. 1B-2 for further discussion about the dental impression station 106, the image capture system 107, and the motion capture system 200.
[0064] The denture design system 116 can be configured to automatically design digital dental models for patients, such as the patient 150, based on data collected by the computer system 152. The denture design system 116 can also assign and map color data to teeth and gingiva in the digital design model of the patient 150 to then be used in manufacturing dentures for the patient. In some implementations, the denture design system 116 can be part of the computer system 152. The rapid fabrication machine 119 can be configured to manufacture/produce dentures for the patient 150 based on design information that is generated and provided by the denture design system 116. In some implementations, the rapid fabrication machine 119 can be part of the denture design system 116 and/or the computer system 152.
Refer to FIGs. 1B and 9 for further discussion about the denture design system 116 and the rapid fabrication machine 119.
[0065] The denture library 154 can be any type of database, data store, cloud-based storage, and/or data repository that is configured to store information about digital dental models and denture designs. The denture library 154 can store the abovementioned information in association with particular patients, such as a digital dental model for the patient 150. The denture library 154 can also store denture designs, for example, that are generic and may apply to a variety of patients. Refer to FIGs. 13A-C for further discussion about the information stored in the denture library 154.

[0066] Still referring to FIG. 1A, the computer system 152 can capture patient dental and/or denture data in block A. The data can include dental impression data, images of the patient 150’s mouth, and/or motion data, all of which may be captured and generated by the respective dental impression station 106, the image capture system 107, and the motion capture system 200. As described herein, the captured data can include oral scan data of the patient 150’s mouth. Refer to FIG. 1A for further discussion about capturing the patient dental and/or denture data.
[0067] In block B, the computer system 152 can transmit the patient data to the denture design system 116. Sometimes, the patient data can be stored in a database, such as the denture library 154, and then retrieved at a later time for further processing.
[0068] The denture design system 116 can also retrieve denture color and/or texture data in block C. For example, the system 116 can retrieve information indicating one or more known color values used for coloring dentures. When processing the patient data, the system 116 can then correlate colors identified in the patient data with one or more of the known color values for dentures to accurately identify what colors can be used for manufacturing dentures for the patient 150. Similarly, when identifying zones and/or layers for teeth and/or gingiva based on the patient data, the system 116 can also assign one or more known texture values to the identified zones and/or layers. In other words, the denture library 154 can maintain a mapping of textures to zones or layers of teeth and gingiva. The mapping can be used to select appropriate textures to apply to the zones or layers of teeth and gingiva that are identified for the particular patient 150 from the patient data.
[0069] In some implementations, block C can be performed in one or more other orders. For example, block C can be performed after block F, which is described further below. The data retrieved in block C can be used to adjust one or more colors and/or texture that is mapped to a digital dental model for the patient 150. Sometimes, block C can be performed after block E such that the retrieved data is used to perform the mapping described in block F. One or more other orders of operations are also possible.
[0070] In block D, the denture design system 116 can generate a digital dental model based on the transmitted data. Refer to FIGs. 1B-9 for further discussion about generating the digital dental model.

[0071] The denture design system 116 can identify zones of teeth and/or gingiva in the digital dental model and layers for each identified zone in block E. The system 116 can apply one or more rules, algorithms, and/or machine learning models to identify various zones for each of the teeth and the gingiva appearing in the digital dental model for the patient 150. As an illustrative example, the system 116 can identify 2 zones for each tooth, the zones including a top or enamel layer and a cutback layer. The system 116 can also identify 2 zones for the gingiva, including a base layer and a festoon layer. Therefore, the digital dental model for the patient 150 can be built with 4 zones, each of which can be assigned different color data and/or texture data. Although each zone can be defined independently of one another, all the zones can be printed by the rapid fabrication machine 119 as part of a singular object (e.g., dentures). Moreover, the zones may not be divided by physical space in the digital dental model but can have boundaries defined by characteristics corresponding to each zone. The characteristics include color. The characteristics may also include texture. Furthermore, as described herein, color assigned to each zone can be blended across zones or otherwise adjusted in some way that causes resulting dentures to appear realistic when manufactured and worn by the patient 150.
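As a non-limiting illustration, the four-zone structure in the example above (two tooth zones and two gingiva zones) can be sketched as a simple mapping. The zone names follow the text; the data layout is an illustrative assumption:

```python
# Two zones per tooth and two per gingiva, per the illustrative example.
DENTURE_ZONES = {
    "tooth": ["enamel", "cutback"],
    "gingiva": ["base", "festoon"],
}


def all_zones():
    """Flatten the zone map; each zone can later carry its own color/texture."""
    return [
        (region, zone)
        for region, zones in DENTURE_ZONES.items()
        for zone in zones
    ]
```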
[0072] In block F, the denture design system 116 maps at least one color value and/or texture value to the layers in each identified zone. As mentioned above, different zones and layers per zone allow for making dentures having more color depth and a more realistic appearance once manufactured and worn by the patient 150. As described herein, portions of the digital dental model can be sampled to identify one or more colors per layer and/or per zone. The identified colors can be mapped or otherwise assigned to known dental colors, which can be retrieved in block C from the denture library 154. The identified colors can be adjusted using additional processing techniques (e.g., computer-simulated ray-tracing) to make dentures manufactured with the corresponding color data appear more realistic when worn by the patient 150. In some implementations, one or more of the colors can be identified by a care provider, such as a dentist, and provided as user input to the denture design system 116 to be mapped to one or more layers in the identified zones in the digital dental model.
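As a non-limiting illustration of mapping a sampled color to a known dental color in block F, a nearest-neighbor match in RGB space can be sketched as follows. The shade names and RGB values below are hypothetical, not an actual dental shade guide:

```python
import math

# Hypothetical library of known denture shades (illustrative values only).
KNOWN_SHADES = {
    "shade_A1": (248, 244, 235),
    "shade_A2": (240, 231, 214),
    "shade_A3": (232, 218, 193),
}


def nearest_shade(sampled_rgb):
    """Return the library shade with the smallest Euclidean RGB distance."""
    return min(
        KNOWN_SHADES,
        key=lambda name: math.dist(KNOWN_SHADES[name], sampled_rgb),
    )
```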
[0073] In some implementations, the disclosed techniques are used to map texture values to the teeth and/or gingiva in the digital dental model of the patient 150, thereby making the resulting dentures appear more realistic. Typically, different parts of any patient’s teeth and gingiva have different textures. Mapping data can indicate mappings of various textures to different zones defined for teeth and gingiva. The mapping data can therefore be used by the denture design system 116 to apply stock or expected texture values to specific zones defined in the digital denture model for the patient 150. For example, some zones of teeth may always have a ribbed texture and a base of dentures may always have a stippling texture. The stock or expected texture values can therefore be stored in the denture library 154, retrieved therefrom, and applied to particular zones in any digital dental model for any patient.
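As a non-limiting illustration, a stock texture-to-zone mapping of the kind described above can be sketched as a lookup table. The "ribbed" and "stippling" entries follow the examples in the text; the remaining entries and the default value are illustrative assumptions:

```python
# Hypothetical stock textures keyed by (region, zone); illustrative only.
STOCK_TEXTURES = {
    ("tooth", "enamel"): "ribbed",
    ("tooth", "cutback"): "smooth",
    ("gingiva", "base"): "stippling",
    ("gingiva", "festoon"): "smooth",
}


def texture_for(region, zone, default="smooth"):
    """Look up the stock texture applied to a given region/zone pair."""
    return STOCK_TEXTURES.get((region, zone), default)
```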
[0074] Refer to FIGs. 10-13 for further discussion about identifying zones and/or layers of teeth and gingiva in the digital dental model (block E) and mapping color values to the identified layers and/or zones (block F).
[0075] The denture design system 116 can transmit the digital dental model with the mapped color and/or texture values to the rapid fabrication machine 119 (block G). Sometimes, the system 116 can store the model with the mapped values in the denture library 154 for future retrieval and use in manufacturing dentures for the patient 150.
[0076] The rapid fabrication machine 119 can fabricate dentures for the patient 150 based on the digital dental model and the mapped color and/or texture values (block H). As described herein, the machine 119 can directly 3D print the mapped color and/or texture values on the dentures rather than copying and/or printing dentures with existing or library-stock color and/or texture values. Refer to FIGs. 1B and 9 for further discussion about manufacturing the dentures based on the digital dental model and mapped color and/or texture values.
[0077] FIG. 1B is a schematic block diagram of the system 100 of FIG. 1A for fabricating a motion-based denture based on a digital dental model. In this example, the system 100 includes the dental office 102 of FIG. 1A and a dental lab 104. The example dental office 102 includes the motion capture system 200 (described further with respect to at least FIG. 2), the dental impression station 106, the image capture system 107, and a dental therapy station 126. Although shown as separate components, the image capture system 107 may be a sub-component of the motion capture system 200 (as described elsewhere). Although shown as a single dental office 102, in some implementations, the dental office 102 includes multiple dental offices. For example, one or more of the dental impression station 106, the image capture system 107, and the motion capture system 200 can be in a different dental office than the dental therapy station 126. Further, one or more of the dental impression station 106, the motion capture system 200, and the dental therapy station 126 may not be located in a dental office.
[0078] The example dental impression station 106 is configured to generate a dental impression 108 of dentition of a patient (e.g., the patient 150 in FIG. 1A). The dental impression 108 is a geometric representation of the dentition of the patient, which may include teeth (if any) and edentulous (gum) tissue, or gingiva as described herein. In some implementations, the dental impression 108 is a physical impression captured using an impression material, such as sodium alginate, polyvinylsiloxane or another impression material.
[0079] In some implementations, the dental impression 108 is a digital impression. The digital impression may be represented by one or more of a point cloud, a polygonal mesh, a parametric model, or voxel data. The digital impression can be generated directly from the dentition of the patient, using for example an intraoral scanner. Example intraoral scanners include the TRIOS Intra Oral Digital Scanner, the Lava Chairside Oral Scanner C.O.S., the Cadent iTero, the Cerec AC, the Cyrtina IntraOral Scanner, and the Lythos Digital Impression System from Ormco. In other implementations, a digital impression is captured using other imaging technologies, such as computed tomography (CT), including cone beam computed tomography (CBCT), ultrasound, and magnetic resonance imaging (MRI). In yet other implementations, the digital impression is generated from a physical impression by scanning the impression or plaster model of the dentition of the patient created from the physical impression. Examples of technologies for scanning a physical impression or model include three-dimensional laser scanners and computed tomography (CT) scanners. In yet other implementations, digital impressions can be created using other technologies.
[0080] The motion capture system 200 is configured to capture a representation of movement of dental arches relative to each other in the patient’s mouth. In some implementations, the motion capture station generates motion data 110. The dental impression 108 can also be used to generate a patient-specific dentition coupling device for capturing patient motion using the motion capture system 200. Some implementations described herein may use other types of motion capture systems to generate motion data of the patient’s mouth.
[0081] In some implementations, the motion capture system 200 generates the motion data 110 from optical measurements of the dental arches that are captured while the dentition of the patient is moved. The optical measurements can be extracted from image or video data recorded while the dentition of the patient is moved. Additionally, the optical measurements can be captured indirectly. For example, the optical measurements can be extracted from images or video data of one or more devices (e.g., a patient assembly such as the patient assembly 204 that is illustrated and described with respect to at least FIGs. 2-3) that are secured to a portion of the dentition of the patient. The motion data 110 can be generated using other processes as well. Further, the motion data 110 may include transformation matrices that represent position and orientation of the dental arches. The motion data 110 may include a series of transformation matrices that represent various motions or functional paths of movement for the patient’s dentition. Other implementations of the motion data 110 are possible as well.
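As a non-limiting illustration, motion data represented as a series of transformation matrices can be sketched as 4x4 homogeneous transforms, one per captured frame. The rotation axis and angle values below are made up for the example:

```python
import math


def jaw_pose(angle_rad, tx, ty, tz):
    """Build a 4x4 homogeneous transform: rotation about the x-axis plus a translation."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return [
        [1.0, 0.0, 0.0, tx],
        [0.0, c, -s, ty],
        [0.0, s, c, tz],
        [0.0, 0.0, 0.0, 1.0],
    ]


# A short hypothetical "hinging" sequence: the mandible rotating open in
# 5-degree increments about a hinge axis, with no translation.
motion_data = [jaw_pose(math.radians(a), 0.0, 0.0, 0.0) for a in (0, 5, 10)]
```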
[0082] Still images can be captured of the patient’s dentition while the dentition of the patient is positioned in a plurality of bite locations. Image processing techniques can then be used by any of the disclosed computing systems to determine positions of the patient’s upper and lower arches relative to each other (either directly or based on the positions of the attached patient assembly 204). In some implementations, the motion data 110 can be generated by interpolating between the positions of the upper and lower arches determined from at least some of the captured images.
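As a non-limiting illustration, interpolating between two captured bite positions can be sketched as follows. Poses are reduced to simple translation vectors here for brevity; a full implementation would interpolate rotations as well (e.g., via quaternion slerp). The position values are illustrative assumptions:

```python
def interpolate_positions(pose_a, pose_b, steps):
    """Return `steps` evenly spaced poses from pose_a to pose_b inclusive."""
    frames = []
    for i in range(steps):
        t = i / (steps - 1) if steps > 1 else 0.0
        frames.append(tuple(a + t * (b - a) for a, b in zip(pose_a, pose_b)))
    return frames


# Example: five frames between a hypothetical open and closed mandible pose.
open_pose = (0.0, 0.0, 12.0)
closed_pose = (0.0, 0.0, 0.0)
frames = interpolate_positions(open_pose, closed_pose, steps=5)
```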
[0083] The motion data 110 may be captured with the patient’s jaw in various static positions or moving through various motions. For example, the motion data 110 may include a static measurement representing a centric occlusion (e.g., the patient’s mandible closed with teeth fully engaged) or centric relation (e.g., the patient’s mandible nearly closed, just before any shift occurs that is induced by tooth engagement or contact) bite of a patient. The motion data 110 may also include static measurements or sequences of data corresponding to protrusive (e.g., the patient’s mandible being shifted forward while closed), lateral excursive (e.g., the patient’s mandible shifted/rotated left and right while closed), hinging (e.g., the patient’s mandible opening and closing without lateral movement), chewing (e.g., the patient’s mandible chewing naturally to, for example, determine the most commonly used tooth contact points), and border movements (e.g., the patient’s mandible is shifted in all directions while closed, for example, to determine the full range of motion) of the patient’s jaw. In some implementations, the motion data is captured while the patient is using a Lucia jig or leaf gauge so that the patient’s teeth (for patients who are not completely edentulous) do not impact/contribute to the movement data. This motion data 110 may be used to determine properties of the patient’s temporomandibular joint (TMJ). For example, hinging motion of the motion data 110 may be used to determine the location of the hinge axis of the patient’s TMJ.
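As a non-limiting illustration, one standard way to estimate a hinge-axis direction from hinging motion is to recover the rotation axis of a 3x3 rotation matrix via its antisymmetric part. This sketch is a common technique supplied for illustration, not a method taken verbatim from the disclosure:

```python
import math


def rotation_axis(r):
    """Return the unit rotation axis of a 3x3 rotation matrix (non-zero rotation)."""
    ax = r[2][1] - r[1][2]
    ay = r[0][2] - r[2][0]
    az = r[1][0] - r[0][1]
    norm = math.sqrt(ax * ax + ay * ay + az * az)
    return (ax / norm, ay / norm, az / norm)


# Example: a 10-degree rotation about the x-axis (a pure hinging motion)
# yields a hinge axis pointing along x.
c, s = math.cos(math.radians(10)), math.sin(math.radians(10))
hinge = rotation_axis([[1, 0, 0], [0, c, -s], [0, s, c]])
```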
[0084] In some implementations, a representation of the motion of the hinge axis may be displayed while the motion data 110 is being captured. For example, a computing device may cause a line segment to be displayed in relation to a representation of the patient’s dentition. The line segment may be displayed at a location that is approximately where the patient’s condyle is located. The line segment may move in concert with the relative motion of the patient’s mandible (lower dentition). Visually, the movement of the line may appear to rotate at a location approximately equal to the hinge axis of the patient’s TMJ. Furthermore, during motion capture the caregiver may annotate the motion data to identify portions of the motion data such as the motion data corresponding to hinging open/closed. For example, the caregiver may actuate an input such as a button on a user interface, a physical button, or a foot pedal to annotate portions of the motion data.
[0085] The image capture system 107 is configured to capture image data 109 of the patient. The image data 109 may include one or more static images or videos of the patient. The static images or frames with the image data 109 may be associated with the motion data 110. For example, a specific image from the image data 109 may be associated with a specific frame of the motion data 110, indicating that the specific image was captured while the patient’s jaw was in the position indicated by the specific frame of the motion data 110. In some implementations, the image capture system 107 includes a three-dimensional camera and the image data 109 may include one or more three-dimensional images. Examples of three-dimensional cameras include stereo cameras (e.g., using two or more separate image sensors that are offset from one another). The three-dimensional camera may also include a projector such as a light projector or laser projector that operates to project a pattern on the patient’s face. For example, the projector may be offset relative to the camera or cameras so that the images captured by the camera include distortions of the projected pattern caused by the patient’s face. Based on these distortions, the three-dimensional structure of portions of the patient’s face can be approximated. Various implementations project various patterns such as one or more stripes or fringes (e.g., sinusoidally changing intensity values). In some implementations, the three-dimensional image is captured in relation to the motion capture system 200 or a portion thereof so that the three-dimensional images can be related to the same coordinate system as the motion data.
[0086] Still referring to FIG. 1B, the example dental lab 104 includes a 3D scanner 112, the denture design system 116, the rapid fabrication machine 119, and a denture fabrication station 122. Although shown as a single dental lab, the dental lab 104 may also include multiple dental labs. For example, the 3D scanner 112 can be in a different dental lab than one or more of the other components shown in the dental lab 104. Further, one or more of the components shown in the dental lab 104 may not be physically located in a dental lab. For example, one or more of the 3D scanner 112, denture design system 116, rapid fabrication machine 119, and denture fabrication station 122 can be physically located in the dental office 102. Additionally, some implementations of the system 100 may not include all of the components shown in the dental lab 104 in FIG. 1B.
[0087] The example 3D scanner 112 is a device that can be configured to create a three-dimensional digital representation of the dental impression 108. In some implementations, the 3D scanner 112 generates a point cloud, a polygonal mesh, a parametric model, or voxel data representing the dental impression 108. In some implementations, the 3D scanner 112 generates a digital dental model 114. In some implementations, the 3D scanner 112 includes a laser scanner, a touch probe, and/or an industrial CT scanner. Yet other implementations of the 3D scanner 112 are possible as well. Further, sometimes, the system 100 may not include the 3D scanner 112. For example, where the dental impression station 106 generates a digital dental impression, the 3D scanner 112 may not be included. In these implementations, the dental impression 108 may be the digital dental model 114 or may be used directly to generate the digital dental model 114.
[0088] The denture design system 116 is a system that is configured to generate denture data 118. In some implementations, the denture data 118 is three-dimensional digital data that represents a denture component 120 and is in a format suitable for fabrication using the rapid fabrication machine 119. The denture design system 116 may use the digital dental model 114, the image data 109, and the motion data 110 to generate the denture data 118. For example, the denture design system 116 may generate a denture base having a geometric form that is shaped to fit a portion of the digital dental model 114 (e.g., a portion of the model representing an edentulous region of the patient’s dentition). The denture design system 116 may also determine various parameters that are used to generate the denture data 118 based on the image data 109. For example, implementations of the denture design system 116 may use various image processing techniques to estimate a vertical dimension parameter from the image data 109. Additionally, the denture design system 116 may use the motion data 110 to design the denture data 118. For example, the denture design system may use the motion data to ensure that the denture design avoids interferences with the opposing dentition (or dentures) during the bite motion represented by the motion data 110.
[0089] In some implementations, the denture design system 116 includes a computing device having one or more user input devices. The denture design system 116 may include computer-aided design (CAD) software that generates a graphical display of the denture data 118 and allows an operator to interact with and manipulate the denture data 118. In some implementations, the denture design system 116 may include a user interface that allows a user to specify or adjust parameters of the denture design such as vertical dimension, overbite, overjet, or tip, torque, and rotation parameters for one or more denture teeth.
[0090] For example, the denture design system 116 may include virtual tools that mimic the tools and techniques used by a laboratory technician to physically design a denture. In some implementations, the denture design system 116 includes a user interface tool to move a digital representation of the patient’s dentition (e.g., the digital dental model 114) according to the motion data 110 (which may be similar to a physical articulator). Additionally, the denture design system 116 can include a server that partially or fully automates generation of designs of the denture data 118, which may use the motion data 110.
[0091] In some implementations, the rapid fabrication machine 119 can include one or more three-dimensional printers. Another example of the rapid fabrication machine 119 is stereolithography equipment. Yet another example of the rapid fabrication machine 119 is a milling device, such as a computer numerically controlled (CNC) milling device. In some implementations, the rapid fabrication machine 119 is configured to receive files in STL format. The received files can include denture design data, including but not limited to color and/or texture data assigned to different layers and/or zones of teeth and gingiva in the denture design data. Other implementations of the rapid fabrication machine 119 are possible as well.
[0092] The rapid fabrication machine 119 is configured to use the denture data 118 to fabricate a denture component 120. The denture component 120 can be a physical component that is configured to be used as part or all of the denture 124. For example, the denture component 120 can be milled from zirconium, acrylic, or another material that is used directly as a dental appliance. In other implementations, the denture component 120 can be a mold formed from wax or another material that is to be used indirectly (e.g., through a lost wax casting or ceramic pressing process) to fabricate the denture 124. Further, the denture component 120 can be formed using laser sintering technology.
[0093] The rapid fabrication machine 119 may include a 3D printer that fabricates the denture 124 directly from a material that is suitable for placement in the patient’s mouth. In some implementations, the 3D printer may fabricate the denture teeth with the denture base. For example, the rapid fabrication machine 119 may print parts using multiple materials. A first material may be used to fabricate the denture base and a second material may be used to fabricate the denture teeth. The first material may have a different color, or multiple different colors, than the second material. For example, the first material may have a pink color that is similar to the color of gingiva and the second material may have a white or cream color that is similar to the color of teeth. In some implementations of the rapid fabrication machine 119, the denture teeth and the denture base can be fabricated together as a monolithic whole formed from multiple materials having multiple different colors. Beneficially, it may not be necessary to determine a common draw angle for the denture teeth, remove undercuts from the denture base, or couple the fabricated denture teeth to the denture base.
[0094] As described herein, the rapid fabrication machine 119 can receive a single file that includes 3D model components (e.g., 3D meshes) that are to be formed using a variety of materials and/or colors. The file may include color information. For example, the model file may be divided into volumetric zones or surface zones where the zones are each associated with different color information. The rapid fabrication machine 119 may then fabricate (e.g., 3D print) the parts together using the colors identified in the color information in the file. The rapid fabrication machine 119 may include reservoirs (or spools) of material or filament in multiple colors to allow for printing unitary denture components having multiple different colors. Some implementations of the machine 119 may include three, four, five, or twelve material reservoirs of different colors. An example may include red, yellow, and blue materials in different reservoirs that may be combined to form many different colors, shades of colors, gradients of colors, and/or layering of colors in different zones of teeth and/or gingiva being printed as the denture 124. Some implementations may also include a dark (e.g., black) color material to alter brightness of the combined material. The color may be specified in terms of CMYK (cyan, magenta, yellow, key (black)). Some implementations may also include a white reservoir that can be used to alter the brightness of the material blend. The rapid fabrication machine 119 may include other reservoirs of material that are, for example, colors commonly used in dentures to reduce the need to blend colors.
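By way of a brief illustrative sketch (the function names and the simple subtractive color model are assumptions for illustration, not part of the disclosure), a CMYK specification can be converted to an RGB preview value, and reservoir colorants can be blended by volume fraction, as follows:

```python
def cmyk_to_rgb(c, m, y, k):
    """Convert CMYK components (each in 0..1) to an 8-bit RGB preview value."""
    r = round(255 * (1 - c) * (1 - k))
    g = round(255 * (1 - m) * (1 - k))
    b = round(255 * (1 - y) * (1 - k))
    return (r, g, b)


def blend_reservoirs(fractions, reservoir_cmyk):
    """Blend reservoir colorants by volume fraction (fractions sum to 1)."""
    return tuple(sum(f * color[i] for f, color in zip(fractions, reservoir_cmyk))
                 for i in range(4))
```

For example, equal parts of a pure-cyan reservoir and a pure-magenta reservoir blend to a half-cyan, half-magenta colorant.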
[0095] In some implementations, the rapid fabrication machine 119 can print with a material having a color determined from multiple polygons or vertices. For example, the rapid fabrication machine 119 may determine color on a surface of a polygon using a process similar to Gouraud shading by, for example, variably blending the colors of vertices of the polygon based on position with respect to (distance to) the vertices. The interior of the fabricated parts may be fabricated with a material based on a blend of the surface colors. The interior of the fabricated parts may be fabricated with a material based on other factors, such as cost reduction (e.g., using materials with lower costs), weight reduction (e.g., using materials with lower weights), or other material properties such as strength (e.g., using materials with a desired strength or other material property).
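The Gouraud-style surface blending referenced above can be sketched with barycentric weights; the fragment below is a hypothetical illustration of the technique, not an implementation of the rapid fabrication machine 119:

```python
def barycentric_color(p, tri, colors):
    """Gouraud-style blend: interpolate vertex colors at point p in a 2D triangle."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    px, py = p
    denom = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / denom
    w2 = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / denom
    w3 = 1 - w1 - w2
    # Each channel is the weight-blended sum of the three vertex colors.
    return tuple(w1 * c1 + w2 * c2 + w3 * c3
                 for c1, c2, c3 in zip(*colors))
```

Evaluated at a vertex, the blend returns that vertex's color; evaluated at the centroid, it returns the average of the three vertex colors.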
[0096] In some implementations, the rapid fabrication machine 119 generates layers by horizontally slicing the model, as described further in reference to FIGs. 11-12. The colors throughout a layer may be determined by blending colors from the surface (edges of the slice). Beneficially, by blending colors, vertices or polygons may not need to be added simply to alter the color of the fabricated dentures 124. Adding vertices and polygons increases the amount of data that must be transmitted to the rapid fabrication machine 119 and the amount of memory required in the rapid fabrication machine 119 to store the model. Additionally, algorithms used to perform the horizontal slicing and other steps of printing may perform faster and require fewer processor cycles when the model has fewer polygonal surfaces and vertices.
[0097] Some implementations of the rapid fabrication machine 119 may be configured to receive model files in a voxel format in which color data is associated with each voxel in the model or with each corner of each voxel. The color data may be blended across the voxel based on distance from corners by the rapid fabrication machine 119 during printing.
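Blending per-corner voxel color data by distance from the corners, as described for the voxel format in paragraph [0097], is commonly done by trilinear interpolation; a minimal sketch (names hypothetical, not part of the disclosure) follows:

```python
def trilinear_color(corner_colors, fx, fy, fz):
    """Blend the colors at a voxel's 8 corners by fractional position (0..1 each).

    corner_colors is indexed [z][y][x]; each entry is an (r, g, b) tuple.
    """
    def lerp(a, b, t):
        return tuple((1 - t) * ai + t * bi for ai, bi in zip(a, b))
    # Blend along x on each of the four x-aligned edges, then along y, then z.
    c00 = lerp(corner_colors[0][0][0], corner_colors[0][0][1], fx)
    c01 = lerp(corner_colors[0][1][0], corner_colors[0][1][1], fx)
    c10 = lerp(corner_colors[1][0][0], corner_colors[1][0][1], fx)
    c11 = lerp(corner_colors[1][1][0], corner_colors[1][1][1], fx)
    return lerp(lerp(c00, c01, fy), lerp(c10, c11, fy), fz)
```

At a corner the blend returns that corner's color exactly, and at the voxel center it returns the mean of all eight corner colors.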
[0098] Still referring to FIG. 1B, the denture fabrication station 122 operates to fabricate the denture 124 for the patient. The denture fabrication station 122 uses the denture component 120 produced by the rapid fabrication machine 119 in some implementations. In some implementations, the denture 124 is a complete or partial denture. The denture 124 may include one or both of a maxillary denture and a mandibular denture. In some implementations, the denture 124 is formed from an acrylic, ceramic, or metallic material. In some implementations, the dental impression 108 is used in the fabrication of the denture 124. In some implementations, the dental impression 108 is used to form a plaster model of the dentition of the patient. Additionally, a model of the dentition of the patient can be generated by the rapid fabrication machine 119. The denture fabrication station 122 can include equipment and processes to perform some or all of the techniques used in traditional dental laboratories to generate dental appliances. Other implementations of the denture fabrication station 122 are possible as well.
[0099] In some implementations, the denture 124 is seated in the mouth of the patient in the dental therapy station 126 by a dentist. The dentist can confirm that an occlusal surface of the denture 124 is properly defined by instructing the patient to engage in various bites.
[0100] Additionally, the dental office 102 may be connected to the dental lab 104 via a network. The network may be an electronic communication network that facilitates communication between the dental office 102 and the dental lab 104. An electronic communication network is a set of computing devices and links between the computing devices. The computing devices in the network use the links to enable communication among the computing devices in the network. The network can include routers, switches, mobile access points, bridges, hubs, intrusion detection devices, storage devices, standalone server devices, blade server devices, sensors, desktop computers, firewall devices, laptop computers, handheld computers, mobile telephones, and other types of computing devices.
[0101] In various implementations, the network includes various types of links. For example, the network can include one or both of wired and wireless links, including Bluetooth, ultra-wideband (UWB), 802.11, ZigBee, and other types of wireless links. Furthermore, in various implementations, the network is implemented at various scales. For example, the network can be implemented as one or more local area networks (LANs), metropolitan area networks, subnets, wide area networks (such as the Internet), or can be implemented at another scale.
[0102] In some implementations, the system 100 also plans treatments for implant-supported dentures. For example, the system 100 may determine appropriate positions for implants based on a denture design. Some implementations may generate digital design data for an implant surgical guide and fabricate the implant surgical guide using rapid fabrication technology.
Beneficially, in at least some of these implementations, the location of implants can be determined based, at least in part, on the design of the final dentures.
[0103] Although not shown in this figure, some implementations of the system 100 may integrate with one or more of an inventory management system and a parts management system. Based on the design of a denture or implant-supported denture treatment plan, a part pick list may be generated that lists the different components (e.g., denture teeth, implant abutments, support components). An inventory system may also be updated to adjust the quantities of parts and one or more orders may be generated and directed to one or more suppliers.
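The pick-list and inventory-adjustment flow described above can be illustrated with a minimal sketch (the part names and data layout are hypothetical, not part of the disclosure): components in a treatment plan are tallied, on-hand quantities decremented, and shortages routed to supplier orders.

```python
from collections import Counter


def build_pick_list(design_components, inventory):
    """Tally parts needed for a design, decrement inventory, and flag shortages."""
    needed = Counter(design_components)
    orders = {}
    for part, qty in needed.items():
        on_hand = inventory.get(part, 0)
        if on_hand < qty:
            orders[part] = qty - on_hand  # quantity to order from a supplier
        inventory[part] = max(on_hand - qty, 0)
    return dict(needed), orders
```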
[0104] FIG. 2 is a schematic block diagram illustrating an example motion capture system 200 for capturing jaw movement. In this example, the motion capture system 200 includes an imaging system 202, a patient assembly 204, and a motion determining device 206. Also shown in FIGs. 1A-1B are a patient and a network.
[0105] In some implementations, the imaging system 202 includes an optical sensing assembly 210 and a screen assembly 212. The optical sensing assembly 210 may capture a plurality of images as the patient’s jaw moves. For example, the optical sensing assembly 210 may include one or more cameras such as video cameras. In some implementations, the optical sensing assembly 210 captures a plurality of images that do not necessarily include the patient assembly, but can be used to determine the position of the patient assembly 204. For example, the patient assembly 204 may emit lights that project onto surfaces of the screen assembly 212 and the optical sensing assembly 210 may capture images of those surfaces of the screen assembly 212. In some implementations, the optical sensing assembly 210 does not capture images but otherwise determines the position of the projected light or lights on the surfaces of the screen assembly 212.
[0106] The screen assembly 212 may include one or more screens. A screen may include any type of surface upon which light may be projected. Some implementations include flat screens that have a planar surface. Some implementations may include rounded screens, having cylindrical (or partially cylindrical) surfaces. The screens may be formed from a translucent material. For example, the locations of the lights projected on the screens of the screen assembly 212 may be visible from a side of the screens opposite the patient assembly 204 (e.g., the screen assembly 212 may be positioned between the optical sensing assembly 210 and the patient assembly 204).
[0107] In addition to capturing the images, the imaging system 202 may capture or generate various information about the images. As an example, the imaging system 202 may generate timing information about the images. Although alternatives are possible, the timing information can include a timestamp for each of the images. Alternatively or additionally, a frame rate (e.g., 10 frames/second, 24 frames/second, 60 frames/second) is stored with a group of images. Other types of information that can be generated for the images include an identifier of a camera, a position of a camera, or settings used when capturing the image.
[0108] The patient assembly 204 is an assembly that is configured to be secured to the patient. The patient assembly 204 or parts thereof may be worn by the patient and may move freely with the patient (i.e., at least a part of the patient assembly 204 may, when mounted to the patient, move in concert with patient head movement). In contrast, in at least some implementations, the imaging system 202 is not mounted to the patient and does not move in concert with patient head movement.
[0109] In some implementations, the patient assembly 204 may include light emitters (or projectors) that emit a pattern of light that projects on one or more surfaces (e.g., screens of the screen assembly 212), which can be imaged to determine the position of the patient assembly 204. For example, the light emitters may emit beams of substantially collimated light (e.g., laser beams) that project onto the surfaces as points. Based on the locations of these points on the surfaces, a coordinate system can be determined for the patient assembly 204, which can then be used to determine a position and orientation of the patient assembly 204 and the patient’s dentition.
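Geometrically, the point at which a substantially collimated beam strikes a screen reduces to a ray-plane intersection. The following sketch (illustrative only; not part of the claimed subject matter) computes the projected point:

```python
def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Return where a collimated beam meets a screen plane, or None if it never does."""
    denom = sum(d * n for d, n in zip(direction, plane_normal))
    if abs(denom) < 1e-12:
        return None  # beam is parallel to the screen
    t = sum((p - o) * n for p, o, n in zip(plane_point, origin, plane_normal)) / denom
    if t < 0:
        return None  # screen lies behind the emitter
    return tuple(o + t * d for o, d in zip(origin, direction))
```

A beam emitted along the x-axis from the origin, for example, strikes a screen plane at x = 2 at the point (2, 0, 0), while a beam parallel to that screen yields no intersection.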
[0110] In some implementations, the patient assembly 204 includes separate components that are configured to be worn on the upper dentition and the lower dentition and to move independently of each other so that the motion of the lower dentition relative to the upper dentition can be determined. Examples of the patient assembly 204 are illustrated and described throughout, including in FIG. 3.
[0111] The motion determining device 206 determines the motion of the patient assembly 204 based on images captured by the imaging system 202. In some implementations, the motion determining device 206 includes a computing device that uses image processing techniques to determine three-dimensional coordinates of the patient assembly 204 (or portions of the patient assembly) as the patient’s jaw is in different positions. For example, images captured by the optical sensing assembly 210 of screens of the screen assembly 212 may be processed to determine the positions on the screens at which light from the patient assembly is projected. These positions on the screens of the screen assembly 212 may be converted to three-dimensional coordinates with respect to the screen assembly 212. From those three-dimensional coordinates, one or more positions and orientations of the patient assembly 204 (or components of the patient assembly 204) may be determined.
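Recovering a position and orientation from such three-dimensional coordinates can be sketched as constructing an orthonormal coordinate frame from three non-collinear points (an illustrative simplification; a practical system may instead use least-squares fitting over many points):

```python
def frame_from_points(p1, p2, p3):
    """Derive a position and an orthonormal orientation from three non-collinear points."""
    def sub(a, b): return tuple(ai - bi for ai, bi in zip(a, b))
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    def normalize(a):
        length = sum(ai * ai for ai in a) ** 0.5
        return tuple(ai / length for ai in a)
    x = normalize(sub(p2, p1))            # first axis: toward the second point
    z = normalize(cross(x, sub(p3, p1)))  # normal to the plane of the three points
    y = cross(z, x)                       # completes a right-handed frame
    return p1, (x, y, z)                  # position (origin) and orientation axes
```

Here the frame origin is the first point, the first axis points toward the second point, and the remaining axes complete a right-handed frame.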
[0112] Based on the determined positions and orientations of the patient assembly 204, some implementations determine the relative positions and movements of the patient’s upper and lower dentition. Further, some implementations infer the location of a kinematically derived axis that is usable in modeling the motion of the patient’s mandible (including the lower dentition) about the temporomandibular joint. The kinematically derived axis may be a hinge axis or a screw axis. For example, the hinge axis may be derived from a portion of the motion data (e.g., the motion data corresponding to a hinging open/closed of the patient’s jaw). The hinge axis location may also be determined based on radiographic imaging such as CBCT data. Additional motion data may be synthesized based on the location of the hinge axis. For example, if the location of the hinge axis is inferred based on motion data corresponding to hinging open/closed, motion data for other bite movements (e.g., excursive or protrusive movements) may be synthesized based on that hinge axis.
[0113] FIG. 3 illustrates a block diagram of an example patient assembly 204. In this example, the patient assembly includes a clutch 220 and a reference structure 222. Here, the clutch 220 and the reference structure 222 are not physically connected and can move independently of one another.
[0114] The clutch 220 is a device that is configured to couple to a patient’s dentition. For example, the clutch 220 may grip any remaining teeth of the dentition of the patient. In some implementations, the clutch 220 may couple to an edentulous region of a patient’s dentition or to dental implants that have been placed in edentulous regions of the patient’s dentition.
[0115] In some implementations, the clutch 220 comprises a dentition coupling device 224 and a position indicator system 228. In some implementations, the clutch 220 is configured to couple to the lower dentition of the patient so as to move with the patient’s mandible. In other implementations, the clutch 220 may be configured to couple to the patient’s upper dentition so as to move with the patient’s maxilla.
[0116] The dentition coupling device 224 is configured to removably couple to the patient’s dentition. In some implementations, the dentition coupling device 224 rigidly couples to the patient’s dentition such that while coupled, the movement of the dentition coupling device 224 relative to the patient’s dentition is minimized. Various implementations include various coupling mechanisms.
[0117] For example, some implementations couple to the patient’s dentition using brackets that are adhered to the patient’s teeth with a dental or orthodontic adhesive. As another example, some implementations couple to the patient’s dentition using an impression material. For example, some implementations of the dentition coupling device 224 comprise an impression tray and an impression material such as polyvinyl siloxane. To couple the dentition coupling device 224 to the patient’s dentition, the impression tray is filled with impression material and then placed over the patient’s dentition. As the impression material hardens, the dentition coupling device 224 couples to the patient’s dentition.
[0118] Alternatively, some implementations comprise a dentition coupling device 224 that is custom designed for a patient based on a three-dimensional model of the patient’s dentition. For example, the dentition coupling device 224 may be formed using a rapid fabrication machine. One example of a rapid fabrication machine is a three-dimensional printer, such as the PROJET® line of printers from 3D Systems, Inc. of Rock Hill, South Carolina. Another example of a rapid fabrication machine is a milling device, such as a computer numerically controlled (CNC) milling device. In these implementations, the dentition coupling device 224 may comprise various mechanical retention devices such as clasps that are configured to fit in an undercut region of the patient’s dentition or wrap around any remaining teeth.
[0119] Implementations of the dentition coupling device 224 may couple to the patient’s dentition using a combination of one or more mechanical retention structures, adhesives, and impression materials. For example, the dentition coupling device 224 may include apertures through which a fastening device (also referred to as a fastener) such as a temporary anchorage device may be threaded to secure the dentition coupling device 224 to the patient’s dentition, gum tissue, or underlying bone tissue. For example, the temporary anchorage devices may screw into the patient’s bone tissue to secure the dentition coupling device 224.
[0120] In some implementations, the dentition coupling device 224 includes one or more fiducial markers, such as hemispherical inserts, that can be used to establish a static relationship between the position of the clutch 220 and the patient’s dentition. For example, the dentition coupling device 224 may include three fiducial markers disposed along its surface. The location of these fiducial markers can then be determined relative to the patient’s dentition such as by capturing a physical impression of the patient with the clutch attached or using imaging techniques such as capturing a digital impression (e.g., with an intraoral scanner) or other types of images of the dentition and fiducial markers. Some implementations of the dentition coupling device 224 do not include fiducial markers. One or more images or a digital impression of the patient’s dentition captured while the dentition coupling device 224 is mounted may be aligned to one or more images or a digital impression of the patient’s dentition captured while the dentition coupling device 224 is not mounted.
[0121] The position indicator system 228 is a system that is configured to be used to determine the position and orientation of the clutch 220. In some implementations, the position indicator system 228 includes multiple fiducial markers. In some examples, the fiducial markers are spheres. Spheres work well as fiducial markers because the location of the center of the sphere can be determined in an image regardless of the angle from which the image containing the sphere was captured. The multiple fiducial markers may be disposed (e.g., non-collinearly) so that by determining the locations of each (or at least three) of the fiducial markers, the position and orientation of the clutch 220 can be determined. For example, these fiducial markers may be used to determine the position of the position indicator system 228 relative to the dentition coupling device 224, through which the position of the position indicator system 228 relative to the patient’s dentition can be determined.
[0122] Some implementations of the position indicator system 228 do not include separate fiducial markers. In at least some of these implementations, structural aspects of the position indicator system 228 may be used to determine the position and orientation of the position indicator system 228. For example, one or more flat surfaces, edges, or corners of the position indicator system 228 may be imaged to determine the position and orientation of the position indicator system 228. In some implementations, an intraoral scanner is used to capture a three-dimensional model (or image) that includes a corner of the position indicator system 228 and at least part of the patient’s dentition while the dentition coupling device 224 is mounted. This three-dimensional model can then be used to determine a relationship between the position indicator system 228 and the patient’s dentition. The determined relationship may be a static relationship that defines the position and orientation of the position indicator system 228 relative to a three-dimensional model of the patient’s dentition (e.g., based on the corner of the position indicator system 228 that was captured by the intraoral scanner).
[0123] In some implementations, the position indicator system 228 includes a light source assembly that emits beams of light. The light source assembly may emit substantially collimated light beams (e.g., laser beams). In some implementations, the light source assembly is rigidly coupled to the dentition coupling device 224 so that as the dentition coupling device 224 moves with the patient’s dentition, the beams of light move. The position of the dentition coupling device 224 is then determined by capturing images of where the light beams intersect with various surfaces (e.g., translucent screens disposed around the patient). Implementations that include a light source assembly are illustrated and described throughout.
[0124] The reference structure 222 is a structure that is configured to be worn by the patient so as to provide a point of reference to measure the motion of the clutch 220. In implementations where the clutch 220 is configured to couple to the patient’s lower dentition, the reference structure 222 is configured to mount elsewhere on the patient’s head so that the motion of the clutch 220 (and the patient’s mandible) can be measured relative to the rest of the patient’s head. For example, the reference structure 222 may be worn on the upper dentition. Beneficially, when the reference structure 222 is mounted securely to the patient’s upper dentition, the position of the reference structure 222 will not be impacted by the movement of the mandible (e.g., muscle and skin movement associated with the mandibular motion will not affect the position of the reference structure 222). Alternatively, the reference structure 222 may be configured to be worn elsewhere on the patient’s face or head.
[0125] In some implementations, the reference structure 222 is similar to the clutch 220 but configured to be worn on the dental arch opposite the clutch (e.g., the upper dentition if the clutch 220 is for the lower dentition). For example, the reference structure 222 shown in FIG. 3 includes a dentition coupling device 230 that may be similar to the dentition coupling device 224, and a position indicator system 234 that may be similar to the position indicator system 228.
[0126] In some implementations, the patient assembly 204 includes a gothic arch tracer. For example, the clutch 220 may include one or more tracing components that may move across a surface of the reference structure 222. The tracing components may have adjustable heights.
[0127] FIG. 4 illustrates an implementation of a clutch 400. The clutch 400 is an example of the clutch 220. In this example, the clutch 400 includes a dentition coupling device 402, a light source assembly 404, and an extension member 408. The dentition coupling device 402 is an example of the dentition coupling device 224, and the light source assembly 404 is an example of the position indicator system 228.
[0128] The light source assembly 404, which may also be referred to as a projector, is a device that emits light beams comprising light that is substantially collimated. Collimated light travels in one direction. A laser beam is an example of collimated light. In some implementations, the light source assembly 404 includes one or more lasers. Although alternatives are possible, the one or more lasers may be semiconductor lasers such as laser diodes or solid-state lasers such as diode-pumped solid-state lasers.
[0129] In some implementations, the light source assembly 404 comprises a first, second, and third light emitter. The first and second light emitters may emit substantially collimated light in parallel but opposite directions (i.e., the first and second light emitters may emit light in antiparallel directions) such as to the left and right of the patient when the clutch 400 is coupled to the patient’s dentition. In some implementations, the first and second light emitters are collinear or are substantially collinear (e.g., offset by a small amount such as less than 5 micrometers, less than 10 micrometers, less than 25 micrometers, less than 50 micrometers, or less than 100 micrometers). The third light emitter may emit substantially collimated light in a direction of a line that intersects with or substantially intersects with lines corresponding to the direction of the first and second light emitters. Lines that intersect share a common point. Lines that substantially intersect do not necessarily share a common point, but would intersect if offset by a small amount such as less than 5 micrometers, less than 10 micrometers, less than 25 micrometers, less than 50 micrometers, or less than 100 micrometers. In some implementations, the third light emitter emits light in a direction that is perpendicular to the first and second light emitters, such as toward the direction the patient is facing.
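Whether two beam lines "substantially intersect" in the sense above can be checked numerically as the minimal distance between the two lines; the following sketch (illustrative only, not part of the claims; a tolerance in micrometers is assumed) returns that distance:

```python
def line_line_distance(p1, d1, p2, d2):
    """Minimal distance between two 3D lines, each given as a point and a direction."""
    def sub(a, b): return tuple(ai - bi for ai, bi in zip(a, b))
    def dot(a, b): return sum(ai * bi for ai, bi in zip(a, b))
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    n = cross(d1, d2)
    n_len = dot(n, n) ** 0.5
    w = sub(p2, p1)
    if n_len < 1e-12:
        # Parallel lines: perpendicular distance from p2 to the first line.
        t = dot(w, d1) / dot(d1, d1)
        perp = sub(w, tuple(t * di for di in d1))
        return dot(perp, perp) ** 0.5
    # Skew (or intersecting) lines: project the offset onto the common normal.
    return abs(dot(w, n)) / n_len
```

Two lines then "substantially intersect" when this distance is below the chosen tolerance (e.g., 100 micrometers).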
[0130] In some implementations, the third light emitter emits light in a direction that is offset from the direction of the first light emitter so as to be directed toward the same side of the patient as the direction of the first light emitter. For example, the third light emitter may be offset from the first light emitter by an offset angle that is an acute angle. The third light emitter may be offset from the first light emitter by an offset angle that is less than 90 degrees such that the light emitted by both the first light emitter and the second light emitter intersect with the same screen (e.g., a planar screen having a rectangular shape and being disposed on a side of the patient). The third light emitter may be offset from the first light emitter by an offset angle of between approximately 1 degree to 45 degrees. In some implementations, the offset angle is between 3 degrees and 30 degrees. In some implementations, the offset angle is between 5 degrees and 15 degrees. For example, the offset angle may be less than 10 degrees.
[0131] In some implementations, one or more compensation factors are determined to compensate for an offset from the first and second light emitters being collinear, or an offset from the third light emitter emitting light in a direction of a line that intersects with the directions of the first and second light sources. A compensation factor may also be determined for the offset angle of the third light emitter with respect to the first and second light emitters. For example, an offset angle compensation factor may specify the angle between the direction of the third light emitter and a line defined by the first light emitter. In implementations in which the orientation of the third light emitter is directed perpendicular to or substantially perpendicular to the direction of the first light emitter, the offset angle compensation factor may be 90 degrees or approximately 90 degrees. In implementations in which the orientation of the third light emitter is directed toward a side of the patient, the offset angle compensation factor may, for example, be between approximately 5 and 45 degrees. The compensation factors may be determined specifically for each position indicator system manufactured to account for minor variation in manufacturing and assembly. The compensation factors may be stored in a datastore (such as on the motion determining device 206 or on a computer readable medium accessible by the motion determining device 206). Each position indicator system may be associated with a unique identifier that can be used to retrieve the associated compensation factor. The position indicator system 234 may include a label with the unique identifier or a barcode, QR code, etc. that specifies the unique identifier.
[0132] Some implementations of the light source assembly 404 include a single light source and use one or more beam splitters such as prisms or reflectors such as mirrors to cause that light source to function as multiple light emitters by splitting the light emitted by that light source into multiple beams. In at least some implementations, the emitted light emanates from a common point. As another example, some implementations of the light source assembly 404 may comprise apertures or tubes through which light from a common source is directed. Some implementations may include separate light sources for each of the light emitters.
[0133] In the example of FIG. 3, the light source assembly 404 includes light emitters 406a, 406b, and 406c (referred to collectively as light emitters 406) and a housing 410. The light emitter 406a is emitting a light beam L1, the light emitter 406b is emitting a light beam L2, and the light emitter 406c is emitting a light beam L3. The light beams L1 and L2 are parallel to each other, but directed in opposite directions. The light beam L3 is perpendicular to the light beams L1 and L2. In at least some implementations, the housing 410 is configured to position the light emitters 406 so that the light beams L1, L2, and L3 are approximately coplanar with the occlusal plane of the patient’s dentition. Although the light beam L3 is perpendicular to the light beams L1 and L2 in the example of FIG. 3, alternatives are possible.
[0134] The housing 410 may be approximately cube shaped and includes apertures through which the light emitters 406 extend. In other implementations, the light emitters do not extend through apertures in the housing 410 and instead light emitted by the light emitters 406 passes through apertures in the housing 410.
[0135] In the example of FIG. 4, the dentition coupling device 402 is rigidly coupled to the light source assembly 404 by an extension member 408. The extension member 408 extends from the dentition coupling device 402 and is configured to extend out of the patient’s mouth when the dentition coupling device 402 is worn on the patient’s dentition. In some implementations, the extension member 408 is configured so as to be permanently attached to the light source assembly 404 such as by being formed integrally with the housing 410 or joined by welding or a permanent adhesive. In other implementations, the extension member 408 is configured to removably attach to the light source assembly 404. Because the light source assembly 404 is rigidly coupled to the dentition coupling device 402, the position and orientation of the dentition coupling device 402 can be determined from the position and orientation of the light source assembly 404.
[0136] In some implementations, the housing 410 and the dentition coupling device 402 are integral (e.g., are formed from a single material or are coupled together in a manner that is not configured to be separated by a user). In some implementations, the housing 410 includes a coupling structure configured to removably couple to the extension member 408 of the dentition coupling device 402. In this manner, the dentition coupling device 402 can be a disposable component that may be custom fabricated for each patient, while the light source assembly 404 may be reused with multiple dentition coupling devices. In some implementations, the housing 410 includes a connector that is configured to mate with a connector on the dentition coupling device 402.
[0137] Additionally or alternatively, the housing 410 may couple to the dentition coupling device 402 with a magnetic clasp. Some implementations include a registration structure that is configured to cause the housing 410 to join with the dentition coupling device 402 in a repeatable arrangement and orientation. In some implementations, the registration structure comprises a plurality of pins and corresponding receivers. In an example, the registration structure includes a plurality of pins disposed on the housing 410 and corresponding receivers (e.g., holes) in the dentition coupling device 402 (or vice versa). In some implementations, the registration structure comprises a plurality of spherical attachments and a plurality of grooves. In one example, the registration structure includes three or more spherical attachments disposed on the housing 410 and two or more v-shaped grooves disposed on the dentition coupling device 402 that are disposed such that the spherical attachments will only fit into the grooves when the housing 410 is in a specific orientation and position relative to the dentition coupling device 402. In some implementations, the registration structure includes a spring-mounted pin or screw that serves as a detent to impede movement of the housing 410 with respect to the dentition coupling device 402.
[0138] FIGS. 5A-B are cross-sectional side views that illustrate the attachment of an implementation of a dentition coupling device 520 to a dental implant 522. The dentition coupling device 520 is an example of the dentition coupling device 224 or the dentition coupling device 230. The dentition coupling device 520 may include one or more fixed arms and one or more pivotable arms that can be positioned to align with the patient’s dentition.
[0139] FIG. 5A is a cross-sectional side view that illustrates an implant abutment 526 attached to a dental implant 522 that is disposed in the patient’s gingival tissue G. The implant abutment 526 is held in place with an implant screw 524. The implant screw 524 has threads that mate with corresponding threads in a receiver of the dental implant 522. A patient receiving dentures may have several dental implants placed to support and secure the denture.
[0140] FIG. 5B is a cross-sectional side view of the dental implant 522 and gingival tissue G with the implant abutment 526 removed and the dentition coupling device 520 attached to the dental implant 522. Here, the implant screw 524 passes through a slot 592 of an arm 590 of the dentition coupling device 520, through an offset 528, and into the dental implant 522. As shown in this figure, at least a portion of the threads of the implant screw 524 are interlocked with the threads of the receiver of the dental implant 522. The offset 528 may be a cylindrical structure that includes an aperture through which a portion of the implant screw 524 may pass. For example, an aperture in the offset 528 may allow passage of the threaded portion of the implant screw 524 but not the head of the implant screw 524. The offset 528 may be sized in the occlusal dimension (O) so as to offset the arm 590 from the gingival tissue G.
[0141] Some implementations use a washer to couple the implant screw 524 to the arm 590 (e.g., when an aperture in the arm 590 is larger than the head of the screw). The washer may be formed from a flexible material such as rubber. In some implementations, the arm 590 may be secured to the threads of the receiver of the dental implant 522 with a scanning abutment. A scanning abutment may include a threaded region that is sized to fit into and mate with the threads of the receiver of the dental implant 522. The scanning abutment may also include a fiducial structure that can be used to determine a location and orientation of the implant 522 when the scanning abutment is attached. For example, the scanning abutment may be imaged with a component of the image capture system (e.g., an intraoral scanner or a 2D or 3D camera) to determine the location of the associated dental implant.
[0142] FIG. 6 illustrates an implementation of a motion capture system 600 for capturing jaw movement in which only two screens are used. The motion capture system 600 is an example of the motion capture system 200. The motion capture system 600 includes an imaging system 602 and a patient assembly 604. In this example, the imaging system 602 includes a housing 610. The imaging system also includes a screen 638a and a screen 638b (collectively referred to as screens 638), which are positioned so as to be on opposite sides of the patient’s face (e.g., screen 638b to the left of the patient’s face and screen 638a to the right of the patient’s face). In some implementations, a screen framework is disposed within the housing 610 to position the screens 638 with respect to each other and the housing 610.
[0143] As can be seen in FIG. 6, this implementation does not include a screen disposed in front of the patient’s face. Beneficially, by not having a screen in front of a patient’s face, the motion capture system 600 may allow better access to the patient’s face by a caregiver. Also shown is patient assembly 604 of the motion capture system 600.
[0144] In at least some implementations, the patient assembly 604 includes a clutch 620 and a reference structure 622, each of which includes a light source assembly having three light emitters. The clutch 620 is an example of the clutch 220 and the reference structure 622 is an example of the reference structure 222. In FIG. 6, the clutch 620 is attached to the patient’s mandible (i.e., lower dentition) and is emitting light beams L1, L2, and L3. Light beams L1 and L3 are directed toward the screen 638a, intersecting at intersection points I1 and I3, respectively. Light beam L2 is directed toward the screen 638b. Although alternatives are possible, in this example, the light beams L1 and L3 are offset from each other by approximately 15 degrees. The light beams L1 and L2 are collinear and directed in opposite directions (i.e., L2 is offset from L1 by 180 degrees).

[0145] The reference structure 622 is attached to the patient’s maxilla (i.e., upper dentition) and is emitting light beams L4, L5, and L6. Light beams L4 and L6 are directed toward the screen 638b. Light beam L5 is directed toward the screen 638a, intersecting at intersection point I5. Although alternatives are possible, in this example, the light beams L4 and L6 are offset from each other by approximately 15 degrees. The light beams L4 and L5 are collinear and directed in opposite directions (i.e., L4 is offset from L5 by 180 degrees).
[0146] As the patient’s dentition moves around, the clutch 620 and the reference structure 622 will move in concert with the patient’s dentition, causing the light beams to move and the intersection points to change. An optical sensing assembly of the motion capture system 600 (e.g., cameras embedded within the housing 610 of the motion capture system 600 behind the screens 638a and 638b) may capture images of the screens 638 so that the intersection points can be determined.
[0147] The location of a first axis associated with the clutch 620 may be identified based on the intersection points from the light beams L1 and L2. An intersection coordinate between the light beams L1 and L3 may then be determined based on the distance between the intersection points I1 and I3 on the screen 638a. For example, the distance from the intersection point I1 along the first axis can be determined based on the distance between the points I1 and I3 and the angle between the light beams L1 and L3. As described in more detail elsewhere herein, this angle is determined for the clutch 620 and may be stored in a data store, for example, on a non-transitory computer-readable storage medium. Using this distance, the intersection coordinate can be found, which will have a known relationship to the clutch 620 and therefore the patient’s dentition. As has been described earlier, a coordinate system for the clutch 620 can also be determined based on the intersection points (e.g., a second axis is defined by the cross product of the first axis and a vector between the intersection points I1 and I3, and a third axis is defined by the cross product of the first axis and the second axis). In a similar manner, the position and orientation of the reference structure 622 can be determined based on the intersection points of the light beams L4, L5, and L6 with the screens 638a and 638b.
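The cross-product construction described above can be sketched as follows. The function name, the choice of inputs (3D intersection coordinates expressed in a common screen coordinate system), and the use of NumPy are illustrative assumptions:

```python
import numpy as np

def clutch_frame(i1, i2, i3):
    """Sketch: build an orthogonal coordinate frame for a clutch from three
    screen intersection points.

    i1, i2: intersections of the collinear beams L1/L2 on opposite screens,
            which together define the first axis.
    i3:     intersection of the offset beam L3 (same screen as i1).
    """
    i1, i2, i3 = (np.asarray(p, dtype=float) for p in (i1, i2, i3))
    axis1 = i2 - i1
    axis1 /= np.linalg.norm(axis1)
    # Second axis: cross product of the first axis and the I1->I3 vector.
    axis2 = np.cross(axis1, i3 - i1)
    axis2 /= np.linalg.norm(axis2)
    # Third axis: cross product of the first and second axes.
    axis3 = np.cross(axis1, axis2)
    return axis1, axis2, axis3
```

The three returned vectors are mutually orthogonal unit vectors, so they define a rigid coordinate system that moves with the clutch.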
[0148] In some implementations, three-dimensional coordinate systems for the clutch and the reference structure are determined using only two screens. In some implementations, the motion capture system includes only two screens and the motion capture system does not include a third screen. In some implementations, the imaging system captures images of only two screens. Some implementations identify intersection points using images captured of only two screens. For example, two intersection points from light beams emitted by a reference structure are identified on an image of the same screen.
[0149] In some implementations, a light emitter being oriented to emit light in a first direction toward the screen means the light emitter is oriented to emit light in a first direction toward the screen when the light emitter is attached to a patient (or other structure) and positioned for motion tracking with respect to the imaging system.
[0150] FIG. 7 illustrates a top view of an implementation of a reference structure 730 and an implementation of an imaging system 732. The reference structure 730 is an example of the reference structure 622. The imaging system 732 is an example of the imaging system 602.
[0151] The reference structure 730 includes a dentition coupling device 734, an extension member 740, and a light source assembly 742. The dentition coupling device 734 is an example of the dentition coupling device 230 and may be similar to the example dentition coupling devices previously described with respect to implementations of the clutch. The light source assembly 742 is an example of the position indicator system 234. In this example, the light source assembly 742 includes light emitters 750a, 750b, and 750c (collectively referred to as light emitters 750).
[0152] The dentition coupling device 734 is configured to removably couple to the dentition of the patient. The dentition coupling device 734 is coupled to the opposite arch of the patient’s dentition as the clutch (e.g., the dentition coupling device 734 of the reference structure 730 couples to the maxillary arch when a clutch is coupled to the mandibular arch). In some implementations, the dentition coupling device 734 is coupled to the extension member 740 that is configured to extend out through the patient’s mouth when the dentition coupling device 734 is coupled to the patient’s dentition. The extension member 740 may be similar to the extension member 408.
[0153] The imaging system 732 includes screens 738a and 738b (referred to collectively as screens 738), and cameras 720a and 720b (referred to collectively as cameras 720). In this example, the screen 738a is oriented parallel to the screen 738b. In some implementations, the imaging system 732 may also include a screen framework (not shown) that positions the screens 738 with respect to each other. For example, the screen framework may extend beneath the reference structure 730 and couple to the bottoms of the screens 738. Together, the screens 738 and the screen framework are an example of the screen assembly 212. The cameras 720 are an example of the optical sensing assembly 210.
[0154] The screens 738 may be formed from a translucent material so that the points where the light beams emitted by the light source assembly 742 intersect with the screens 738 may be viewed from outside of the screens 738. Images that include these points of intersection may be recorded by the cameras 720. The motion determining device 206 may then analyze these captured images to determine the points of intersection of the light beams with the screens 738 to determine the location of the light source assembly 742. The position of the light source assembly of a clutch (not shown) may be determined in a similar manner.
[0155] The cameras 720 are positioned and oriented to capture images of the screens 738. For example, the camera 720a is positioned and oriented to capture images of the screen 738a, and the camera 720b is positioned and oriented to capture images of the screen 738b. In some implementations, the cameras 720 are mounted to the screen framework so that the position and orientation of the cameras 720 are fixed with respect to the screens. For example, each of the cameras 720 may be coupled to the screen framework by a camera mounting assembly. In this manner, the position and orientation of the cameras 720 relative to the screens 738 does not change if the screen framework is moved. In some implementations, the screen framework includes a housing (e.g., as shown at 610 in FIG. 6), within which the cameras 720 are disposed.

[0156] FIG. 8 illustrates a perspective view of the reference structure 730 disposed between the screens 738 of the imaging system 732. The screens 738 are joined together by a screen framework 736 that positions and orients the screens 738 with respect to one another. In this example, the light emitter 750a is emitting a light beam L4, which intersects with the screen 738b at intersection point I4; the light emitter 750b is emitting a light beam L5, which intersects with the screen 738a at intersection point I5; and the light emitter 750c is emitting a light beam L6, which intersects with the screen 738a at intersection point I6. As the position and orientation of the reference structure 730 change relative to the screens 738, the locations of at least some of the intersection points I4, I5, and I6 will change as well.

[0157] The camera 720a captures images of the screen 738a, including the intersection point I5 of the light beam L5 emitted by the light emitter 750b. The camera 720a may capture a video stream of these images. Similarly, although not shown in this illustration, the camera 720b captures images of the screen 738b and the intersection points I4 and I6.
[0158] The captured images from the cameras 720 are then transmitted to the motion determining device 206. The motion determining device 206 may determine the location of the intersection points I4, I5, and I6, and from those points the location of the light source assembly 742. In some implementations, a point of common intersection for the light beams L4, L5, and L6 is determined based on the location of the intersection points I4, I5, and I6 (e.g., the point at which the light beams intersect or would intersect if extended). Based on the determined locations of the light beams, the location and orientation of the reference structure 730 relative to the screens 738 can be determined.
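One way to compute a point of common intersection (or the nearest point, when the beams do not exactly intersect) is a least-squares fit over the beam lines. This is a hypothetical sketch of such a computation, not necessarily the method used by the motion determining device 206; each beam is represented by a point on its line plus a direction:

```python
import numpy as np

def common_intersection(points, directions):
    """Sketch: least-squares point nearest to all beam lines.

    For each line (p, d), the projector P = I - d d^T maps a vector onto
    the plane perpendicular to d; summing P x = P p over all lines and
    solving gives the point minimizing total squared distance to the lines.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, directions):
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)          # unit direction
        P = np.eye(3) - np.outer(d, d)     # projector normal to the beam
        A += P
        b += P @ np.asarray(p, dtype=float)
    return np.linalg.solve(A, b)
```

At least two non-parallel beams are needed for the system to be solvable; with three beams (L4, L5, L6) the estimate is overdetermined and tolerant of small measurement noise.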
[0159] FIG. 9 is a flowchart of an example process 900 for fabricating a denture based on at least motion data. In some implementations, the process 900 is performed by the system 100 described herein, although one or more blocks in the process 900 can also be performed by any other computing system described throughout this disclosure. For illustrative purposes, the process 900 is described from the perspective of a computer system, which may include one or more of the components described in reference to the system 100 in FIGs. 1A-B.
[0160] Referring to the process 900 in FIG. 9, at block 902, digital patient data, including motion data and a digital dental model, is acquired for a particular patient. For example, the digital patient data may include imaging data of the patient’s dentition. The imaging data may be captured using various imaging modalities described herein. In some implementations, the imaging data includes a three-dimensional digital dental model of the patient’s dentition. The three-dimensional digital dental model may be captured using an intraoral scanner. The three-dimensional digital dental model may also be captured by scanning a physical impression, or a mold formed from a physical impression, using a three-dimensional scanner. Sometimes, the digital dental model can be generated by the computer system using at least the motion data that is acquired in block 902.
[0161] The acquired digital patient data may also include motion data of the patient’s jaw. For example, the motion data may be captured using the motion capture system 200. The motion data may represent the patient’s jaw moving through various jaw movements including, for example, excursive movements and protrusive movements. The motion data may also represent the patient’s jaw position and movement as the patient pronounces specific phonetic sounds such as the “F” sound and the “S” sound. In some implementations, audio or video files may be captured as the patient pronounces the specific sounds. The motion data may map to frames or positions in the video or audio data. Based on sound processing (e.g., audio signal processing) of the audio data or image processing of the video data, various positions in the patient’s speech may be identified and the corresponding frame of the motion data may be identified.
[0162] The acquired digital patient data may also include one or more anterior facial images of the patient. The anterior facial images may include two-dimensional images or three-dimensional images. In some implementations, the anterior facial images include an image of the patient smiling and an image of the patient with lips in repose (e.g., relaxed). The anterior facial images may also include videos. For example, the videos may include video of the patient performing various jaw movements such as excursive movements and protrusive movements. The videos may also include video of the patient pronouncing specific phonetic sounds such as sibilants (e.g., the “S” sound) or fricatives (e.g., the “F” sound).
[0163] The acquired digital patient data may also include other types of patient images captured using imaging modalities such as computed tomography (CT), including cone beam computed tomography (CBCT), ultrasound, and magnetic resonance imaging (MRI).
[0164] At block 904, the computer system integrates the digital patient data. For example, the digital patient data may be integrated into a common coordinate system (e.g., positioned relative to the same XYZ axes). Different types of digital patient data may be integrated using different techniques. For example, three-dimensional data sets may be integrated using an iterative alignment process such as an iterative closest point technique. In some implementations, multiple types of the digital patient data include fiducial markers. The positions of the fiducial markers may be determined from the digital patient data and used to align one set of digital patient data with another.
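The fiducial-based alignment mentioned above can be illustrated with the Kabsch algorithm, one standard way to compute the rigid transform between two sets of corresponding marker positions. This sketch assumes the fiducial correspondences between the two data sets are already known; it is an illustration, not necessarily the method used by the disclosed system:

```python
import numpy as np

def rigid_align(src, dst):
    """Sketch: Kabsch algorithm.  Returns rotation R and translation t that
    best map each src fiducial onto its corresponding dst fiducial in a
    least-squares sense (dst[i] ~= R @ src[i] + t)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)   # centroids
    H = (src - cs).T @ (dst - cd)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

Once R and t are known, every point in the source data set (not just the fiducials) can be transformed into the common coordinate system.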
[0165] In some implementations, the digital patient data includes two-dimensional images captured with a camera of the image capture system 107. A polygon may be generated within the common coordinate system. The two-dimensional images may be mapped to the polygon.

[0166] At block 906, a vertical dimension of occlusion and an occlusal plane position and orientation are determined for the patient. The determined vertical dimension of occlusion indicates the desired position of the patient’s mandible and maxilla when the patient’s jaw is at rest. The vertical dimension of occlusion may correspond to a total distance between edentulous ridges to accommodate dentures with a desired amount of occlusal open space when the patient is at rest. The vertical dimension of occlusion influences the function, comfort, and aesthetics of dentures. The determined occlusal plane may correspond to a plane disposed between the patient’s maxilla and mandible that approximately corresponds to where the occlusal surfaces of the patient’s teeth meet. The occlusal plane may, for example, be positioned at a desired location of the incisal edge of the patient’s upper central incisors, which may be determined from photos of the patient or using a gothic arch tracer. The occlusal plane may be oriented based on the motion data. Although often referred to as an occlusal plane in the denture and dental fields, the occlusal plane need not be precisely planar and may vary from a plane to follow the curve of the patient’s lips.
[0167] In some implementations, the vertical dimension of occlusion may be specified by a care provider such as a dentist or physician. The vertical dimension of occlusion may also be determined based, at least in part, on motion data of the digital patient data. For example, the motion data may be analyzed while the patient is pronouncing specific sounds such as sibilants (e.g., the “S” sound) or fricatives (e.g., the “F” sound). A desired vertical dimension of occlusion may be determined from the relative positions of the maxilla and mandible as the sounds are pronounced. The vertical dimension of occlusion may also be determined from a two-dimensional facial image of the digital patient data.
[0168] The occlusal plane may, for example, be determined based on applying a ratio to the vertical dimension of occlusion. In some implementations, the occlusal plane may be determined based on the two-dimensional facial image of the digital patient data. For example, the occlusal plane may be positioned so as to align the incisal edges of the upper central incisors with respect to the patient’s lips.
[0169] In block 908, the digital dental model of the digital patient data is positioned based on the position and orientation of the occlusal plane. For example, a portion of the digital dental model representing the mandibular dental arch may be positioned based on the motion data so as to be positioned at the determined vertical dimension with respect to the maxillary dental arch and so that the denture teeth on the mandibular arch align with the occlusal plane. In some implementations, a frame of the motion data that positions the mandibular dental arch at the determined vertical dimension is identified by the computer system. In some implementations, the mandibular dental arch is rotated about a hinge axis to open to the determined vertical dimension of occlusion. The position of the hinge axis may be inferred based on the motion data.

[0170] In some implementations, the denture design system 116 includes a user interface that displays the digital dental model, the occlusal plane, or both. The user interface may be configured to receive user input to adjust the vertical dimension of occlusion or the position of the occlusal plane. For example, the user interface may be configured to receive a drag (e.g., click-and-drag or touch-and-drag) input to interactively move the mandibular arch of the digital dental model up or down along an arc defined by the motion data or a hinge axis inferred from the motion data. Similarly, the user interface may be configured to interactively move the occlusal plane along the arc between the mandibular arch and maxillary arch of the digital dental model.
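Rotating the mandibular arch about an inferred hinge axis, as described above, amounts to a rigid rotation of the arch's vertices. This sketch applies Rodrigues' rotation formula under the assumption that the hinge point, hinge axis, and opening angle have already been derived from the motion data (all inputs are illustrative):

```python
import numpy as np

def open_about_hinge(points, hinge_point, hinge_axis, angle_rad):
    """Sketch: rotate arch vertices about a hinge axis passing through
    hinge_point, opening the jaw by angle_rad (Rodrigues' formula)."""
    k = np.asarray(hinge_axis, dtype=float)
    k /= np.linalg.norm(k)                       # unit rotation axis
    p0 = np.asarray(hinge_point, dtype=float)
    v = np.asarray(points, dtype=float) - p0     # vectors from the hinge
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rotated = v * c + np.cross(k, v) * s + np.outer(v @ k, k) * (1.0 - c)
    return rotated + p0
```

The same helper could be evaluated at a series of angles to preview the opening motion in a user interface.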
[0171] In block 910, the computer system generates an occlusal guidance surface based on the motion data. The occlusal guidance surface may be used to guide the positioning of denture teeth on one of the dental arches. The occlusal guidance surface may be generated for one or both of the mandibular arch and the maxillary arch. In some implementations, the occlusal guidance surface is generated for a dental arch by sweeping (or moving) at least a portion of the opposing dental arch according to the motion data. For example, a portion of the opposing dental arch may be swept through one or more of excursive and protrusive movements based on the motion data. In some implementations, the portion of the opposing dental arch may be swept through all of the movements represented in the motion data. In some implementations, a midline polyline segment may be swept according to the motion data. The midline polyline segment may be a cross-section of the opposing dentition at the midline (e.g., middle of the dental arch). The cross-section may be generated by slicing or intersecting a vertically oriented plane through the opposing dentition. In some implementations, the midline polyline segment is not directly based on the opposing dentition. For example, the midline polyline segment may be a line segment on the occlusal plane that extends in the anterior-posterior direction at the midline. As the portion of the opposing dentition is swept according to the motion data, the occlusal guidance surface is generated by the computer system. For example, a midline polyline segment may be duplicated in multiple locations according to the motion data (e.g., the midline polyline segment may be duplicated every 25 microns, every 50 microns, every 100 microns, or another distance). The adjacent midline polyline segments may then be joined to form a surface.
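The duplicate-and-join construction described above can be sketched as follows. Representing the sampled motion frames as rigid (R, t) poses is an assumption for illustration; the disclosed system may store motion differently:

```python
import numpy as np

def sweep_polyline(polyline, transforms):
    """Sketch: duplicate a midline polyline at successive motion-data poses
    and join adjacent copies into triangle strips.

    polyline:   (n, 3) array of points along the midline cross-section.
    transforms: list of (R, t) rigid poses sampled from the motion data.
    Returns (vertices, faces) describing the occlusal guidance surface.
    """
    polyline = np.asarray(polyline, dtype=float)
    n = len(polyline)
    verts = []
    for R, t in transforms:
        # One duplicated copy of the polyline per sampled pose.
        verts.append(polyline @ np.asarray(R, float).T + np.asarray(t, float))
    verts = np.vstack(verts)
    faces = []
    for f in range(len(transforms) - 1):
        a, b = f * n, (f + 1) * n
        for i in range(n - 1):
            # Two triangles joining segment i of copy f to copy f + 1.
            faces.append((a + i, b + i, a + i + 1))
            faces.append((a + i + 1, b + i, b + i + 1))
    return verts, faces
```

Denser sampling of the motion (e.g., a duplicate every 25 microns of travel) yields a smoother guidance surface at the cost of more triangles.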
[0172] In some implementations, a polygonal surface may be deformed based on the swept midline polyline segment. For example, the polygonal surface may initially be a flat surface that is positioned at the determined occlusal plane location. As the midline polyline segment is swept through different locations, the polygonal surface may be deformed vertically to the midline polyline segment.
[0173] Next, the computer system can position digital denture teeth based on the occlusal guidance surface in block 912. Sometimes, the digital denture teeth may be loaded from a library of digital denture teeth. Some implementations include multiple libraries of denture teeth. The digital denture teeth libraries may vary functionally, aesthetically, or based on manufacturer. In some implementations, the digital denture teeth may include labels for anatomical landmarks such as cusps, marginal ridges, incisal edges, fossa, grooves, base boundaries, or other anatomical landmarks. These labels may be used to automatically position the digital denture teeth with respect to one another and to digital denture teeth on the opposing dentition.
[0174] In block 914, a digital representation of a denture base is generated by the computer system. Sometimes, a soft-tissue boundary curve can be generated based on the digital dental model. The soft-tissue boundary curve represents the edge of the denture base. The soft-tissue boundary curve may surround the edentulous ridge. The soft-tissue boundary curve may be represented by a spline curve. Some implementations include a user interface through which a user may adjust the spline curve. A soft-tissue interface surface may be generated based on the soft-tissue boundary curve and the digital dental model. For example, a portion of the digital dental model that is enclosed by the soft-tissue boundary curve may be offset (e.g., by 10 microns, 25 microns, 50 microns, 100 microns, or another amount) to form the soft-tissue interface surface. The soft-tissue interface surface may be an intaglio surface (i.e., the surface of the denture that touches the gum tissue). On upper dentures, the intaglio surface may be a posterior palatal seal. The offset may provide space for a dental adhesive that can secure the denture, when fabricated, to the patient’s edentulous ridge. Some implementations are configured to fit to the patient’s edentulous ridge via suction or friction. In these embodiments, the soft tissue interface surface may not be offset from the digital dental model.
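The offset that forms the soft-tissue interface (intaglio) surface can be illustrated as a displacement of the enclosed vertices along their normals. The vertex-normal representation and the default 50-micron (0.05 mm) offset are illustrative assumptions; implementations that retain the denture by suction or friction would use a zero offset:

```python
import numpy as np

def offset_surface(vertices, normals, offset_mm=0.05):
    """Sketch: offset the portion of the digital dental model enclosed by
    the soft-tissue boundary curve along its vertex normals, leaving space
    (e.g., for denture adhesive) between the intaglio surface and tissue."""
    vertices = np.asarray(vertices, dtype=float)
    normals = np.asarray(normals, dtype=float)
    # Normalize so the offset distance is uniform across the surface.
    normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    return vertices + offset_mm * normals
```

Calling it with `offset_mm=0.0` reproduces the no-offset case described for suction- or friction-fit dentures.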
[0175] Tooth boundary curves may also be identified for each of the positioned digital denture teeth. The tooth boundary curves may be identified based, for example, on labels stored with each of the digital denture teeth that identify the portion of the tooth that should be embedded in the denture base. A surface may be formed to join the outer edges of the tooth boundary curves to the soft-tissue interface surface. Sockets may be generated within the boundary curves. The sockets may be shaped to receive the denture teeth.
[0176] In block 916, the computer system determines color and/or texture data for (i) the digital denture teeth and (ii) the denture base based on at least the digital patient data. Refer to FIGs. 10-13 for further discussion about determining the color and/or texture data for (i)-(ii).

[0177] At block 918, dentures for the particular patient can be fabricated. For example, the denture base may be fabricated based on the digital representation of the denture base and the color and/or texture data for the denture base. The denture base may be fabricated using a rapid fabrication technology such as three-dimensional printing or computer numerically controlled (CNC) milling, or any other fabricating machines described in reference to FIGs. 1A-B. The denture base may be fabricated from acrylic or another biocompatible material. The denture base may also be made from a material that has aesthetic properties that substantially match gum tissue. In some implementations, pre-manufactured denture teeth that match the digital denture teeth library may be placed and bonded into the sockets of the denture base.
[0178] The denture teeth may also be manufactured using rapid fabrication technology. For example, the denture teeth may be fabricated using a three-dimensional printer or a CNC mill, as described in reference to FIGs. 1A-B. The denture teeth may be formed from a biocompatible material that has aesthetic properties that are similar to the aesthetic properties of teeth. In some implementations, the digital denture teeth and the denture base can be printed as a single unit by a mixed-material three-dimensional printer. In some implementations, one or both of the denture base and the denture teeth may be cast using a wax casting process using a pattern fabricated by a three-dimensional printer or CNC mill. [0179] In some implementations, interferences between the digital denture teeth can be identified by moving, by the computer system, the dental arches according to the motion data. In implementations that use rapid fabrication technology to fabricate denture teeth, the digital models of the denture teeth may be adjusted, by the computer system, to remove portions of the digital denture teeth models that interfere before the denture teeth are fabricated. In implementations that place pre-manufactured denture teeth from a library into the denture base, a CNC mill may be used to remove interfering regions of the pre-manufactured denture teeth after they are placed in the denture base.
[0180] FIGs. 10A-B depict a flowchart of a process 1000 for determining teeth and gingiva color data for a digital dental model of a patient. Output from the process 1000 can be used to model, design, and manufacture dentures for the patient. For example, the teeth color data and the gingiva color data can be used to fabricate realistic dentures that closely match or otherwise resemble the color of the patient’s actual teeth and/or gingiva. Although the process 1000 is described from the perspective of designing and manufacturing dentures, the disclosed technology may also be used to design and manufacture crowns, bridges, and other types of dental appliances. Moreover, although the process 1000 is generally described from the perspective of determining teeth color data and gingiva color data, the process 1000 can also be performed to determine color data more particularly for one or more of the following: anterior teeth versus posterior teeth, front or external-facing side of teeth versus back or internal-facing side of teeth, and front or external-facing side of gingiva versus back or internal-facing side of gingiva.
[0181] The process 1000 can be performed by the computer system 152 or any other components described in reference to FIGs. 1A-B, including but not limited to the denture design system 116 and/or the rapid fabrication machine 119. The process 1000 can also be performed by one or more other computing systems, devices, computers, networks, cloud-based systems, and/or cloud-based services. For illustrative purposes, the process 1000 is described from the perspective of a computer system.
[0182] Referring to the process 1000 in both FIGs. 10A-B, the computer system can receive patient oral scan data in block 1002. The oral scan data can include any of the image and/or motion data described throughout this disclosure (refer to FIGs. 1A-B). As an illustrative example, the oral scan data can include a three-dimensional image or a two-dimensional image of the patient’s mouth. The oral scan data can also include color information or other color data/values corresponding to different portions or regions of an oral scan of the patient’s mouth. Image or other processing techniques can be performed, as described below, to extract the color information from the data and use the extracted color information to determine color values for designing dentures for the patient.
[0183] In block 1004, the computer system can perform a color calibration process on the received data. For example, the computer system can adjust or otherwise correct white balance in image data of the patient’s mouth. The white balance can be adjusted using a greyscale card, which may typically be performed as part of a color correction process. One or more other color calibration techniques can be performed.
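As one possible illustration of the white-balance step, per-channel gains can be derived from a sampled greyscale-card patch so that the card renders neutral. This is a minimal sketch; the sampled values and the simple per-channel gain model are assumptions, not details from the disclosure:

```python
def white_balance(pixels, grey_patch):
    """Scale each channel so the sampled greyscale-card patch becomes neutral.
    `grey_patch` is the average (R, G, B) measured on the card in the image."""
    target = sum(grey_patch) / 3.0        # a neutral grey has equal channels
    gains = [target / c for c in grey_patch]
    return [
        tuple(min(255.0, v * g) for v, g in zip(px, gains))
        for px in pixels
    ]

# a warm color cast: the grey card reads (200, 180, 160) instead of neutral
corrected = white_balance([(200, 180, 160)], (200, 180, 160))
print(corrected)  # the card pixel itself becomes neutral: [(180.0, 180.0, 180.0)]
```

The same gains would then be applied to every pixel in the oral scan image before zone identification.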
[0184] The computer system can identify at least one zone in the calibrated data for at least one of a tooth, a set of teeth, an upper gingiva, and a lower gingiva (block 1006). In other words, the computer system can identify zones associated with a particular tooth, zones associated with the set of teeth, zones associated with the upper gingiva, and/or zones associated with the lower gingiva. Each of the abovementioned parts of the patient’s mouth can be associated with multiple zones. The zone associations can be predetermined and stored in a data store as mapping data. The computer system can then retrieve this mapping data, process the calibrated data to identify the particular tooth, set of teeth, upper gingiva, and/or lower gingiva, then map the predetermined zones to the identified part(s) of the patient’s mouth using the mapping data. Sometimes, the computer system can apply a model (e.g., a machine learning-trained model) that is configured to identify the abovementioned parts of the patient’s mouth and identify or otherwise map the predetermined zones to those identified parts of the patient’s mouth. The computer system can additionally or alternatively use object detection techniques to identify the abovementioned parts of the patient’s mouth, optionally generate a bounding box around each of the identified parts, and then map the corresponding zones onto the identified part within its bounding box.
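The retrieval of predetermined zone mappings can be illustrated with a simple lookup. The part names and zone labels below are hypothetical placeholders standing in for the stored mapping data:

```python
# Hypothetical predetermined zone mapping, stored as a simple lookup;
# the part names and zone labels are illustrative, not from the disclosure.
ZONE_MAP = {
    "tooth": ["gingival", "body", "incisal"],  # e.g., 3 zones per tooth
    "upper_gingiva": ["socket_halo", "root_eminence", "between_annotations"],
    "lower_gingiva": ["socket_halo", "root_eminence", "between_annotations"],
}

def zones_for(part_name):
    """Retrieve the predetermined zones mapped to an identified mouth part."""
    return ZONE_MAP.get(part_name, [])

print(zones_for("tooth"))  # ['gingival', 'body', 'incisal']
```

In a full system the lookup key would come from the object-detection or model-based identification step, and each returned zone label would then be registered to the geometry inside the part's bounding box.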
[0185] A tooth, set of teeth, upper gingiva, and lower gingiva (e.g., parts of the patient’s mouth, as mentioned above) can each have a different quantity of zones. The quantity of zones can vary based on colors, textures, and other quality characteristics that may be needed or preferred to make the particular part of the patient’s mouth appear realistic. As an illustrative example, a single tooth may have fewer zones than a set of teeth because additional color effects may be required across a greater surface area of visible teeth (e.g., regarding the set of teeth when the patient smiles) versus a single tooth (e.g., a molar at the back of the patient’s mouth). The zones can be predefined as portions of the particular part of the patient’s mouth having nonphysical boundaries. Sometimes, one or more of the zones may overlap. Sometimes, the zones may not overlap. Regardless of whether the zones overlap, color added to each zone can be blended across zones to appear more realistic when manufactured as dentures. As described further in reference to FIGs. 12-13, each tooth can be assigned 3 zones. A 3-zone gradient of color can be applied to the tooth, where each color is assigned to each zone of the tooth to make the tooth appear realistic. Similarly, as described further in reference to FIGs. 11A-B and 13A-C, the gingiva can have 3 zones, where a color is assigned to each zone and a gradient or other blending effect is applied to make the colors in the gingiva appear more realistic.
[0186] The computer system can identify layers for each zone in block 1008. Similarly as described above in reference to block 1006, the computer system can apply mapping data, one or more rules, object detection techniques, and/or machine learning-trained models to the calibrated data to then identify layers per zone. Sometimes, the layers can be identified as part of identifying the zones in block 1006. In some implementations, the layers can be the same as the zones that are identified in block 1006. Refer to FIGs. 15-18 for further discussion about identifying layers for each zone and/or for each tooth more generally.
[0187] As an illustrative example, multiple layers can be identified in each zone for a tooth (or across multiple zones for the tooth). When looking at the tooth, layers of color having varying characteristics (e.g., brightness, opacity, translucency) can make the tooth appear more realistic — for example, at some angles, light may go through the tooth, at other angles, light may reflect back off the tooth, and at yet other angles, some light may be absorbed or otherwise stuck in the tooth. Layering of colors and their respective characteristics on the tooth can therefore compensate for effects of light transmission and light reflection on the tooth. Layering of colors and their respective characteristics can also make the tooth appear to have more depth, thereby making the tooth more realistic. [0188] In block 1010, the computer system can identify a statistical color value for each layer in each zone. The computer system can sample a portion of the calibrated oral scan data that already has color values associated with it. For example, the oral scan data can include a three-dimensional or two-dimensional image of the patient’s mouth. The computer system can use object detection techniques and/or image processing techniques to identify a portion of the image of the patient’s mouth that corresponds to a particular part of the patient’s mouth having a particular zone, such as an enamel layer of a tooth. The computer system can then identify and extract color values in the identified portion of the image and determine a statistical color value for the particular zone based on the extracted color values in the identified portion of the image. [0189] In some implementations, and as described further in reference to FIGs. 11-13, the computer system can use horizontal and/or vertical slicing techniques of an entire arch to identify color values across a slice. 
The identified color values across the slice can be averaged to determine a statistical color value for a desired zone of the patient’s teeth associated with that dental arch. Sometimes, the computer system can use horizontal and/or vertical slicing techniques on a particular tooth to identify the statistical color values for the zones of that tooth. [0190] The statistical color value can be an average value of colors in each layer in each zone (or, in some implementations, an average color value in each zone). The statistical color value can be some other distribution of color values in which outlier color values that are identified in a particular layer and/or in a particular zone may be removed or otherwise ignored. The statistical color value can also be a median color value for each layer in each zone. In some implementations, the statistical color value can correspond to more than one layer in a zone for a particular part of the patient’s mouth, such as a particular tooth, set of teeth, upper gingiva, lower gingiva, exterior-facing side of teeth, interior-facing side of teeth, etc. Additionally or alternatively, the statistical color value can correspond to more than one zone in the calibrated data. As an illustrative example, each canine tooth in the patient’s mouth can have the same zones. When the statistical color values are determined for the zones in one canine tooth, those values can also be assigned to the same zones in the other canine teeth.
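One way to realize a statistical color value with outlier removal, as described above, is a per-channel trimmed mean over sampled pixel values from a slice. This sketch assumes RGB tuples as samples; the exact statistic used by the system may differ:

```python
def statistical_color(samples, trim=1):
    """Average sampled per-channel values after discarding `trim` outliers
    from each end of every channel (a simple trimmed mean; illustrative only)."""
    channels = list(zip(*samples))  # group all R values, all G values, all B values
    result = []
    for ch in channels:
        # drop the lowest and highest `trim` values when enough samples exist
        kept = sorted(ch)[trim:len(ch) - trim] if len(ch) > 2 * trim else list(ch)
        result.append(sum(kept) / len(kept))
    return tuple(result)

# four sampled pixels from one slice; the last has an outlier red value
samples = [(200, 100, 90), (210, 102, 92), (205, 101, 91), (90, 100, 90)]
print(statistical_color(samples))
```

Because the trim happens per channel, a single badly lit pixel (the `90` red value above) does not drag down the zone's assigned color.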
[0191] Often, traditional shade and color identification techniques with intraoral scans result in too many shades being identified for one tooth or small area in the patient’s mouth. When too many shades are identified, it can be challenging to accurately and easily apply the shades to dentures being manufactured, thereby resulting in unrealistic-looking dentures. The techniques described in reference to block 1010 provide a more accurate result in color identification and application by using selective collection methods to home in on a particular color for a tooth and/or gingiva.
[0192] In some implementations, in block 1010, the computer system may identify the statistical color value for one zone or one layer in one zone, and then use that statistical color value to derive color values for other zones and/or other layers in other zones. The derived color values can then be blended with the identified statistical color value such that resulting dentures appear more realistic.
[0193] The computer system assigns each identified statistical color value to a predetermined dental color in block 1012. In other words, each statistical color value can be associated with a common dental color or shade. That association can be saved and used for manufacturing dentures that match the identified statistical color values for the particular patient’s mouth. Accordingly, each zone for each tooth, set of teeth, and/or gingiva can be assigned a specific predetermined, prescribed dental color. The predetermined dental color can include a variety of dental colors that are typically used when generating dentures. The dental colors can include, but are not limited to, A1, A2, A3, A4, B1, B2, B3, B4, C1, C2, C3, C4, D2, D3, and D4. These dental colors indicate what color or colors should be used for printing or manufacturing the dentures. As an illustrative example, if the statistical color value for an enamel layer of a canine tooth is 352, the computer system may assign this statistical color value to the predetermined dental color defined as A2. This assignment can be made based on following one or more rules and/or mapping data that identifies statistical color values (and/or ranges of statistical color values) that correspond to the predetermined dental colors.
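The rule- or range-based assignment can be sketched as a lookup over value ranges. The numeric ranges below are hypothetical; only the 352-to-A2 pairing comes from the example above:

```python
# Hypothetical ranges mapping a scalar statistical color value to a
# prescribed dental shade; real mapping data would be calibrated.
SHADE_RANGES = [
    (0, 300, "A1"),
    (300, 400, "A2"),
    (400, 500, "A3"),
    (500, 600, "B1"),
]

def assign_shade(value):
    """Assign a statistical color value to a predetermined dental color."""
    for lo, hi, shade in SHADE_RANGES:
        if lo <= value < hi:
            return shade
    return None  # value falls outside the mapped ranges

print(assign_shade(352))  # 'A2', matching the illustrative example in the text
```

The returned shade label then travels with the zone through the rest of the pipeline, so the fabrication step can select the corresponding print material or stain.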
[0194] Optionally, the computer system can identify and assign at least one texture for each layer in each zone (block 1014). Typically, teeth and gingiva have known or expected texture(s). These known or expected textures can be stored in a library (e.g., texture library), such as a data store or other type of data repository described herein. Accordingly, when the computer system identifies each zone and/or each layer for the patient’s mouth (refer to blocks 1006-1008), the computer system can retrieve, from the library, at least one texture value or other texture data that is associated with the identified zone and/or identified layer. The computer system can map the texture value to the identified zone and/or identified layer. In some implementations, any of the techniques described herein to identify and assign color values may also apply to identifying and assigning texture values.
[0195] In block 1016, the computer system generates a digital model of the patient’s mouth with at least the predetermined dental color(s) mapped to the corresponding layers in zones in the digital model. Sometimes, the model can be generated earlier in the process, such as after the patient oral scan data is received in block 1002. Then, the blocks 1004-1014 can be performed using the digital model, which can have the color information from the oral scan data mapped thereto. The model can already be generated for the particular patient in some implementations, and the computer system may simply retrieve the model from a data store, and then map the predetermined dental color(s) to corresponding layers in zones in the digital model for the patient. The digital model of the patient’s mouth can be a three-dimensional or two-dimensional representation of the patient’s teeth and gingiva. The digital model can be generated using any of the techniques described in reference to FIGs. 1B-9.
[0196] Optionally, the computer system can adjust the predetermined dental color(s) in one or more layers in one or more zones in the digital model (block 1018). Adjusting the colors in the digital model can make the resulting dentures appear more realistic, with a more lifelike appearance and smoother transitions between colors in the layers and/or zones. For example, the computer system can blend one or more of the colors across one or more layers and/or zones in the digital model. As another example, the computer system can adjust a gradient of one or more of the colors in one or more layers and/or zones and/or across one or more layers and/or zones. As yet another example, the computer system can adjust brightness of one or more of the colors. The computer system may also adjust opacity, transparency, and/or translucency of one or more of the colors. Any of these adjustments can be determined and made based on one or more rulesets.
[0197] In block 1018, the computer system may modify and/or create various shadow effects for any of the layers, zones, teeth, and/or gingiva represented in the digital model. Sometimes, for example, the computer system can assign an additional zone on the fly to one or more teeth represented in the digital model based on how proximate a first tooth is to a second tooth or some other landmark represented in the digital model (where the landmark can be defined according to a library of annotations and/or landmarks). As an illustrative example, the first tooth can be identified as having 3 zones, where each zone is assigned a different color having unique characteristics (e.g., brightness, gradient, opacity). The second tooth immediately adjacent one side of the first tooth and a third tooth immediately adjacent the other side of the first tooth can be identified as positioned within a threshold distance from each other and/or the first tooth, and therefore may be assigned the same colors identified in the 3 zones of the first tooth. In other words, the second and third teeth can also be identified as having 3 zones, each with the same colors and corresponding characteristics as identified for the first tooth. As another example, any other teeth that exceed the threshold distance from the first tooth can be identified as having different zones, different quantities of zones, and/or different colors for each of the zones.
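The threshold-distance behavior described above can be sketched as follows. The one-dimensional tooth positions, threshold value, and zone labels are illustrative assumptions:

```python
def propagate_zone_colors(teeth, source_id, threshold=2.0):
    """Copy the source tooth's zone colors to any tooth positioned within
    `threshold` of the source (units and values hypothetical)."""
    src = teeth[source_id]
    for tid, tooth in teeth.items():
        if tid == source_id:
            continue
        if abs(tooth["x"] - src["x"]) <= threshold:
            # within the threshold distance: inherit the same zone colors
            tooth["zone_colors"] = dict(src["zone_colors"])
    return teeth

teeth = {
    1: {"x": 0.0, "zone_colors": {"gingival": "A3", "body": "A2", "incisal": "A1"}},
    2: {"x": 1.5, "zone_colors": {}},  # immediately adjacent: inherits the colors
    3: {"x": 9.0, "zone_colors": {}},  # exceeds the threshold: keeps its own zones
}
propagate_zone_colors(teeth, 1)
print(teeth[2]["zone_colors"], teeth[3]["zone_colors"])
```

A real implementation would measure three-dimensional proximity between tooth landmarks rather than a scalar position, but the same inherit-if-close rule applies.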
[0198] Sometimes, any of these adjustments described in block 1018 can be made earlier in the process 1000, such as in block 1010 (identifying color values) and/or block 1012 (assigning the color values to predetermined dental colors). Additionally or alternatively, once the predetermined dental colors are mapped to the teeth in the digital dental model in block 1016, the computer system can make additional adjustments to ensure that the teeth appear realistic and/or have color and/or texture values that closely resemble the actual color and/or texture of the patient’s teeth in the oral scan data.
[0199] Optionally, the computer system can simulate ray-tracing of the digital model with the predetermined dental color(s) to identify one or more deviations in color that may exceed threshold color values (block 1020). As part of simulating the ray-tracing, the computer system can optionally adjust at least one of the predetermined dental color(s) in the digital model based on the identified deviation(s) in color (block 1022). Simulating ray-tracing in a CAD environment can be beneficial to automatically identify how light reflects off of or otherwise appears in the patient’s mouth. Such simulating can be used to identify areas in the patient’s mouth that may appear more shaded than others. This type of information can be useful to determine brightness adjustments, for example, to one or more colors in the identified areas so that when similar light illuminates the patient’s dentures, the dentures appear realistic.
Sometimes, the simulated ray-tracing may identify dark triangles or other geometric shapes that represent dark or shaded zones in the patient’s mouth under certain lighting conditions. The computer system can then determine adjustments to characteristics of colors identified in those dark or shaded zones to counteract the shading that would appear when the patient wears the resulting dentures in the same or similar lighting conditions.
[0200] As an illustrative example, the simulated ray-tracing may indicate that the patient’s upper canines appear more shaded when light shines directly on the front of the patient’s mouth. Therefore, the computer system can determine adjustments to enhance (e.g., the brightness of) the color of the upper canines (or at least one of the colors in the layers and/or zones of the canines) so that such shading does not appear when the resulting dentures are worn by the patient in the same or similar lighting conditions. The computer system can iteratively simulate the ray-tracing once such adjustments are determined and made in order to identify whether the adjustments are sufficient to reduce or otherwise eliminate the effects identified from the previously simulated ray-tracing. In other words, the ray-tracing can be simulated and adjustments can be made until deviations in the colors no longer exceed the threshold color values (and/or the deviations in colors are within expected or normal threshold ranges of color deviations).
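The iterate-until-within-threshold loop can be sketched with the renderer abstracted behind a function. Everything here (the brightness scale, step size, threshold, and the toy render function) is a hypothetical stand-in for the simulated ray-tracing:

```python
def balance_shading(zone_brightness, render, threshold=5.0, step=2.0, max_iters=50):
    """Iteratively brighten zones until the rendered deviation from the target
    no longer exceeds the threshold. `render` stands in for the ray-tracing
    simulation and returns per-zone deviations (positive = renders too dark)."""
    for _ in range(max_iters):
        deviations = render(zone_brightness)
        if max(deviations.values()) <= threshold:
            break  # all zones are within the acceptable deviation
        for zone, dev in deviations.items():
            if dev > threshold:
                zone_brightness[zone] += step  # compensate the shaded zone
    return zone_brightness

# toy "ray tracer": every zone renders 10 units darker than its set brightness
target = 100.0
fake_render = lambda b: {zone: target - (v - 10.0) for zone, v in b.items()}
result = balance_shading({"upper_canine": 100.0}, fake_render)
print(result)
```

With these toy numbers the loop brightens the canine in steps of 2 until the rendered deviation drops to 4, which is within the threshold of 5.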
[0201] Although the ray-tracing is described in reference to enhancing the colors in the digital model, such as adjusting brightness, results from the simulated ray-tracing may also be used to add one or more colors to layers and/or zones, especially based on perspective of viewing the teeth. The results from the ray-tracing may also be used to remove one or more colors from layers and/or zones. Optionally, the results may be used to change an intensity of one or more colors in one or more layers and/or zones.
[0202] In block 1024, the computer system returns output including the digital model and at least color data corresponding to the mapped predetermined dental color(s). The output can indicate what colors are mapped to what layers and/or zones for which teeth and/or gingiva. The output can also indicate characteristics of each color, including but not limited to brightness, opacity, gradient, intensity, translucency, blend with other colors, etc. In some implementations, the output may also include texture data that was determined and applied in the optional block 1014.
[0203] The output can be a single file containing the digital model in addition to data about each zone, each layer per zone, associated color data, and/or associated texture data. Sometimes, the output may also include instructions for manufacturing/fabricating dentures based on the digital model. The file can be an STL file, as described herein. In some implementations, OBJ, PLY, 3MF, and/or other similar file types can be used, which allow for color and other complex data to be stored, retrieved, and accessed. For example, file types such as OBJ, PLY, and 3MF can include not only color data but also library data such as annotations, splines, color gradients, and landmark data. All of this data can be stored in a singular file and efficiently transmitted and processed by computing systems described herein to accurately and efficiently manufacture or print dentures. In the file, the color data can be assigned to particular triangles, vertices, and other geometric shapes in the digital model, which is analogous to how computer graphics are rendered. In some implementations, the output can include multiple files (e.g., a first file containing the digital model, a second file containing the color data, a third file containing the texture data).
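To illustrate how per-vertex color can travel with geometry in a single file, here is a minimal ASCII PLY writer. It is a sketch only (faces are omitted, and the vertex and color values are invented); it does not reproduce the disclosure's file-output logic:

```python
def write_colored_ply(path, vertices, colors):
    """Write a minimal ASCII PLY carrying per-vertex RGB color alongside
    the geometry, one way a single file can hold both kinds of data."""
    lines = [
        "ply", "format ascii 1.0",
        f"element vertex {len(vertices)}",
        "property float x", "property float y", "property float z",
        "property uchar red", "property uchar green", "property uchar blue",
        "end_header",
    ]
    for (x, y, z), (r, g, b) in zip(vertices, colors):
        lines.append(f"{x} {y} {z} {r} {g} {b}")
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

# one vertex with an illustrative tooth-like RGB color
write_colored_ply("denture.ply", [(0.0, 0.0, 0.0)], [(236, 222, 209)])
```

STL, by contrast, has no standard per-vertex color channel, which is why richer formats such as PLY, OBJ, or 3MF are attractive when color data must accompany the model.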
[0204] Optionally, the computer system can store the output in a library in block 1026. The output can then be retrieved at a later time for manufacturing and printing dentures for the particular patient. Sometimes, the output can be added to a library for determining color/texture for dentures of other patients.
[0205] In block 1028, the computer system accesses and/or generates a denture model for the patient. The denture model can be generated using any of the techniques described herein. The denture model can represent dentures to be manufactured for the particular patient. The denture model can also indicate, for the dentures, color data and/or texture data (as determined in the process 1000) that is then used for manufacturing the dentures.
[0206] The denture model may be different from the patient’s digital model. For example, the digital model can model all of the patient’s teeth and mouth based on motion, images, and other data collected during an oral scan, as part of the oral scan data. The denture model, on the other hand, can be a three-dimensional representation of dentures to be designed and manufactured for the particular patient and based on the patient’s teeth and mouth represented in the digital model of the patient’s mouth. Refer to FIGs. 1B-9 for further discussion about generating the denture model.
[0207] The computer system applies the mapped predetermined dental color(s) in layers to the denture model for the particular patient in block 1030. As a result, the dentures can be manufactured using the color data, texture data, and any adjustments made thereto that were determined in the above process 1000 to make the teeth appear realistic. Sometimes, block 1030 can be performed as part of block 1028.
[0208] In block 1032, the computer system can optionally fabricate a pair of dentures for the patient based on the denture model. The computer system can generate instructions for manufacturing the dentures. The instructions can be stored, as described herein, then later retrieved when the dentures are to be manufactured. The instructions can also be transmitted to a rapid fabrication machine as described throughout this disclosure, or any other denture-printing or manufacturing device described herein. The instructions, when received at the machine, can cause the machine to automatically print and manufacture the dentures according to the instructions, the output, and any other manufacturing information provided in the instructions. Sometimes, the instructions or the output can be transmitted to a computing device of a relevant user, such as a dentist, and outputted in a graphical user interface (GUI) display at the computing device. The user can then review the output and/or instructions, make one or more adjustments, and/or approve the instructions and transmit the instructions to a machine for manufacturing the dentures (e.g., by selecting a selectable option presented in the GUI to manufacture the dentures). [0209] FIG. 11A is a flowchart of a process 1100 for determining color data (and optionally texture data) for gingiva in a digital dental model of a patient. Although the process 1100 is described generally for determining color data of gingiva, the process 1100 can also be performed more specifically to determine color data for anterior gingiva and color data for posterior gingiva. The anterior gingiva may require more detail in color and/or texture than the posterior gingiva, especially since the anterior gingiva is front-facing and therefore more visible than the posterior gingiva. 
Therefore, a color and/or texture library, as described herein, may include designs, annotations, landmarks, color data, and/or texture data that are specific for anterior gingiva and other designs, annotations, landmarks, color data, and/or texture data that are specific for posterior gingiva. Refer to FIGs. 15-18 for further discussion about determining color data for the gingiva, such as determining the color data for different layers or zones of the gingiva.
[0210] The process 1100 can be performed by the computer system 152 or any other components described in reference to FIGs. 1A-B, including but not limited to the denture design system 116 and/or the rapid fabrication machine 119. The process 1100 can also be performed by one or more other computing systems, devices, computers, networks, cloud-based systems, and/or cloud-based services. For illustrative purposes, the process 1100 is described from the perspective of a computer system.
[0211] Referring to the process 1100 in FIG. 11A, the computer system can receive a digital dental model of a patient’s mouth in block 1102. Refer to FIGs. 1B-10 for further discussion about the digital dental model.
[0212] In block 1104, the computer system can identify annotations and/or landmarks for gingiva in the model. Coloring gingiva can be achieved by relating colors to annotations and landmarks identified in the digital dental model. The annotations and landmarks can be assigned and stored in a library. The annotations and landmarks can be specific to the particular patient’s mouth. Sometimes, the annotations and landmarks can be generic and therefore associated with different patients’ mouths.
[0213] The computer system can map a first color and/or texture to an annotated portion of the gingiva that corresponds to socket halos (block 1106). For example, the computer system may map the first color and/or texture as a top surface color and/or texture (block 1108). In an example of mapping color for anterior gingiva, the first color can be a lightest pink tone. In an example of mapping color for posterior gingiva, the first color can be a light pink tone.
[0214] The computer system can map a second color and/or texture to an annotated portion of the gingiva that corresponds to each root eminence (block 1110). For example, the computer system can map the second color and/or texture as a middle surface color and/or texture (block 1112). This mapping may appear best for central incisors and canines, which may include teeth numbers 6, 8, 9, and 11. In the example of mapping color for anterior gingiva, the second color can be a pink tone that is darker than the first color. In the example of mapping color for posterior gingiva, the second color can be a pink tone that is darker than the first color that extends towards the annotation points identified on the posterior gingiva.
[0215] The computer system can map a third color and/or texture to an annotated portion of the gingiva that corresponds to portions of the gingiva between one or more annotations (block 1114). For example, the computer system may map the third color and/or texture as a bottom surface color and/or texture (block 1116). In the example of mapping color for anterior gingiva, the third color can be darkest pinks, purples, and/or blues that may appear in a stamped texture between the annotation points of teeth numbers that include, but are not limited to, 6, 8, 9, and 11. In the example of mapping color for posterior gingiva, the third color can be a darkest pink that is mapped in between the annotation points identified on the posterior gingiva.
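The three mapped regions above can be sketched as a region-to-color lookup. The region keys mirror the annotations described in blocks 1106-1116; the RGB tones are illustrative placeholders, not prescribed values:

```python
# Hypothetical annotation-to-color mapping for anterior gingiva, following
# the three mapped regions described above (the RGB tones are illustrative).
GINGIVA_COLORS = {
    "socket_halo": (255, 205, 210),          # lightest pink, top surface
    "root_eminence": (235, 160, 170),        # darker pink, middle surface
    "between_annotations": (190, 110, 130),  # darkest pink, bottom surface
}

def color_gingiva(annotated_regions):
    """Map each annotated gingiva region to its prescribed color."""
    return {region: GINGIVA_COLORS[region] for region in annotated_regions}

mapped = color_gingiva(["socket_halo", "root_eminence"])
print(mapped)
```

A separate lookup table, with its own tones and possibly fewer distinct regions, would serve the posterior gingiva, matching the anterior/posterior split described above.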
[0216] The blocks 1106-1116 can be performed in any order. Sometimes, one or more of the blocks 1106-1116 may be performed in parallel. Although the process 1100 is described in reference to mapping 3 colors and/or textures to the digital model for the patient’s gingiva, the process 1100 can also be performed to map any other quantity of colors and/or textures to the digital model. The quantity of colors and/or textures may vary depending on how many zones and/or layers are identified in the digital model of the patient’s mouth for the gingiva. For example, the example process 1100 maps 3 colors, where each color corresponds to a different zone identified for the patient’s gingiva. As another example, 4 zones can be identified for the gingiva in the digital model, and then 4 colors and/or textures can be mapped to the zones using the techniques described in the process 1100. One or more other variations are also possible. [0217] Furthermore, the colors that are mapped in blocks 1106-1116 can obey various fundamental rules of color dominance, opacity, and/or order preference such that the gingiva has a realistic appearance when manufactured as part of dentures. For example, the lightest pink color, or the first color, can always be used as a surface color (e.g., outermost surface) for the anterior gingiva. The darker pink color, or the second color, can always be layered beneath the lightest pink color as a middle layer. The darkest pink color, or the third color, may always be layered beneath the darker pink color. Any additional colors, such as purples or blues (which can be identified as fourth and fifth colors) may always be layered beneath the abovementioned colors unless the additional colors represent veins or vessels in the anterior gingiva.
If the additional colors represent veins or vessels in the anterior gingiva, then such additional colors may be mapped to one or more different layers of the anterior gingiva between one or more of the abovementioned first, second, and third colors.
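The layer-ordering rules of paragraph [0217] can be sketched as a small routine. This is an illustrative sketch only; the function name, color labels, and the exact insertion position for vein/vessel colors are assumptions, not part of the disclosed system.

```python
def order_gingiva_layers(base_colors, extra_colors=()):
    """Return colors ordered from outermost surface to innermost layer.

    base_colors  -- (lightest, darker, darkest) pinks, in that order
    extra_colors -- iterable of (color, is_vein) pairs for any fourth,
                    fifth, etc. colors such as purples or blues
    """
    lightest, darker, darkest = base_colors
    layers = [lightest, darker, darkest]
    for color, is_vein in extra_colors:
        if is_vein:
            # veins/vessels are mapped to a layer between the base colors
            layers.insert(1, color)
        else:
            # all other additional colors are layered beneath everything
            layers.append(color)
    return layers
```

For example, a purple accent color would be appended beneath the darkest pink, while a vein color would be interleaved just below the surface layer.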
[0218] In block 1118, the computer system can return the digital dental model with the mapped colors and/or textures. The computer system can then proceed to any of the blocks 1018-1032 described in the process 1000. Therefore, for example, dentures can be manufactured using the digital dental model and the colors and/or textures that have been mapped for the gingiva.

[0219] FIG. 11B illustrates an example digital dental model 1150 of the patient in FIG. 11A for determining the gingiva color data. The digital dental model 1150 includes anterior gingiva 1152 and posterior gingiva 1154. Annotation points 1156A-N can be identified and applied to both the anterior gingiva 1152 and the posterior gingiva 1154 in the digital dental model 1150. Vertices 1158A-N can also be defined between one or more of the annotation points 1156A-N and used to assign one or more colors to the anterior gingiva 1152 and/or the posterior gingiva 1154 in the model 1150. Using the disclosed techniques, colors of the gingiva 1152 and 1154 may be non-uniform and determined, at least in part, based on location of denture library teeth in the model 1150. For example, denture library tooth 1160 may include various annotation points or markers (such as annotation point 1156C) that are used to determine color of the overlaying anterior gingiva 1152. In some implementations, the anterior gingiva 1152 surrounding or near the tooth 1160 may be assigned one or more colors based on distance from an annotation point, such as the annotation point 1156C. As described in reference to the process 1100 in FIG. 11A, for example, a lightest shade of pink can be assigned to the anterior gingiva 1152 that is closest to and/or surrounding the tooth 1160, with darker shades of pink being assigned to portions of the anterior gingiva 1152 that are farther from the tooth 1160 and/or the annotation point 1156C for the tooth 1160.
[0220] The vertices 1158A-N can be used to define boundaries indicating the distances between certain landmarks, such as the tooth 1160 or the annotation point 1156C, and other landmarks, such as annotation point 1156A (where the annotation point 1156A is used to define a portion of the anterior gingiva 1152 that may be farthest away from one or more teeth). As an illustrative example, a lightest shade of pink can be assigned to the anterior gingiva 1152 between the vertex 1158B and the tooth 1160 (e.g., having a smallest distance from the tooth). A darker shade of pink can be assigned to a portion of the anterior gingiva 1152 that is defined by the vertices 1158A and 1158C, above the vertex 1158B and below the annotation point 1156A (e.g., having a threshold distance from the tooth). A darkest shade of pink can be assigned to a portion of the anterior gingiva 1152 that is farthest away in distance from the annotation point 1156C and/or the tooth 1160, and defined by space to the right of the vertex 1158C and/or to the left of the vertex 1158A (e.g., having a greatest distance or some distance exceeding a threshold distance value from the tooth). Any of the colors assigned to portions of the anterior gingiva 1152 can then be blended or adjusted in some way to provide more realistic characterization of the gingiva 1152 than would be possible with a single color or a uniform gradient.
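The distance-based shade assignment of paragraph [0220] can be sketched as a simple threshold lookup. The `near`/`far` distances and shade names here are hypothetical placeholders; the disclosure defines the zone boundaries with vertices and annotation points rather than fixed millimeter thresholds.

```python
def shade_for_distance(dist, near=2.0, far=6.0):
    """Pick a pink shade for a gingiva point based on its distance
    (e.g., in mm) from a tooth annotation point.

    near/far are illustrative threshold values, not from the patent.
    """
    if dist < near:
        return "lightest-pink"   # gingiva nearest/surrounding the tooth
    if dist < far:
        return "darker-pink"     # middle zone between the vertices
    return "darkest-pink"        # farthest from the tooth annotation
```

In practice the three shades would then be blended across zone boundaries, as described above, rather than applied as hard bands.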
[0221] Although assignment of color is described in reference to the anterior gingiva 1152, the same or similar techniques may also be used to assign color to the posterior gingiva 1154.

[0222] FIG. 11B also depicts a denture model 1162 having teeth 1170 and gingiva 1172. Here, color is assigned to the gingiva 1172 using horizontal slicing techniques. A computer system as described herein can determine 3 horizontal slices: a first slice 1164 (corresponding to a cervical layer), a second slice 1166 (corresponding to a middle layer), and a third slice 1168 (corresponding to a base layer). In some implementations, one or more of the slices 1164-1168 can at least partially overlap such that color assigned to each layer may blend together to provide a realistic representation of the gingiva 1172. Each of the slices 1164-1168 can be assigned a different value, the value being at least one of a frequency, a wavelength, a numerical value, or another shade value, using the disclosed techniques. Averaged values across the slices 1164-1168 can create a unique signature for the gingiva 1172. In some implementations, this unique signature can then be assigned to a commonly perceived color, shade, dental value, or other dental pattern.

[0223] In some implementations, instead of or in addition to performing horizontal slicing techniques as shown with the digital model 1162, the computer system can perform a similar vertical slicing technique. Vertical slicing can be beneficial for determining shade color and applying such color(s) more effectively when a data size is small (e.g., applying the color(s) to just 1 or 2 teeth or a small portion of the gingiva rather than more teeth or a larger section of the gingiva). Vertical slices of color values may be taken at many intervals across a particular portion of the model 1162 (e.g., across a single tooth or across a particular portion of the gingiva).
Then the color values identified at these intervals can be averaged and/or summated to determine a color value that can be consistently applied across the teeth or gingiva (and thus applied to multiple of the vertical slices).
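The slice-and-average signature technique of paragraphs [0222]-[0223] might be sketched as follows. The function names, the use of per-slice means as the signature, and the squared-distance match against a shade table are illustrative assumptions:

```python
def slice_signature(slices):
    """Average per-slice color samples into a per-slice signature.

    slices -- list of lists of numeric color values (e.g., wavelengths
    or shade values) sampled within each horizontal or vertical slice.
    Returns a tuple of per-slice means acting as the 'unique signature'.
    """
    return tuple(sum(vals) / len(vals) for vals in slices)

def nearest_shade(signature, shade_table):
    """Map a signature to the closest commonly perceived shade.

    shade_table -- dict of shade name -> reference signature tuple.
    """
    def sq_dist(name):
        ref = shade_table[name]
        return sum((a - b) ** 2 for a, b in zip(signature, ref))
    return min(shade_table, key=sq_dist)
```

A signature computed from three slices (cervical, middle, base) could then be matched to the nearest entry in a table of commonly assigned dental shades.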
[0224] FIGs. 12A-B show a flowchart of a process 1200 for determining color data for teeth in a digital dental model of a patient. The process 1200 can be performed to map color data to each individual tooth of the patient. The color data can then be stored in association with the particular tooth in a library as described throughout this disclosure. Determining the color data per tooth can beneficially make each tooth appear more realistic and lifelike, especially when this data is used to design and manufacture dentures. In some implementations, the process 1200 may be performed to determine and map color data to more than one tooth (e.g., each canine can be mapped with the same color data, or a threshold quantity of teeth that are next to each other or within some threshold distance from each other can be mapped with the same color data).

[0225] The process 1200 can be performed by the computer system 152 or any other components described in reference to FIGs. 1A-B, including but not limited to the denture design system 116 and/or the rapid fabrication machine 119. The process 1200 can also be performed by one or more other computing systems, devices, computers, networks, cloud-based systems, and/or cloud-based services. For illustrative purposes, the process 1200 is described from the perspective of a computer system.
[0226] Referring to the process 1200 in FIGs. 12A-B, the computer system receives a digital dental model of a patient’s mouth in block 1202. Refer to block 1102 in the process 1100 of FIG. 11A for further discussion.
[0227] In block 1204, the computer system identifies annotations and/or landmarks for teeth in the model. The annotations and/or landmarks may identify or otherwise represent, for each tooth (or more than one tooth), a cervical third, a middle third, and an incisal third. Sometimes, one or more of these zones may overlap such that color values assigned to each zone can be blended at the points of overlap to make the teeth appear realistic. In some implementations, the zones may not overlap, but color values assigned to each zone may still be blended between the zones to make the teeth appear realistic.
[0228] In some implementations, the annotations and/or landmarks can identify other portions or zones of the tooth or more than one tooth, such as a set of teeth along a dental arch. As shown in FIG. 12C, for example, the annotations and/or landmarks can identify horizontal slices for the teeth. As shown in FIG. 12D, as another example, the annotations and/or landmarks can be used to identify vertical slices for the teeth. Refer to block 1104 in the process 1100 in FIG. 11A for further discussion about identifying annotations and/or landmarks in the model.

[0229] In block 1206, the computer system selects at least one tooth.
[0230] In block 1208, the computer system can map a first color value representing a band of chroma to an annotated cervical third of the tooth. For example, the computer system can map the first color value using annotations indicating halo sockets as reference points on the tooth (block 1210). The computer system can additionally or alternatively adjust a dominance level of the first color value to exceed a threshold level of dominance (block 1212). The cervical third of the tooth typically may contain the most chroma compared to other parts of the tooth. Moreover, most of the area of the tooth that contacts the halo sockets will have the band of chroma mapped thereto. The chroma can be represented in the first color value that may include, but is not limited to, A (reddish brown shades), B (reddish yellow shades), C (gray shades), D (reddish gray shades), and bleach.
[0231] The computer system can also map a second color value and optionally a texture value to an annotated middle third of the tooth in block 1214. For example, the computer system can adjust a level of opacity of the mapped second color value to exceed a threshold level of opacity (block 1216). The computer system can also identify the texture value based on data indicating a type of the tooth having the second color value and the texture value mapped thereto (block 1218). Each type of tooth may have a different type of texture. Therefore, the computer system can utilize mapping data or rules to identify, in a library of texture data, which texture values correspond to which teeth. The computer system can then select the texture value(s) that is associated with the type of the selected tooth to map onto the annotated middle third of the tooth. Additionally, the middle third of the tooth typically has the most value or opacity. The second color value may include, but is not limited to, shades of white and/or yellow. Therefore, the level of opacity of the second color value can be increased to a value within a range of 90-100% opacity so that the second color value is most apparent in comparison to other color values that are mapped to the tooth in one or more different zones.
[0232] In block 1220, the computer system can map a third color value to an annotated incisal third of the tooth. The computer system may, for example, adjust a level of translucency of the third color value to exceed a threshold level of translucency (block 1222). The computer system may map the third color value as a color band across the annotated incisal third of the tooth based on a type of the tooth (block 1224). The computer system may map the third color value on the annotated incisal third of the tooth based on landmarks indicating one or more IP contacts (block 1226). Typically, the incisal third can be a thinnest and most translucent layer or region of the tooth. Therefore, the third color value mapped to the tooth can be adjusted to have a highest level of transparency (or at least a threshold level of transparency, such as at least 80% transparent) in comparison to the first and second color values. The third color value may include, but is not limited to, shades of blue, gray, and/or violet. Sometimes, the third color value can appear like highlights on the tooth. A level of transparency/translucency for the third color value can vary based on the type of the tooth. For example, central, lateral, canine, and molar teeth can each have a different placement of the color band and/or a different threshold level of translucency.
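Blocks 1208-1226 can be summarized as mapping one color value per annotated zone with the dominance, opacity, and translucency adjustments described above. The following sketch is illustrative; the zone keys, numeric thresholds, and per-tooth-type band placements are assumptions rather than values taken from the disclosure:

```python
def map_tooth_zone_colors(tooth_type, chroma_shade="A"):
    """Map color values to the three annotated zones of one tooth."""
    zones = {
        # cervical third: band of chroma (A/B/C/D or bleach shades),
        # with dominance raised above a threshold (block 1212)
        "cervical": {"color": chroma_shade, "dominance": 0.9},
        # middle third: most opaque zone (90-100% opacity per [0231])
        "middle": {"color": "white-yellow", "opacity": 0.95},
        # incisal third: thinnest, most translucent zone (>= ~80%)
        "incisal": {"color": "blue-gray", "translucency": 0.85},
    }
    # placement of the incisal color band varies by tooth type;
    # these placement labels are purely hypothetical
    zones["incisal"]["band_placement"] = {
        "central": "straight", "lateral": "wave",
        "canine": "pointed", "molar": "cusped",
    }.get(tooth_type, "straight")
    return zones
```

The returned mapping could then be stored with the tooth in a library and blended across zone boundaries in a later step.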
[0233] Blocks 1208-1226 can be performed in any order. For example, one or more of the blocks 1208-1226 can be performed in series. One or more of the blocks 1208-1226 can be performed in parallel. Moreover, although the process 1200 is described with respect to mapping three color values to the tooth, the process 1200 can also be performed to map any other quantity of color values (and texture values) to the tooth. The quantity of color values that are mapped to a tooth, for example, can vary based on how many zones are identified for the particular tooth, for a set of teeth, and/or for the particular patient. Sometimes, multiple color values can be mapped to one zone, where each color can be mapped to a different layer in the zone.
Accordingly, the process 1200 can be performed for each zone of each tooth (or other part of the patient’s mouth) for which more than one color value is being applied.
[0234] Still referring to the process 1200, the computer system can optionally adjust at least a gradient between the first, second, and third color values (block 1228). In other words, one or more of the color values can be adjusted to create a gradient or blend effect between the color values (e.g., across the zones identified or defined by the annotations and/or landmarks). This can make the resulting teeth (e.g., dentures) appear lifelike and realistic.
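The gradient adjustment of block 1228 amounts to interpolating between adjacent zone colors. A minimal sketch, assuming RGB tuples and simple linear interpolation (the disclosure does not specify a blending model):

```python
def blend(c1, c2, t):
    """Linearly interpolate between two RGB colors; t in [0, 1]."""
    return tuple(round(a + (b - a) * t) for a, b in zip(c1, c2))

def zone_gradient(c1, c2, steps):
    """Sample a smooth gradient across the boundary of two zones."""
    return [blend(c1, c2, i / (steps - 1)) for i in range(steps)]
```

For example, a short gradient between the middle-third and incisal-third color values would soften the transition so the zones do not read as hard bands.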
[0235] Optionally, the computer system can simulate ray-tracing on the tooth to identify and apply at least one color adjustment to the mapped first, second, and/or third color values (block 1230). Refer to blocks 1020-1022 in the process 1000 in FIGs. 10A-B for further discussion about simulating ray-tracing.
[0236] The computer system can determine whether there are more teeth to map color values to (block 1232). If there are more teeth, the computer system can return to block 1206 and repeat blocks 1206-1230 for each remaining tooth (or set of teeth). If there are no more teeth to map color values to, the computer system can return the digital dental model with data indicating the mapped color values (block 1234). This returned data can be used, as described herein, to generate a denture design for the patient and/or manufacture (e.g., print, fabricate) dentures for the patient. Refer to FIG. 1A, the process 900 in FIG. 9, and the process 1000 in FIGs. 10A-B for further discussion about how the returned data can be used and/or outputted.
[0237] Sometimes, the computer system can optionally generate and return a patient-specific color library based on the mapped color values (block 1236). In some implementations, block 1236 can be performed before, during, or after one or more other blocks in the process 1200. For example, the block 1236 can be performed after block 1202, in response to receiving a digital dental model of a patient’s mouth. As another example, the block 1236 can be performed in response to receiving an oral scan of the patient’s mouth, which can occur even before receiving the digital dental model of the patient’s mouth in block 1202. The computer system can perform image or scan data processing techniques described herein to extract color data from the oral scan. Then, the computer system can add the extracted color data to a patient-specific color library. The patient-specific color library can be populated with colors and/or textures that are identified for the specific patient over time (e.g., whenever oral scan data or image data of the patient are received). The patient-specific color library can therefore provide an archive of exact color and/or texture matches for each of the particular patient’s teeth and/or gingiva. The patient-specific color library can be stored in a data repository or other type of data store described herein. The patient-specific color library can be accessed, retrieved, and/or referenced with any of the disclosed processes whenever designing, adjusting, and/or generating digital dental models for the particular patient. Advantageously, maintaining and updating the patient-specific color library over time can allow for efficient, quick, and easy completion of tasks such as designing digital dental models for the patient and generating print instructions. The patient-specific library can also advantageously provide for consistency in color and/or texture applied to teeth models that are generated for the particular patient over time.
In some implementations, the patient-specific library can be used by the computer system to generate one or more genus color and/or texture libraries, which can then be used to design and generate digital dental models for one or more other patients.
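The patient-specific color library of block 1236 could be sketched as a per-patient archive keyed by tooth or gingiva region. The class name, key structure, and methods below are illustrative assumptions; the disclosure only requires that extracted color data accumulate per patient over time:

```python
class PatientColorLibrary:
    """Minimal sketch of a per-patient color/texture archive."""

    def __init__(self, patient_id):
        self.patient_id = patient_id
        self.entries = {}   # region id -> list of observed colors

    def add_scan_colors(self, extracted):
        """Merge color data extracted from a new oral scan or image.

        extracted -- dict of region id (e.g., 'tooth_8') -> color value.
        """
        for region, color in extracted.items():
            self.entries.setdefault(region, []).append(color)

    def latest(self, region):
        """Most recent color observed for a tooth or gingiva region."""
        return self.entries[region][-1]
```

Keeping the full history per region (rather than only the latest value) is one way to support both consistency across models and later derivation of genus libraries.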
[0238] FIG. 12C illustrates an example digital dental model 1250 of the patient in FIGs. 12A-B for determining the teeth color data with horizontal slicing techniques. In the model 1250, zones have been identified, using the techniques described in the process 1200 of FIGs. 12A-B, for chroma 1254, a middle portion 1256 of each tooth 1252A-N, and an incisal portion 1258 of each tooth 1252A-N. Each tooth can be considered a triangular mesh having color values assigned per zone in the triangular mesh. The color value assignments per zone can be stored in association with the tooth in a library, as described herein, then used to design and manufacture dentures for the patient.
[0239] The chroma 1254 color value can be applied as a band along a top edge/portion of each tooth 1252A-N. Between a lower boundary of the chroma 1254 and an upper boundary of the incisal portion 1258, the middle portion 1256 of each tooth 1252A-N can be assigned another color value, such as yellow and/or white shades. The middle portion 1256 of each tooth 1252A-N can also have some texture applied thereto in order to make the teeth 1252A-N appear lifelike. The color and/or texture can be applied differently to each tooth 1252A-N in the respective middle portion 1256 based on the type of tooth, as shown by the vertical lines in the middle portions 1256 of the teeth 1252A-N in FIG. 12C (e.g., texture can be applied in higher concentration closer to a center of each central tooth while texture can be applied on sides of canine teeth rather than in a center of each canine tooth). The incisal portion 1258 color value can be applied from the upper boundary of the incisal portion 1258 separating the incisal portion 1258 from the middle portion 1256 to a bottom or tip of each tooth 1252A-N. The color value can be applied to the incisal portion 1258 in a pattern or design that varies based on the type of tooth, as shown in FIG. 12C. For example, the upper boundary can have a more prominent wave design (e.g., fewer but larger valleys defining the upper boundary) for canines than for central teeth. A gradient of the color value assigned to the incisal portion 1258 may also be adjusted according to one or more rules described herein that provide for more realistic-looking teeth design.
[0240] As shown in patient oral scan data 1260, horizontal slicing may be performed along a dental arch. Where the oral scan data 1260 is sliced can vary based on what type of coloring, texture, and/or appearance is desired for the particular patient. As an illustrative example, to determine an average tooth shade, three horizontal slices 1262, 1264, and 1266 can be made using the techniques described herein (the slice 1262 can correspond to an incisal third, the slice 1264 can correspond to a middle third, and the slice 1266 can correspond to a cervical third of a particular tooth and/or a set of teeth). Each of the slices 1262, 1264, and 1266 can be assigned colors having different values, including but not limited to a frequency, wavelength, or other numerical values. As described in reference to the color values assigned to gingiva in FIG. 11B, the color values assigned to each of the slices 1262, 1264, and 1266 can make up a unique signature or fingerprint for the particular tooth (or set of teeth), which can then be associated with most commonly assigned or perceived dental colors. The associated dental colors can then be used for designing and manufacturing dentures that match the patient’s actual teeth and appear lifelike. Although the slices 1262, 1264, and 1266 are shown as applying to multiple teeth, the same techniques described herein can be applied to smaller areas in the oral scan data 1260, such as one tooth, 2-4 teeth, 6 anterior teeth, or any other combination of teeth desired by a relevant user, such as a dentist.
[0241] FIG. 12D illustrates another example digital dental model 1270 of the patient in FIGs. 12A-B for determining the teeth color data with vertical slicing techniques. In this example, the techniques described in reference to FIGs. 12A-B are used to determine vertically-sliced zones for the patient’s teeth. Vertical slicing can beneficially be used to color a single tooth to appear more lifelike. Vertical slicing can be used to identify a lightest zone 1272, a middle zone 1274, and a darkest zone 1276 for the particular tooth. Averaging color values from each of the zones 1272, 1274, and 1276 can provide for determining an appropriate and accurate transition of color from a top portion to a bottom portion of the particular tooth.
[0242] FIG. 13A is an example digital dental model 1300 having annotations 1302A-N for defining teeth 1304A-N and corresponding gingiva 1306 color data. The digital dental model 1300 can be stored in a library and used for printing/manufacturing corresponding dentures. The digital dental model 1300 of FIG. 13A can be generic and used for denture design and manufacturing for many patients. Sometimes, the model 1300 can be specific to a particular patient’s mouth. The annotations 1302A-N can be defined in the library as reference points for identifying landmarks in the model 1300 that are then used to assign particular color values to the teeth 1304A-N and the gingiva 1306. For example, the annotations 1302A-N can be used to determine layering of one or more colors and gradient effects to apply to one or more of the layered colors applied to the teeth 1304A-N (and/or the gingiva 1306). Annotations 1308A-N over the gingiva 1306 can be used to identify sockets or boundaries 1310 of the gingiva 1306 and the teeth 1304A-N. The annotations 1308A-N can also be used to identify and assign gingiva halo color data and boundaries 1312 in order to make the gingiva 1306 appear realistic. Refer to FIGs. 11-12 for further discussion about using the annotations 1302A-N and 1308A-N to identify and assign color values to both the teeth 1304A-N and the gingiva 1306.
[0243] FIG. 13B is an example digital dental model 1320 having cutback layers 1328 of teeth color data. As shown in front 1324 and side 1326 views of the digital dental model 1320, the cutback layers 1328 can be assigned different colors and/or color values (e.g., opacity, gradient, brightness, translucency) to allow for more depth of color and more realistic-looking teeth. For example, the digital dental model 1320 can include the cutback layer 1328 that has an opaqueness level that exceeds some threshold level and a matching layer 1322 that is more transparent than the cutback layer 1328. The matching layer 1322 can overlay at least a portion of the cutback layer 1328. For example, as shown in the front view 1324 of the digital dental model 1320, the matching layer 1322 can be more apparent (e.g., more opaque) near a top portion of a tooth, but as shown in the side view 1326, the matching layer 1322 can be less apparent (e.g., more transparent) as it covers an entire front portion of the tooth and extends down the tooth towards a tip or bottom portion of the tooth. The cutback layer 1328 can be assigned color shades that include, but are not limited to, whites, yellows, greys, oranges, and/or browns. The transparent matching layer 1322 can be assigned color shades that include, but are not limited to, more transparent blues and/or greys. As mentioned above, this combination of layering colors can cause resulting teeth to appear more realistic and lifelike.
[0244] FIG. 13C is an example digital dental model 1330 having color and texture data for teeth 1332A-N and gingiva 1334. As described herein, various layers and/or zones can be identified per tooth 1332A-N and gingiva 1334. Colors and other color values (e.g., gradient, transparency, brightness) can be assigned to each layer and/or zone. For example, colors A2 and A3 can be assigned to 2 layers on the tooth 1332A. Refer to FIGs. 12A-D for further discussion about assigning color to the teeth 1332A-N. A lightest pink color can be assigned to a zone 1336A of the gingiva 1334, as defined by annotation points and landmarks described in reference to FIGs. 11A-B. A middle-level pink color can be assigned to a zone 1336B of the gingiva 1334, and a darkest pink and one or more textures can be assigned to a zone 1336C of the gingiva 1334.

[0245] Any of the color data assigned and described herein can be stored in association with each layer of the teeth 1332A-N and/or each layer of the gingiva 1334 in a library described herein. Although each layer can be assigned color and/or texture data, each layer may also be further divided into additional or smaller color zones so as to increase an amount of detail and/or characteristics that can be applied to the model 1330 to resemble realistic teeth and gingiva.

[0246] FIG. 14 shows an example of a computing device 1400 and an example of a mobile computing device that can be used to implement the techniques described here. The computing device 1400 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The mobile computing device is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart-phones, and other similar computing devices.
The components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document.
[0247] The computing device 1400 includes a processor 1402, a memory 1404, a storage device 1406, a high-speed interface 1408 connecting to the memory 1404 and multiple high-speed expansion ports 1410, and a low-speed interface 1412 connecting to a low-speed expansion port 1414 and the storage device 1406. Each of the processor 1402, the memory 1404, the storage device 1406, the high-speed interface 1408, the high-speed expansion ports 1410, and the low-speed interface 1412, are interconnected using various busses, and can be mounted on a common motherboard or in other manners as appropriate. The processor 1402 can process instructions for execution within the computing device 1400, including instructions stored in the memory 1404 or on the storage device 1406 to display graphical information for a GUI on an external input/output device, such as a display 1416 coupled to the high-speed interface 1408. In other implementations, multiple processors and/or multiple buses can be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices can be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).
[0248] The memory 1404 stores information within the computing device 1400. In some implementations, the memory 1404 is a volatile memory unit or units. In some implementations, the memory 1404 is a non-volatile memory unit or units. The memory 1404 can also be another form of computer-readable medium, such as a magnetic or optical disk.
[0249] The storage device 1406 is capable of providing mass storage for the computing device 1400. In some implementations, the storage device 1406 can be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. A computer program product can be tangibly embodied in an information carrier. The computer program product can also contain instructions that, when executed, perform one or more methods, such as those described above. The computer program product can also be tangibly embodied in a computer- or machine-readable medium, such as the memory 1404, the storage device 1406, or memory on the processor 1402.
[0250] The high-speed interface 1408 manages bandwidth-intensive operations for the computing device 1400, while the low-speed interface 1412 manages lower bandwidth-intensive operations. Such allocation of functions is exemplary only. In some implementations, the high-speed interface 1408 is coupled to the memory 1404, the display 1416 (e.g., through a graphics processor or accelerator), and to the high-speed expansion ports 1410, which can accept various expansion cards (not shown). In the implementation, the low-speed interface 1412 is coupled to the storage device 1406 and the low-speed expansion port 1414. The low-speed expansion port 1414, which can include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet) can be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.
[0251] The computing device 1400 can be implemented in a number of different forms, as shown in the figure. For example, it can be implemented as a standard server 1420, or multiple times in a group of such servers. In addition, it can be implemented in a personal computer such as a laptop computer 1422. It can also be implemented as part of a rack server system 1424. Alternatively, components from the computing device 1400 can be combined with other components in a mobile device (not shown), such as a mobile computing device 1450. Each of such devices can contain one or more of the computing device 1400 and the mobile computing device 1450, and an entire system can be made up of multiple computing devices communicating with each other.
[0252] The mobile computing device 1450 includes a processor 1452, a memory 1464, an input/output device such as a display 1454, a communication interface 1466, and a transceiver 1468, among other components. The mobile computing device 1450 can also be provided with a storage device, such as a micro-drive or other device, to provide additional storage. Each of the processor 1452, the memory 1464, the display 1454, the communication interface 1466, and the transceiver 1468, are interconnected using various buses, and several of the components can be mounted on a common motherboard or in other manners as appropriate.
[0253] The processor 1452 can execute instructions within the mobile computing device 1450, including instructions stored in the memory 1464. The processor 1452 can be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor 1452 can provide, for example, for coordination of the other components of the mobile computing device 1450, such as control of user interfaces, applications run by the mobile computing device 1450, and wireless communication by the mobile computing device 1450.
[0254] The processor 1452 can communicate with a user through a control interface 1458 and a display interface 1456 coupled to the display 1454. The display 1454 can be, for example, a TFT (Thin-Film-Transistor Liquid Crystal Display) display or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. The display interface 1456 can comprise appropriate circuitry for driving the display 1454 to present graphical and other information to a user. The control interface 1458 can receive commands from a user and convert them for submission to the processor 1452. In addition, an external interface 1462 can provide communication with the processor 1452, so as to enable near area communication of the mobile computing device 1450 with other devices. The external interface 1462 can provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces can also be used.
[0255] The memory 1464 stores information within the mobile computing device 1450. The memory 1464 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. An expansion memory 1474 can also be provided and connected to the mobile computing device 1450 through an expansion interface 1472, which can include, for example, a SIMM (Single In Line Memory Module) card interface. The expansion memory 1474 can provide extra storage space for the mobile computing device 1450, or can also store applications or other information for the mobile computing device 1450. Specifically, the expansion memory 1474 can include instructions to carry out or supplement the processes described above, and can include secure information also. Thus, for example, the expansion memory 1474 can be provided as a security module for the mobile computing device 1450, and can be programmed with instructions that permit secure use of the mobile computing device 1450. In addition, secure applications can be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.
[0256] The memory can include, for example, flash memory and/or NVRAM memory (nonvolatile random access memory), as discussed below. In some implementations, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The computer program product can be a computer- or machine-readable medium, such as the memory 1464, the expansion memory 1474, or memory on the processor 1452. In some implementations, the computer program product can be received in a propagated signal, for example, over the transceiver 1468 or the external interface 1462.
[0257] The mobile computing device 1450 can communicate wirelessly through the communication interface 1466, which can include digital signal processing circuitry where necessary. The communication interface 1466 can provide for communications under various modes or protocols, such as GSM voice calls (Global System for Mobile communications), SMS (Short Message Service), EMS (Enhanced Messaging Service), or MMS messaging (Multimedia Messaging Service), CDMA (code division multiple access), TDMA (time division multiple access), PDC (Personal Digital Cellular), WCDMA (Wideband Code Division Multiple Access), CDMA2000, or GPRS (General Packet Radio Service), among others. Such communication can occur, for example, through the transceiver 1468 using a radio frequency. In addition, short-range communication can occur, such as using a Bluetooth, WiFi, or other such transceiver (not shown). In addition, a GPS (Global Positioning System) receiver module 1470 can provide additional navigation- and location-related wireless data to the mobile computing device 1450, which can be used as appropriate by applications running on the mobile computing device 1450.
[0258] The mobile computing device 1450 can also communicate audibly using an audio codec 1460, which can receive spoken information from a user and convert it to usable digital information. The audio codec 1460 can likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of the mobile computing device 1450. Such sound can include sound from voice telephone calls, can include recorded sound (e.g., voice messages, music files, etc.), and can also include sound generated by applications operating on the mobile computing device 1450.
[0259] The mobile computing device 1450 can be implemented in a number of different forms, as shown in the figure. For example, it can be implemented as a cellular telephone 1480. It can also be implemented as part of a smart-phone 1482, personal digital assistant, or other similar mobile device.
[0260] Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which can be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
[0261] These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms machine-readable medium and computer-readable medium refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine- readable medium that receives machine instructions as a machine-readable signal. The term machine-readable signal refers to any signal used to provide machine instructions and/or data to a programmable processor.
[0262] To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
[0263] The systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), and the Internet.
[0264] The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
[0265] FIG. 15A illustrates an example system 1500 for storing digital tooth libraries that are augmented with data such as color, layers, texture, and/or transparency data. The system 1500 can include an augmented digital tooth libraries data store 1502. The data store 1502 can be configured to store a plurality of augmented tooth libraries 1504A-N. Each library can include information, data, and/or properties/physical characteristics for individual teeth and/or groups/sets of teeth (e.g., blocks of teeth, an arch form of teeth, anterior teeth, posterior teeth, molars). In some implementations, the data store 1502 can be configured to store color and/or texture libraries for one or more different teeth and/or gingivae. The color and/or texture libraries can be archived in the data store 1502, then later retrieved for use in designing teeth for one or more patients. Sometimes, the color and/or texture libraries can be stored in association with particular teeth and/or gingivae. Sometimes, the color and/or texture libraries can be stored in association with a particular patient(s). The color and/or texture libraries can be used to design teeth for any patient. In some implementations, the color and/or texture libraries may only be used to design teeth for patients having particular conditions and/or design needs. In yet other implementations, one or more of the libraries 1504A-N can be patient-specific color and/or texture libraries. The data store 1502 can maintain and update one or more libraries for each patient, as described in reference to block 1236 in the process 1200 of FIGs. 12A and 12B.
[0266] Each of the libraries 1504A-N can include a plurality of sub-libraries. The sub-libraries can have a same set of teeth but with different color properties (e.g., different general colors, which can be applicable to a variety of different use cases) or other physical properties (e.g., texture, transparency).
Therefore, when designing a dental appliance, any one of the sub-libraries can be selected. Using the sub-libraries can advantageously reduce an amount of time needed to design the dental appliance since a relevant user would not have to determine and apply a specific set of colors to the selected tooth library for the dental appliance design. Instead, by selecting one of the sub-libraries for the particular tooth library, the dental appliance can be designed with the specific color(s) associated with the selected sub-library. The color(s) of the selected sub-library can also be further customized. For example, a sub-library can include a plurality of layers, zones, or meshes. Each of the layers, zones, or meshes can have different color, texture, and/or transparency properties. Any of these properties can be adjusted for a layer, zone, or mesh, then automatically applied to any other portion of the selected tooth library having the same layer, zone, or mesh. In some implementations, each of the meshes can be further divided into zones. Sometimes, instead of zones, each mesh can have RGBA values/properties. The RGB can correspond to color data for the mesh and the A can correspond to a translucency for the mesh (e.g., a percentage of translucency, a percentage of opaqueness).
[0267] In the example of FIG. 15A, the library 1504A contains sub-libraries for different colors A1, A2, and D2. Any of the sub-libraries can be selected and applied to a dental appliance design so that the properties associated with the selected sub-library are automatically applied to the corresponding tooth/teeth in the dental appliance design. Each of the libraries 1504A-N can contain additional or fewer sub-libraries.
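To make the structure concrete, the following sketch models a sub-library in which each mesh carries an RGBA record, with the A component stored as an opacity fraction. The class names, shade code, and all numeric values are illustrative assumptions rather than data from this disclosure.

```python
from dataclasses import dataclass, field

# Hypothetical in-memory layout for one sub-library; names and values
# are illustrative, not taken from the disclosure.

@dataclass
class Mesh:
    name: str          # e.g., "mesh 1" (innermost) through "mesh N" (gingiva)
    rgba: tuple        # (R, G, B, A): RGB is color, A is opacity in [0, 1]
    texture: str = ""  # optional texture-map identifier

@dataclass
class SubLibrary:
    shade: str         # e.g., "A1", "A2", "D2"
    meshes: list = field(default_factory=list)

a1 = SubLibrary(shade="A1", meshes=[
    Mesh("mesh 1", (250, 245, 230, 1.00)),  # opaque inner body and root
    Mesh("mesh 2", (252, 248, 238, 0.70)),
    Mesh("mesh 3", (254, 251, 244, 0.40)),
    Mesh("mesh 4", (255, 255, 255, 0.05)),  # near-transparent outer coat
    Mesh("mesh N", (230, 170, 170, 0.80)),  # gingiva overlay
])

# Selecting the "A1" sub-library yields every mesh with its preset RGBA,
# so no per-design color assignment is needed at design time.
outer_coat_opacity = a1.meshes[3].rgba[3]
```

Selecting a different shade would simply swap in a sibling `SubLibrary` holding the same mesh names with different RGBA values, matching the time savings described above.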
[0268] Each of the sub-libraries can include a plurality of meshes associated with the tooth. Sometimes, a mesh can have multiple nested meshes. The meshes can also be considered layers and/or zones. In the example of FIG. 15A, mesh N corresponds to a mesh for tooth gums. Sometimes, the gums can have multiple meshes or sub-meshes, like a tooth. As an illustrative example, the Color A1 sub-library for the tooth library 1504A can include 4 meshes (e.g., layers). In some implementations, one or more of the same or different teeth can have additional or fewer meshes.
[0269] For example, if the library 1504A comprises 1 tooth, then each of the sub-libraries in the library 1504A can include a plurality of meshes for the 1 tooth in different colors. The Color A1 sub-library, for example, can include a plurality of meshes for Tooth A, where each mesh includes values, identifiers, labels, or other indicators for mesh properties. The mesh properties can include color, transparency, and/or texture.
[0270] Each of the sub-libraries can also include at least one texture map. The texture map can be pinned to or otherwise associated with different parts of the tooth. The texture map can be applied to one or more different layers/meshes of the tooth. The texture map can include stippling or other types of texture features to make the tooth (or gingiva) appear more realistic. Each of the sub-libraries for the library 1504A can be stored in the data store 1502 with color values or identifiers for each mesh, translucency or transparency values for each mesh, and/or texture values for each mesh. In some implementations, different parts of each mesh can include different property values, which may also be customizable depending on the patient and/or dental appliance being designed.
[0271] FIG. 15A also illustrates a front view 1506 and a side view 1514 of a tooth 1512 and gingiva 1510, which correspond to the library 1504A. Each of the meshes 1, 2, 3, 4, and N 1508A, B, C, D, and N, respectively, can include different shapes and different colors, transparency, and/or texture properties. As shown in the front and side views 1506 and 1514, the mesh 1 1508A can be an innermost layer that encompasses a portion of the tooth 1512 and a portion of the gingiva 1510. In other words, the mesh 1 1508A can include a root portion of the tooth 1512 that can be at least partly visible through the gingiva 1510. Coloring and adding texture to the root portion with the mesh 1 1508A can advantageously make the tooth 1512 appear more realistic. Sometimes, the gums or gingiva 1510 surrounding the tooth's root can appear whiter or lighter in color (e.g., lighter pink) than other portions of the gums because the gums can have some degree of translucency. Thus, the portion of the tooth 1512 that includes the root may be partially visible underneath the gums. As a result of the disclosed techniques, the root of the tooth 1512 can be fabricated (e.g., 3D printed) underneath the gingiva 1510 with one or more color values and the gingiva 1510 can be fabricated with some degree or percentage of translucency so that the root portion can be partially visible through the gingiva 1510. Accordingly, the gingiva 1510 can have different levels of thickness in each layer/mesh around the root portion of the tooth 1512 as well as different levels of transparency in each layer/mesh around the root portion of the tooth 1512. The different levels of thickness and/or transparency per layer/mesh can be stored in the augmented tooth libraries 1504A-N described herein.
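The translucency effect described above can be approximated with standard "over" alpha compositing: the color seen at the gum surface is a blend of the gingiva color and the root color beneath it, weighted by the gingiva's opacity. The RGB values and the 0.8 opacity below are illustrative assumptions, not values from this disclosure.

```python
# Blend an opaque background (the tooth root) through a translucent
# foreground layer (the gingiva) using per-channel "over" compositing.

def composite_over(fg, fg_alpha, bg):
    """Return the color of bg seen through fg at opacity fg_alpha."""
    return tuple(round(fg_alpha * f + (1 - fg_alpha) * b) for f, b in zip(fg, bg))

gingiva_rgb = (220, 120, 120)  # pink gum color (illustrative)
root_rgb = (250, 240, 220)     # whitish root beneath the gum (illustrative)

seen = composite_over(gingiva_rgb, 0.8, root_rgb)
# seen == (226, 144, 140): every channel is brighter than the gingiva
# alone, i.e., the gum over the root reads as a lighter pink.
```

At opacity 1.0 the gingiva would fully hide the root, which is why the stored per-mesh transparency value controls how much of the root shows through.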
[0272] A second, larger layer, mesh 2 1508B, can be more transparent than the mesh 1 1508A and can include a larger portion of the visible tooth 1512 than the mesh 1 1508A. A third, still larger layer, mesh 3 1508C, can be more transparent than both the meshes 1 and 2, 1508A and 1508B, respectively, and can include a larger portion of the visible tooth 1512. The mesh 4 1508D can be an outermost layer of the tooth 1512, and can correspond to a fully transparent (and/or mostly transparent) layer that may be designated to provide a protective coating on an outer surface of the tooth. The mesh N 1508N can be a mesh for the gingiva 1510, which can overlay or otherwise be an outermost layer for the gingiva 1510 (e.g., overlaying the mesh 1 1508A). The meshes 1, 2, 3, 4, and N 1508A, B, C, D, and N, respectively, can be shaped differently based on the type of tooth, a shape of the tooth itself, a desired dental appearance for the particular patient, and/or a dental appliance to be designed with the augmented tooth libraries described herein.
[0273] As described further below, one or more of the meshes shown and described in FIG. 15A can overlap to provide a more realistic appearance to the tooth when used in designing and fabricating a dental appliance. The overlapping of meshes can also provide for increased depth in color and/or texture of the particular tooth, thereby making the tooth look more realistic when fabricated (e.g., 3D printed).
[0274] FIG. 15B illustrates an example system 1520 for generating augmented digital tooth libraries from new and/or preexisting digital tooth libraries. Sometimes, each tooth library can be generated for a specific patient. In some implementations, tooth libraries can be generated for a genus or population of patients having similar conditions, teeth, gingiva, and/or other characteristics (e.g., demographics, age, geographic region). An augmentation computer system 1522 can communicate (e.g., wired and/or wirelessly via networks) with the augmented tooth libraries data store 1502, a non-augmented digital tooth libraries data store 1524, a layer models and rulesets data store 1526, and a gingiva models and rulesets data store 1528. Sometimes, one or more of the data stores 1502, 1524, 1526, and 1528 can be a same data store and/or network of data stores.
[0275] In brief, the augmentation computer system 1522 can be any type of computer system, cloud-based system, and/or network of computing devices configured to augment digital tooth libraries and/or digital tooth files with additional data and/or properties, including but not limited to color, texture, and/or transparency properties described throughout this disclosure.
[0276] The augmented digital tooth libraries data store 1502 can be configured to store digital tooth libraries and/or digital tooth files that have been augmented using the disclosed techniques by the augmentation computer system 1522. The augmented digital tooth libraries can be retrieved by one or more computing systems during runtime for use in designing realistic dental appliances for patients. Refer to description throughout this disclosure for additional details.
[0277] The non-augmented digital tooth libraries data store 1524 can be configured to store a variety of non-augmented, standard digital tooth libraries that have been generated by a variety of different sources and/or users. The non-augmented digital tooth libraries may not contain additional data or properties such as color, texture, and/or transparency properties. For example, a non-augmented digital tooth library can be grey-scale or may have no color, texture, or transparency. The non-augmented digital tooth libraries and/or files may only contain a digital tooth. Sometimes, the non-augmented digital tooth libraries and/or files may not contain any gingiva component. In some implementations, one or more of the non-augmented digital tooth libraries and/or files may contain at least a portion of the gingiva component for the tooth. Sometimes, the non-augmented digital tooth libraries and/or files can include more than one tooth, such as adjacent teeth, a bridge or arch form of teeth, a block of teeth, or another group of teeth.
[0278] The layer models and rulesets data store 1526 can be configured to store information that can be used by the augmentation computer system 1522 to generate layers (e.g., meshes, zones) for each tooth in the non-augmented digital tooth libraries and/or digital tooth files according to morphologies that have been generated for each type of tooth (e.g., refer to FIG. 17). The data store 1526 can also be accessed by the computer system 1522 to retrieve one or more rules that can be used to apply color, texture, and/or transparency properties to each of the layers of each of the digital tooth libraries and/or digital tooth files. The rules can be associated with different types of teeth, groups of teeth, and/or dental appliances that the digital libraries are intended to be used for.
[0279] The gingiva models and rulesets data store 1528 can be configured to store information that can be used by the augmentation computer system 1522 to generate layers (e.g., meshes, zones) for each gingiva component in the non-augmented digital tooth libraries and/or digital tooth files. The data store 1528 can also be accessed by the computer system 1522 to retrieve one or more rules that can be used to apply color, texture, and/or transparency properties to each of the layers of each gingiva component in the digital tooth libraries and/or digital tooth files. The rules can be associated with different types of teeth, groups of teeth, dental appliances that the digital libraries are intended to be used for, types of gingiva, locations of the gingiva, groups of gingiva, etc.
[0280] The data store 1528 can also include rules and/or models for filling in gaps or missing parts of gingiva that may not be part of individual digital tooth libraries. For example, each digital tooth library can include a single tooth and a gingiva component that extends like a cone from a root of the tooth to a predetermined endpoint. When a plurality of digital tooth libraries are loaded together into a digital dental model, gaps may exist between the gingiva components of the teeth once the teeth are placed adjacent to each other. This is because the digital tooth libraries may not include gingiva components that extend beyond reference points of the corresponding tooth. The rules in the data store 1528 can therefore be used to fill in the gaps with gingiva having color, texture, and/or transparency properties to create a uniform, realistic grouping of teeth and gingiva in the digital dental model.
[0281] Still referring to FIG. 15B, the augmentation computer system 1522 can receive, access, or retrieve non-augmented digital tooth libraries from the non-augmented digital tooth libraries data store 1524 in block A (1530). The non-augmented digital tooth libraries can be preexisting.
[0282] The computer system 1522 can additionally or alternatively receive, access, or retrieve non-augmented digital tooth files from one or more other sources in block B (1532). The other sources can include, but are not limited to, user and/or computing devices of relevant users such as dentists, orthodontists, and technicians, teeth designer computing systems, dental appliance designer computing systems, fabrication devices/machines, etc. The non-augmented digital tooth files can be new tooth libraries.
[0283] The computer system 1522 can identify and retrieve one or more models and/or rulesets for tooth layers and/or gingiva components in block C (1534). For example, the computer system 1522 can receive a non-augmented digital tooth library for a central incisor. The computer system 1522 can then access the layer models and rulesets data store 1526 and request/retrieve layer models that correspond to central incisors and/or rules for applying different color, texture, and/or transparency properties to central incisors. The computer system 1522 can also retrieve a morphology (refer to FIG. 17) from the data store 1526, which can be used to determine the number and arrangement of layers (e.g., meshes, zones) for the central incisor.
[0284] In block C (1534), the computer system 1522 can also access the gingiva models and rulesets data store 1528 to request/retrieve layer models that correspond to a gingiva component of central incisors and/or rules for applying different color, texture, and/or transparency properties to the gingiva component of central incisors. As described herein, each type of tooth can have different layer models and/or rulesets.
[0285] Sometimes, the computer system 1522 can access the data stores 1526 and 1528 at a same time. Sometimes, the computer system 1522 can access the data stores 1526 and 1528 at different times (e.g., the data store 1526 can be accessed when the computer system 1522 is augmenting the digital tooth library for the tooth and the data store 1528 can be accessed when the computer system 1522 is augmenting the digital tooth library for the gingiva component of the tooth).
[0286] In block D (1536), the computer system 1522 can apply the models and/or rulesets to the digital tooth libraries and/or digital tooth files to generate 3D layers for the tooth. In other words, the computer system 1522 can generate a 3D model of where one or more layers may exist in the tooth. Sometimes, the computer system 1522 can also apply the models and/or rulesets to generate 3D layers for the gingiva component (if the non-augmented digital tooth library and/or file has a gingiva component or portion thereof). If, for example, the digital tooth library only has an outer surface layer, the computer system 1522 can apply the retrieved rulesets to determine how to make one or more interior layers (e.g., meshes, surfaces, zones) and the shape of the interior layers. Refer to at least the process 1000 in FIGs. 10A and 10B for further discussion about generating the 3D layers for the tooth.
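One plausible way a ruleset could derive interior layers from a library that has only an outer surface is to offset each vertex toward the mesh centroid by a per-layer scale factor. The uniform shrink and the factors below are assumptions for illustration; a production ruleset would likely use tooth-specific morphologies instead.

```python
# Derive nested interior layers by scaling an outer surface toward its
# centroid. Per-layer scale factors are illustrative, not from the disclosure.

def centroid(vertices):
    """Average of all vertex positions."""
    n = len(vertices)
    return tuple(sum(v[i] for v in vertices) / n for i in range(3))

def shrink(vertices, factor):
    """Move every vertex toward the centroid; factor < 1 shrinks the mesh."""
    c = centroid(vertices)
    return [tuple(c[i] + factor * (v[i] - c[i]) for i in range(3)) for v in vertices]

# A toy outer surface (a tetrahedron standing in for the tooth's outer mesh).
outer = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 0.0), (0.0, 0.0, 2.0)]

# Hypothetical ruleset: nesting factors from just inside the surface inward.
ruleset = {"mesh 3": 0.85, "mesh 2": 0.65, "mesh 1": 0.45}
layers = {name: shrink(outer, f) for name, f in ruleset.items()}
```

Each generated layer is strictly inside the one before it, giving the nested arrangement that the color and transparency rules are then applied to.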
[0287] The computer system 1522 can generate gingiva for the corresponding digital tooth (block E, 1538). If the non-augmented digital tooth library does not contain any portion of the gingiva component for the tooth, then the computer system 1522 can generate the gingiva component for the tooth using the retrieved models and/or rules from the gingiva models and rulesets data store 1528. The retrieved models and/or rules can be specific to the particular type of tooth in the digital tooth library/file. Generating the gingiva can include generating 3D layers (e.g., meshes, zones, surfaces) of the gingiva component for the tooth in the digital tooth library/file. Refer to at least the process 1100 in FIGs. 11A and 11B for further discussion about generating the gingiva.
[0288] The computer system 1522 can generate coloring, transparency, and/or texture data for each layer of the digital tooth and/or gingiva (block F, 1540). Refer to FIGs. 15A, 16, 17, and 18 for further discussion. For example, the computer system 1522 can apply one or more of the retrieved rules to each layer of the digital tooth to assign a color, transparency, and/or texture value/identifier to the layer. The rules can indicate, for each layer, a preferred color, transparency, and/or texture that the tooth can be for a particular type of dental appliance. Each layer of the tooth and/or gingiva can include a unique, different RGBA value, where RGB can be a color value and A can be a transparency value. In some implementations, generating the data for each layer can also include assigning each layer to one or more zones. As a result, when desired, changes can be made to color, texture, and/or transparency in a particular zone, and those changes can be implemented across all layers and/or portions of the layers that are assigned to the particular zone. This can advantageously reduce an amount of time in the dental appliance design process while also using less compute power and fewer resources to effect changes across multiple parts of the tooth.
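The zone mechanism described in this paragraph can be sketched as a simple indirection: layers reference named zones, and a zone's properties are resolved at lookup time, so a single edit reaches every assigned layer. Zone names, layer names, and property values here are illustrative assumptions.

```python
# Layers point at named zones; editing the zone edits every member layer.
zones = {
    "incisal": {"rgba": (254, 252, 246, 0.5), "texture": "smooth"},
    "body":    {"rgba": (250, 244, 228, 0.8), "texture": "stippled"},
}
layer_zone = {"mesh 2": "body", "mesh 3": "body", "mesh 4": "incisal"}

def layer_props(layer):
    """Resolve a layer's color/texture/transparency through its zone."""
    return zones[layer_zone[layer]]

# A single edit to the "body" zone ...
zones["body"]["rgba"] = (248, 240, 220, 0.8)

# ... is now reflected by every layer assigned to that zone, while
# layers in other zones are untouched.
body_layers = [layer for layer, zone in layer_zone.items() if zone == "body"]
```

Because the edit touches one record rather than every affected mesh, this matches the compute savings the paragraph describes.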
[0289] The computer system 1522 can perform same or similar operations for the gingiva layers. The computer system 1522 can also generate a plurality of sub-libraries for the digital tooth library, where each sub-library contains different color, transparency, and/or texture values/identifiers for the same layers of the tooth and/or gingiva. Each of the sub-libraries can provide tooth and/or gingiva properties that are unique to different dental appliances or other use cases. Refer to at least the processes 1000 in FIGs. 10A and 10B and 1100 in FIGs. 11A and 11B for further discussion about applying color, texture, and/or transparency data to layers and/or zones of a tooth and/or gingiva.
[0290] In block G (1542), the computer system 1522 can generate augmented digital tooth libraries. The augmented digital tooth libraries can include the nesting of layers for each of the tooth and the gingiva component. The augmented digital tooth libraries can include color, texture, and/or transparency values/identifiers that have been assigned, by the computer system 1522, to each of the nested layers for the tooth and the gingiva component. The augmented digital tooth libraries can include a plurality of sub-libraries, where the sub-libraries include the same layers per tooth and gingiva, but may have different color, texture, and/or transparency values/identifiers assigned thereto. Storing this variety of information in the augmented digital tooth libraries can allow for dental appliances to be more quickly and efficiently designed. The color, texture, and transparency properties can be stored and loaded with the digital tooth libraries, which can reduce an amount of processing time and use of compute power needed to design dental appliances using the digital tooth libraries.
[0291] The augmented digital tooth libraries can then be universally used for designing any type of dental appliance, including but not limited to dentures, crowns, bridges, and/or caps. As described further in reference to FIGs. 15A and 15C, when designing a particular type of dental appliance, a computer system (and/or user) can select predefined color, texture, and/or transparency properties for a selected augmented digital tooth library that is intended for use in designing the particular type of dental appliance. The augmented digital tooth library can therefore be stored with a plurality of different predefined color, texture, and/or transparency properties so that the augmented digital tooth library can be easily and efficiently stored and then retrieved for designing any type of dental appliance.
[0292] The computer system 1522 can return the augmented digital tooth libraries by storing them in the augmented digital tooth libraries data store 1502 (block H, 1544). As described herein, each augmented digital tooth library can have multiple possible zones where different color, texture, and/or transparency schemes can be applied. Each library may also have preset colors, textures, and/or transparency levels for each zone. Each library can be stored with default color, texture, and/or transparency values that can be modified later during runtime generation of dental appliances using digital tooth libraries. That way, the digital tooth libraries can be preloaded and preprocessed without having to process the libraries and add color, texture, and/or transparency values at runtime generation of dental appliances.
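Storing defaults that can be overridden at runtime, as described above, might look like a shallow merge of a small per-design override map over the stored per-zone defaults, leaving the stored library itself untouched. Keys and values here are illustrative assumptions.

```python
# Stored per-zone defaults (illustrative values) plus runtime overrides.
stored_defaults = {
    "incisal": {"rgba": (254, 252, 246, 0.5)},
    "body": {"rgba": (250, 244, 228, 0.8)},
}

def apply_overrides(defaults, overrides):
    """Copy the stored defaults, then lay per-design overrides on top."""
    merged = {zone: dict(props) for zone, props in defaults.items()}
    for zone, props in overrides.items():
        merged.setdefault(zone, {}).update(props)
    return merged

# At runtime, only the changed zone is touched; the stored library keeps
# its defaults for the next design that loads it.
runtime = apply_overrides(stored_defaults, {"body": {"rgba": (246, 238, 218, 0.8)}})
```

This keeps the heavy preprocessing (layer generation, default assignment) out of the runtime path, as the paragraph suggests.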
[0293] Sometimes, the augmented digital tooth libraries can be stored individually (e.g., per tooth) and/or in blocks or quadrants. Storing the tooth libraries in blocks or quadrants can advantageously speed up placement and positioning of the teeth in a digital dental model for designing dental appliances.
[0294] The digital tooth libraries described herein can be stored as 3MF files. Sometimes, the digital tooth libraries can be STL files that are converted and output as 3MF files. The 3MF files can advantageously store color, texture, and transparency data in relation with digital tooth libraries and/or digital dental models. One or more other file formats can also be realized and used, as described herein.
[0295] FIG. 15C illustrates an example system 1550 for generating fabrication instructions and/or digital teeth graphics using augmented digital tooth libraries. An augmented digital tooth design and fabrication system 1554 can communicate (e.g., wired, wirelessly) with the augmented digital tooth libraries data store 1502, the gingiva models and rulesets data store 1528, a fabrication device modeling and rulesets data store 1552, a dental appliance fabrication device 1558, a display device 1556, and/or a digital dental appliance design system 1560.
[0296] In brief, the augmented digital tooth design and fabrication system 1554 can be any type of computing system described herein. The computer system 1554 can be configured to generate fabrication instructions and/or graphical representations of digital teeth setups using augmented digital tooth libraries.
[0297] The fabrication device modeling and rulesets data store 1552 can be configured to store rules for generating dental appliance fabrication instructions. Each fabrication device 1558 can have different rules for processing and interpreting digital dental models and digital teeth setups. Each fabrication device 1558 can produce dental appliances with different types of equipment, machinery, inks/dyes, and/or file formats. Thus, the rules in the data store 1552 can be used by the computer system 1554 to generate instructions for a particular fabrication device to manufacture a particular dental appliance that is designed using the disclosed techniques.
[0298] The dental appliance fabrication device 1558 can be any type of manufacturing, printing, and/or 3D printing machine or system. Refer to the rapid fabrication machine 119 described in at least FIG. 1A for further discussion.
[0299] The display device 1556 can be any type of user device, computing device, LCD screen, other type of screen, touch screen, etc. that can be configured to present information to a relevant user. The display device 1556 can be part of any one or more computing systems described herein. For example, the display device 1556 can be part of the computer system 152 described in reference to at least FIG. 1A.
[0300] The digital dental appliance design system 1560 can be configured to design a dental appliance using a digital dental model and tooth scan data for a particular patient. The computer system 1560 can be the same as or similar to the denture design system 116 described in reference to at least FIG. 1A. In some implementations, the computer system 1560 can be the same as or similar to the computer system 152 described in reference to at least FIG. 1A.
[0301] Still referring to FIG. 15C, the augmented digital tooth design and fabrication system 1554 can receive a tooth library selection in block A (1562) from the digital dental appliance design system 1560. Additionally, the computer system 1554 can receive a dental appliance setup from the design system 1560 (block B, 1564). The tooth library selection and the setup can indicate a desired setup for the dental appliance design for a particular patient.
[0302] The computer system 1554 can retrieve the selected library from the augmented digital tooth libraries data store 1502 in block C (1566). The selected library can include individual augmented digital tooth libraries (e.g., each library is for an individual tooth). The selected library can include a block, group, set, and/or arch of teeth. The computer system 1554 can retrieve any of the augmented tooth libraries that have been generated as described in reference to FIG. 15B.
[0303] The computer system 1554 can retrieve one or more rulesets from the gingiva models and rulesets data store 1528 and/or the fabrication device modeling and rulesets data store 1552 in block D (1568). Sometimes, the computer system 1554 can retrieve rulesets from the data store 1528 and the data store 1552 at different times. Refer to FIG. 15B for further discussion about the different rulesets and data stores.
[0304] In block E, the computer system 1554 can apply the dental appliance setup to the selected library (1570). For example, the computer system 1554 can generate a digital dental model using any of the techniques described herein. Refer to at least FIGs. 1A, 1B, 9, 10A, and 10B for further discussion.
[0305] The computer system 1554 can adjust relative positioning of meshes of the teeth in the digital dental model based on mesh reference points in block F (1572). The teeth can be adjusted (e.g., moved, enlarged, shrunken) to better fit the digital dental model and/or achieve a desired appearance for a resulting dental appliance. As the teeth are adjusted, their corresponding meshes may be automatically moved within each other and based on the reference points. The reference points can be identified for particular types of teeth, such as shown in FIG. 17 with teeth morphologies. The reference points can be retrieved from the gingiva models and rulesets data store 1528. The reference points can indicate relative distances and/or relationships between different colored/textured shades, layers, and/or zones that are applied to the teeth. The reference points can be used as guides to move and align the teeth, permitting the teeth to be properly arranged and sized relative to each other in the teeth setup/digital dental model.
[0306] Each mesh, as shown and described in reference to FIGs. 16 and 18, has relative positioning next to the other meshes, which can be used to determine how the meshes are moved as the tooth or teeth are also moved. Therefore, gingiva and respective festooning can travel with the teeth as the teeth are adjusted. Refer to at least FIG. 9 for further discussion about adjusting positioning of the meshes of the teeth.
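As a minimal sketch of how layered meshes might travel together with a tooth during setup adjustment, the function below applies one rigid offset to every layer mesh so that gingiva, roots, and festooning stay aligned with the tooth. The dictionary-of-arrays mesh representation, the layer names, and the reference-point convention are illustrative assumptions rather than the system's actual data model.

```python
import numpy as np

def move_tooth(layers, reference_point, target_point):
    """Translate every mesh layer of a tooth by the same offset so the
    layers (enamel, dentin, gingiva, etc.) travel together when the
    tooth is repositioned in the digital dental model.

    layers: dict mapping layer name -> (N, 3) array of vertex positions.
    reference_point / target_point: 3-vectors; the tooth's reference
    point is moved onto the target location given by the teeth setup.
    """
    offset = np.asarray(target_point, float) - np.asarray(reference_point, float)
    return {name: np.asarray(verts, float) + offset for name, verts in layers.items()}
```

Because every layer receives the same offset, the relative distances between layers (e.g., between an enamel vertex and the dentin vertex inside it) are preserved exactly, which is the property the reference points are meant to guarantee.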
[0307] The computer system 1554 can also generate portions of gingiva and/or archways that may be missing from the teeth libraries in the digital dental model (block G, 1576). When the teeth are arranged adjacent/next to each other in the digital dental model, gaps or missing portions of the gingiva and/or archway may appear in the model. The computer system 1554 can apply rules, algorithms, and/or machine learning models to fill in the gaps with portions of gingiva and/or archway to make the teeth appear cohesive and part of one arch form, bridge, or gingiva. As part of generating the portions of the gingiva, the computer system 1554 can apply one or more rules to make an appearance of the gingiva more realistic, as described further in reference to at least FIGs. 11A, 11B, and 15B.
[0308] Optionally, the computer system 1554 may generate specific fabrication material, color, and/or printing instructions based on the teeth setup and the retrieved rules for the particular fabrication device 1558 (block H, 1578). Based on specific colors, textures, and/or transparencies of the teeth and gingiva in the teeth setup, the computer system 1554 can generate instructions to set up the particular fabrication device 1558 used to manufacture the dental appliance according to the teeth setup.
[0309] Optionally, the computer system 1554 may generate a graphical representation of the teeth setup (block I, 1580). The graphical representation can indicate how the teeth setup may appear as the dental appliance. The graphical representation can indicate how the teeth setup may visually look in the patient’s mouth and relative to the rest of their physical appearance and attributes.
[0310] The computer system 1554 can return the completed augmented teeth setup in block J (1582). For example, the computer system 1554 can optionally return fabrication instructions 1584 to the dental appliance fabrication device 1558 (with or without the augmented teeth setup and/or the graphical representation of the teeth setup). The instructions can indicate what colors, textures, and/or transparency properties are mapped to which portions of the teeth setup. The instructions can indicate what types of material to use for printing and the color values that can be applied to each of the materials. The instructions can beneficially help ensure consistency across fabrication of similar dental appliances by the particular fabrication device 1558.
[0311] The fabrication device 1558, as described herein (refer to the rapid fabrication machine 119 in at least FIG. 1A) can be configured to manufacture, fabricate, produce, and/or 3D print different types of dental appliances using different types of materials, inks, dyes, etc. Some fabrication devices 1558 can be configured to use light features to produce evenly-colored 3D printed objects at a quality level of traditional 2D printers. For example, some 3D printers can use a binding agent to fuse plastic powder locally. Color can then be applied at a level of 3D printing called a ‘voxel,’ which can be a 3D pixel. As a result, color 3D printing can be achieved for complex objects, including but not limited to dental appliances such as dentures, crowns, bridges, and/or caps. Some fabrication devices 1558 can use filaments that already contain color for 3D printing while other fabrication devices can apply color from one or more external sources for 3D printing.
[0312] The fabrication devices 1558 with direct color 3D printing can use colorful filaments to 3D print dental appliances, which can work with fused deposition modeling (FDM) technology (a process for making physical objects by building up successive layers of material). FDM printing can include multi-jet and/or poly-jet printing. Other printing techniques may also be used with the disclosed techniques.
[0313] Bright colors and details can be achieved with direct color 3D printing, depending on the 3D printer and quality of filament that is used. Sometimes, colors may not be mixed with direct color 3D printing techniques. 3D color printing can be achieved even if the FDM printer has a single extruder. For example, when the 3D files for the dental appliance are sent to the printer, the fabrication instructions can set multiple tasks and a g-code with instructions for the 3D printer to stop at certain levels of printing. When the 3D printer stops, the filament can be switched out for a different color and then the printing job can be restarted/continued. In some implementations, a dual extruder can be used, which can print with 2 or more colors of filament. Some 3D printers can have additional extruders (e.g., 3, 4, 5, 8, etc.).
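The stop-and-swap workflow for single-extruder color printing described above can be sketched as a post-processing pass over sliced g-code that injects a pause before each layer where a new filament color begins. The `;LAYER:` comment convention (used by slicers such as Cura) and the Marlin firmware's `M600` filament-change command are assumptions about the toolchain; other slicers and firmwares use different markers and pause commands.

```python
def insert_color_changes(gcode_lines, change_layers):
    """Insert a filament-change pause (Marlin's M600) immediately before
    each layer at which the printed dental appliance switches color, so a
    single-extruder FDM printer can stop for a manual filament swap.

    gcode_lines: list of g-code lines from the slicer.
    change_layers: set of 0-based layer indices at which a new color starts.
    """
    out = []
    for line in gcode_lines:
        if line.startswith(";LAYER:"):
            layer = int(line.split(":")[1])
            if layer in change_layers:
                out.append("M600 ; pause for filament color change")
        out.append(line)
    return out
```

For a denture print, the change layers would typically sit at the boundary between the pink gingiva-colored base and the tooth-colored portions of the model.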
[0314] The fabrication devices 1558 can perform indirect color 3D printing by applying color from an external source during the printing process. This technology can be more precise and can allow for a more realistic appearance of the 3D prints. To perform indirect color 3D printing, the printing instructions can include color and texture information. When a layer of material is spread during printing, the printer can be instructed to apply color that adheres to the layer. A next layer can be spread and then the process can repeat. In some implementations, between layers, color can be applied and cured with UV light, which allows for adjusting transparency of the colors or other properties of each layer in the dental appliance that is being 3D printed.
[0315] As another example of returning information in block J (1582), the computer system 1554 can optionally return a graphical representation 1586 of the augmented teeth setup to the display device 1556 (with or without the augmented teeth setup and/or the fabrication instructions). The display device 1556 can output or otherwise present the graphical representation in one or more graphical user interface (GUI) displays. The outputted graphical representation can be used to show the patient and/or dentist or other relevant user what the dental appliance may look like and/or how the teeth setup may look with the patient’s physical features. For example, the graphical representation can provide a realistic view of colors, textures, and/or transparency of the teeth setup.
[0316] The relevant user can make adjustments to the outputted teeth setup, which can be transmitted to any of the computing systems described herein (e.g., the denture design system 116 in FIG. 1 A). The user adjustments can be used by the receiving computing system to modify the teeth setup for the particular patient’s dental appliance design. Then, the adjusted teeth setup can be presented back at the display device 1556 and/or used by the computer system 1554 to generate fabrication instructions for the fabrication device 1558.
[0317] FIG. 16 illustrates example digital teeth 1600, 1602, 1604, 1606, and 1608 with layers having different color, texture, and/or transparency properties. The central incisor 1600 represents how the tooth 1600 with corresponding gingiva may appear in comparison to teeth 1610A-N from other tooth libraries that do not contain properties such as color, texture, and/or transparency. The teeth 1610A-N do not have roots, nor do they have gingiva structures or layers. The teeth 1610A-N each have a single mesh. Using the disclosed techniques (refer to FIGs. 15A, 15B, and 15C), all of the teeth 1610A-N can be modified like the tooth 1600 to include various layers having different color, texture, and/or transparency properties. The tooth 1600 can be generated using the disclosed techniques, which can include attaching a tissue-side scan to a bottom of a digital tooth library corresponding to the tooth 1600.
[0318] In some implementations, a gingiva component of the tooth 1600 could be present with or without a tooth root behind it. For example, for some fabrication devices, such as printers or 3D printing machines, and for some fabrication strategies, it can be preferred to not have the root behind the gingiva. On the other hand, for some fabrication devices and/or fabrication strategies, it can be preferred to have a white color of the root reflecting at least partially through a pink color of the gingiva component.
[0319] In addition to having the gingiva component saved and shaped in the anatomical library for the tooth 1600, the tooth and gingiva components may each have several layers; that is, each of the tooth and gingiva components can be made up of multiple meshes on top of or inside each other to provide a realistic appearance and coloring when fabricating (e.g., 3D printing) a resulting dental appliance.
[0320] As shown in FIG. 16, each of the teeth 1602, 1604, 1606, and 1608 has different layers of both tooth and gingiva components. For example, the tooth 1602 is depicted as having only outside layers for the tooth and gingiva components. The tooth 1604 is depicted as having a root and a simple reduced layer of the tooth 1604 inside the gingival mesh. The tooth 1606 is depicted as having all layers of each of the tooth and gingiva components with full transparency for each layer.
[0321] The tooth 1608 provides an illustrative example of the tooth having 2 layers and the gingiva component having only 1 layer. Here, a second tooth layer can be approximately 95% the size of the first layer for the tooth 1608. In some implementations, there can be 0 dimensions between any of the layers for the tooth and/or the gingiva components.
[0322] In some implementations, additional layers may be required to produce an appropriate color depth and/or detail for a desired outcome, dental appearance, dental standards, etc. Refer to FIG. 18 for additional discussion about the various layers that can be determined and applied to a tooth and/or gingiva component.
[0323] FIG. 17 illustrates example morphology that can be sculpted on inside layers of different types of digital teeth. An MX central incisor can have a morphology 1700. An MX lateral incisor can have an example morphology 1702. An MX canine can have an example morphology 1704. Similarly, an MD central incisor, an MD lateral incisor, an MD canine, premolars, and molars may all have different morphology that can be custom sculpted on an inside layer of the tooth. The morphologies can be custom sculpted using one or more techniques described herein. Each morphology can be different to provide a realistic appearance of color, texture, and/or transparency depending on features of the corresponding tooth (e.g., shape, location in the mouth, visibility when a person smiles or opens their mouth).
[0324] FIG. 18 illustrates an example tooth 1800 with layers having different amounts of opacity, translucency, and/or transparency. As described herein, any tooth can have one or more layers. Sometimes, a tooth can have 1 layer. A tooth can also have 2 layers. In some implementations, a tooth can have 4 layers. In the example of FIG. 18, the tooth has 5 layers. Each of the 5 layers can be in direct proportion to an outermost layer of the tooth 1800. Each layer can also be a percentage of the outermost layer (e.g., 99.9-97.5%, 95%, 90%, 80%, etc.). An innermost layer may be called a stump shade layer. The innermost layer can represent a root layer and can sometimes, as shown in FIG. 15A, move up into a portion of gingiva for the tooth. A layer that is next, or one larger than the innermost layer, can be considered a morphology layer. A next larger layer can be a dentin layer, then a next can be an enamel layer, and a next or largest layer can be a gloss/effects layer. Each of the layers can have different degrees of transparency; in other words, each layer may not just be translucent or just opaque.
The dentin layer, for example, can be partially translucent, as defined according to one or more dentin layer rules and/or dentistry standards.
[0325] In some implementations, such as for designing dentures, a 6th layer can be added to the tooth 1800, which can be a gloss/glazing layer that aids in providing a sheen to the outermost surface of the tooth 1800. The gloss/glazing layer can advantageously reduce a polishing time and can provide a desirable polish. The 6th layer can also have an option to incorporate a text box where the patient’s name or identification number can be applied during design of the patient’s dental appliance. In some implementations, the gloss/glazing layer can be approximately 25-800 microns thick.
[0326] One or more inner layers of the library tooth 1800 may have custom shaping to represent a correct tooth morphology for that specific tooth in the arch. The shaping may provide background texture and coloring. The shaping can also aid in reflecting light back out of the tooth 1800 in different directions. The diffusion of light can be essential or otherwise desired to make realistic dental objects during design and then produce a realistic 3D printed object. Accordingly, some teeth may have more than one morphology layer. Sometimes, the shapes may not be uniform between and amongst each of the morphology layers.
[0327] The example tooth 1800 shows differing amounts of translucency per one or more layers. Each layer and/or group of layers can have a different amount of opacity, translucency, and/or transparency. The outermost gloss/glazing layer can be considered a transparent outer layer 1802. The transparent outer layer 1802 can be completely transparent. In some implementations, the transparent outer layer 1802 can have a predetermined amount of transparency that is a little less than completely transparent (e.g., 99.5% transparent, 98.7%, 98.5%, 97%, 95%, etc.).
[0328] A second layer can be a transparent enamel. A third layer can be a translucent enamel. In some implementations, the second and/or third layers can be considered translucent layers 1804. A fourth layer can be a more opaque dentin layer. A fifth layer can be the opaque stump, as described herein, which can be considered an opaque inner layer 1806. Varying percentages of translucency or opaqueness can be used for each of the layers described herein. The percentages of translucency or opaqueness can be defined by a relevant user, such as a dentist preparing/designing dental appliances for the patient, and/or according to rules, standards, or patient expectations/preferences for designing the particular dental appliances. In addition, additional or fewer layers can be added for more or less detail. Regardless of how many layers are part of the tooth 1800, the outermost layers can be transparent (e.g., completely or near-completely transparent), with successive layers moving through varying states, degrees, levels, and/or percentages of translucency until an innermost layer. The innermost layer can be completely opaque or near-completely opaque.
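As an illustration of proportional layer sizing with graded opacity, the sketch below derives inner layers by scaling an outermost mesh about its centroid, so each layer stays in direct proportion to the outer one. The specific layer names, scale factors, and opacity values are illustrative assumptions, not values prescribed by the disclosure.

```python
import numpy as np

# Five-layer schedule, outermost to innermost: gloss fully transparent,
# enamel mostly transparent, dentin partially translucent, stump opaque.
# These (name, scale vs. outer layer, opacity) triples are assumptions.
LAYER_SPEC = [
    ("gloss",      1.00, 0.00),
    ("enamel",     0.95, 0.10),
    ("dentin",     0.90, 0.60),
    ("morphology", 0.85, 0.80),
    ("stump",      0.80, 1.00),
]

def build_layers(outer_vertices):
    """Derive inner layers by uniformly scaling the outermost mesh about
    its centroid, pairing each derived layer with an opacity value."""
    outer = np.asarray(outer_vertices, float)
    centroid = outer.mean(axis=0)
    return {
        name: {"vertices": centroid + scale * (outer - centroid),
               "opacity": opacity}
        for name, scale, opacity in LAYER_SPEC
    }
```

Because every inner layer is a fixed percentage of the outermost mesh, later edits such as scaling or sculpting the outer surface can be re-applied through the same proportions, preserving the library's intended layered effect.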
[0329] In some implementations, changes in scaling, brushing, and/or sculpting surface layers in a tooth design can be carried proportionally to other layers of the tooth. Therefore, an intended effect of the tooth library can remain, despite being altered for designing a particular patient’s dental appliances. Sometimes, a bottom of a library mesh may be open to aid in connecting the library mesh with an object it is intended to be joined to. For dentures, for example, the object to connect to can be a tissue side scan, which can then become a tissue side of the dentures. As another example, for crowns, the object to connect to can be a tooth preparation scan. The layered library techniques described herein can be relevant and applicable for designing all different types of dental appliances, including but not limited to dentures, implants, crowns, bridges, and other types of dental prostheses.
[0330] FIG. 19 illustrates a tooth cross section 1900 having mesh layers 1902A-N that can be exported to a 3D color printer. Triangle vertices of the mesh layers 1902A-N can be assigned color and/or opacity values using the disclosed techniques. For 3D printing, the mesh layers 1902A-N can be exported, using a computer system described herein, to a 3D color printer in one or more different file formats, including but not limited to a 3MF file.
[0331] The tooth cross section 1900 is represented in a voxel volume 1904. The voxel volume 1904 can be a 3D voxel volume. The voxel volume 1904 includes a layer of voxels 1906A-N. The layer of voxels 1906A-N can be a single layer. The layer of voxels 1906A-N can be a 2D layer of voxels within the voxel volume 1904. Some 3D printers may accept voxel volume files, which can allow for the disclosed technology to set each voxel in the voxel volume 1904 independently. Setting each voxel in the volume 1904 independently can provide graduated color and/or opacity changes in addition to one or more textures (e.g., speckling, other realistic details) to the 3D-printed tooth. Therefore, the color and/or opacity values that are assigned to the mesh layers 1902A-N of the tooth can be exported and applied in 3D printing of that tooth.
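A simplified sketch of setting each voxel in a volume independently from layered color/opacity data follows. It idealizes the tooth layers as concentric spheres for brevity, so the geometry is an assumption; rasterizing real mesh layers would require point-in-mesh tests, but the inner-layer-overwrites-outer-layer painting order is the same.

```python
import numpy as np

def voxelize_layers(radii_colors, grid_size=32, extent=1.0):
    """Rasterize concentric layer shells into a 3D voxel volume, setting
    each voxel independently to the RGBA of the innermost layer that
    contains it (voxels outside all layers stay zero/empty).

    radii_colors: list of (radius, rgba) pairs, one per layer.
    Returns an RGBA volume of shape (grid, grid, grid, 4).
    """
    coords = np.linspace(-extent, extent, grid_size)
    x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
    distance = np.sqrt(x**2 + y**2 + z**2)
    volume = np.zeros((grid_size, grid_size, grid_size, 4))
    # Paint the largest (outermost) layer first so inner layers overwrite it.
    for radius, rgba in sorted(radii_colors, key=lambda rc: -rc[0]):
        volume[distance <= radius] = rgba
    return volume
```

Per-voxel noise or speckling could be added on top of this base fill to reproduce the textures mentioned above before the volume is written out in a printer-accepted voxel file format.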
[0332] While this specification contains many specific implementation details, these should not be construed as limitations on the scope of the disclosed technology or of what may be claimed, but rather as descriptions of features that may be specific to particular implementations of particular disclosed technologies. Certain features that are described in this specification in the context of separate implementations can also be implemented in combination in a single implementation in part or in whole. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination. Moreover, although features may be described herein as acting in certain combinations and/or initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. Similarly, while operations may be described in a particular order, this should not be understood as requiring that such operations be performed in the particular order or in sequential order, or that all operations be performed, to achieve desirable results. Particular implementations of the subject matter have been described. Other implementations are within the scope of the following claims.

Claims

WHAT IS CLAIMED IS:
1. A method for modeling a patient’s mouth with color data, the method comprising: receiving, by a computer system, oral scan data for a patient, wherein the oral scan data includes at least one of (i) a dental impression generated by a dental impression station, (ii) image data of the patient’s mouth generated by an image capture system, and (iii) motion data of the patient’s jaw movement generated by a motion capture system; identifying, by the computer system and for at least one tooth represented in the oral scan data for the patient, a first zone, a second zone, and a third zone, wherein the first, second, and third zones are non-overlapping zones; identifying, by the computer system, a statistical color value for each identified zone for the at least one tooth; and generating, by the computer system, a digital dental model for the patient based on mapping the identified statistical color value for each identified zone for the at least one tooth to corresponding zones for at least one tooth represented in the digital dental model.
2. The method of claim 1, further comprising generating, by the computer system, a digital denture model for the patient based on the digital dental model having the mapped color values.
3. The method of claim 2, further comprising transmitting, by the computer system to a rapid fabrication machine, instructions that, when executed by the rapid fabrication machine, cause the rapid fabrication machine to manufacture dentures based on the digital denture model for the patient.
4. The method of claim 3, wherein the instructions include at least one data file having information about the mapped color values for printing the dentures in corresponding dental colors.
5. The method of claim 1, further comprising: identifying, by the computer system and for each identified zone for the at least one tooth, at least one layer; and wherein identifying, by the computer system, a statistical color value for each identified zone for the at least one tooth comprises identifying a statistical color value for the at least one layer for the identified zone.
6. The method of claim 1, further comprising assigning, by the computer system, the statistical color value for each identified zone to a predetermined dental color.
7. The method of claim 1, further comprising adjusting, by the computer system, the statistical color value for at least one of the identified zones for the at least one tooth.
8. The method of claim 7, wherein adjusting, by the computer system, the statistical color value comprises blending the statistical color value across at least two of the identified zones.
9. The method of claim 1, further comprising simulating, by the computer system, raytracing of the digital dental model for the patient to identify deviations in at least one of the statistical color values that exceeds a threshold color value.
10. The method of claim 9, further comprising: adjusting, by the computer system, the at least one of the statistical color values based on the identified deviation.
11. The method of claim 1, further comprising: identifying, by the computer system and for the at least one tooth, a tooth type; retrieving, by the computer system from a data store, texture data associated with the identified tooth type; and assigning, by the computer system, the texture data to at least one zone for the at least one tooth.
12. The method of claim 11, wherein the at least one zone for the at least one tooth is the second zone.
13. The method of claim 1, wherein identifying, by the computer system and for at least one tooth represented in the oral scan data for the patient, a first zone, a second zone, and a third zone comprises identifying annotations for library teeth in the oral scan data for the patient.
14. The method of claim 1, wherein identifying, by the computer system, a statistical color value for each identified zone for the at least one tooth comprises: mapping a first color value representing a band of chroma to the first zone, wherein the first zone represents a cervical third of the at least one tooth and the first color value is mapped to the first zone using annotations in the oral scan data that indicate halo sockets as reference points.
15. The method of claim 14, further comprising: adjusting a dominance level of the first color value to exceed a threshold level of dominance.
16. The method of claim 1, wherein identifying, by the computer system, a statistical color value for each identified zone for the at least one tooth comprises: mapping a second color value to the second zone, wherein the second zone represents a middle third of the at least one tooth.
17. The method of claim 16, further comprising: adjusting a level of opacity of the mapped second color value to exceed a threshold level of opacity.
18. The method of claim 16, wherein the second color value is at least one of a yellow shade and a white shade.
19. The method of claim 1, wherein identifying, by the computer system, a statistical color value for each identified zone for the at least one tooth comprises: mapping a third color value to the third zone, wherein the third zone represents an incisal third of the at least one tooth.
20. The method of claim 19, further comprising: adjusting a level of translucency of the third color value to exceed a threshold level of translucency.
21. The method of claim 19, wherein mapping a third color value to the third zone comprises mapping the third color value as a color band across an annotated incisal third of the at least one tooth in the oral scan data, wherein the incisal third of the at least one tooth is annotated differently based on a tooth type.
22. The method of claim 19, wherein mapping a third color value to the third zone comprises mapping the third color value onto an annotated incisal third of the at least one tooth based on landmarks in the oral scan data that indicate IP contacts.
23. The method of claim 1, further comprising adjusting, by the computer system, a gradient of the statistical color values across the first, second, and third zones.
24. The method of claim 1, further comprising: performing, by the computer system, a color calibration process on the oral scan data.
25. The method of claim 1, further comprising: identifying, by the computer system, at least one zone across multiple teeth in the oral scan data, wherein the multiple teeth in the oral scan data are a threshold distance from each other along a dental arch of the patient.
26. The method of claim 1, wherein the statistical color value, for each zone, is an average of color values identified for the zone.
27. The method of claim 1, wherein the statistical color value, for each zone, is a mean color value determined from a plurality of color values identified for the zone.
28. The method of claim 1, wherein the statistical color value, for each zone, is a summation of a plurality of color values identified for the zone.
29. The method of claim 1, further comprising: identifying, by the computer system and for a gingiva represented in the oral scan data for the patient, at least one zone; and identifying, by the computer system, a statistical color value for the at least one zone for the gingiva.
30. The method of claim 29, further comprising: identifying, by the computer system, a texture value for the at least one zone for the gingiva.
31. The method of claim 29, wherein the at least one zone includes a first zone, a second zone, and a third zone, wherein the first, second, and third zones for the gingiva are nonoverlapping zones.
32. The method of claim 31, further comprising: mapping, by the computer system, at least one of a first color and a first texture to an annotated portion of the gingiva in the oral scan data that corresponds to socket halos.
33. The method of claim 32, wherein the first color is a top surface color for the gingiva and the first texture is a top surface texture for the gingiva.
34. The method of claim 31, further comprising: mapping, by the computer system, at least one of a second color and a second texture to an annotated portion of the gingiva in the oral scan data that corresponds to each root eminence in the gingiva.
35. The method of claim 34, wherein the second color is a middle surface color for the gingiva and the second texture is a middle surface texture for the gingiva.
36. The method of claim 1, further comprising: mapping, by the computer system, at least one of a third color and a third texture to an annotated portion of the gingiva in the oral scan data that corresponds to portions of the gingiva between annotations.
37. The method of claim 36, wherein the third color is a bottom surface color for the gingiva and the third texture is a bottom surface texture for the gingiva.
38. A method for modeling a patient’s mouth with color data, the method comprising: receiving, by a computer system, oral scan data for a patient, wherein the oral scan data includes at least one of (i) a dental impression generated by a dental impression station, (ii) image data of the patient’s mouth generated by an image capture system, and (iii) motion data of the patient’s jaw movement generated by a motion capture system; identifying, by the computer system and for at least a portion of teeth represented in the oral scan data for the patient, at least one zone; identifying, by the computer system, a statistical color value for the at least one zone for the portion of teeth; mapping, by the computer system, the statistical color value to a predetermined dental color value for teeth; generating, by the computer system, a digital dental model for the patient based at least in part on the oral scan data for the patient and the mapped predetermined dental color value for the teeth; and transmitting, by the computer system to a rapid fabrication machine, data representative of the digital dental model with the mapped predetermined dental color value for the teeth, wherein the rapid fabrication machine is configured to manufacture at least a portion of dentures for the patient based on the transmitted data.
39. A method for modeling a patient’s mouth with color data, the method comprising: receiving, by a computer system, oral scan data for a patient, wherein the oral scan data includes at least one of (i) a dental impression generated by a dental impression station, (ii) image data of the patient’s mouth generated by an image capture system, and (iii) motion data of the patient’s jaw movement generated by a motion capture system; identifying, by the computer system and for at least a portion of gingiva represented in the oral scan data for the patient, at least one zone; identifying, by the computer system, a statistical color value for the at least one zone for the portion of gingiva; mapping, by the computer system, the statistical color value to a predetermined dental color value for gingiva; generating, by the computer system, a digital dental model for the patient based at least in part on the oral scan data for the patient and the mapped predetermined dental color value for the gingiva; and transmitting, by the computer system to a rapid fabrication machine, data representative of the digital dental model with the mapped predetermined dental color value for the gingiva, wherein the rapid fabrication machine is configured to manufacture at least a portion of dentures for the patient based on the transmitted data.
40. A method for modeling a patient’s mouth with color data, the method comprising: receiving, by a computer system, oral scan data for a patient, wherein the oral scan data includes at least one of (i) a dental impression generated by a dental impression station, (ii) image data of the patient’s mouth generated by an image capture system, and (iii) motion data of the patient’s jaw movement generated by a motion capture system; identifying, by the computer system, (i) at least one zone for at least a portion of teeth represented in the oral scan data for the patient and (ii) at least one zone for at least a portion of gingiva represented in the oral scan data for the patient; identifying, by the computer system, (i) at least one statistical color value for the at least one zone for the portion of teeth and (ii) at least one statistical color value for the at least one zone for the portion of gingiva; generating, by the computer system, a digital dental model for the patient based at least in part on the oral scan data for the patient and the identified statistical color values; and transmitting, by the computer system to a rapid fabrication machine, data representative of the digital dental model with the identified statistical color values, wherein the rapid fabrication machine is configured to manufacture at least a portion of dentures for the patient based on the transmitted data.
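Claims 38–40 recite identifying a statistical color value for each zone and mapping it to a predetermined dental color value. The following minimal sketch illustrates one plausible realization of that mapping (a mean-RGB zone statistic matched to its nearest entry in a shade table); the function names and the shade RGB values are illustrative assumptions, not part of the claimed system:

```python
# Illustrative sketch: derive a statistical color value for one zone of a
# tooth in the oral scan data and map it to the nearest predetermined
# dental color value. Shade names and RGB values are hypothetical.
from statistics import mean

# Hypothetical predetermined dental color values (shade name -> RGB).
SHADE_GUIDE = {
    "A1": (247, 241, 227),
    "A2": (240, 229, 207),
    "A3": (233, 218, 190),
    "B1": (249, 245, 234),
}

def zone_statistical_color(zone_pixels):
    """Mean RGB over the pixels sampled from one zone of the scan."""
    rs, gs, bs = zip(*zone_pixels)
    return (mean(rs), mean(gs), mean(bs))

def map_to_shade(color):
    """Pick the predetermined shade closest (in RGB space) to the
    statistical color value for the zone."""
    def dist(shade_rgb):
        return sum((a - b) ** 2 for a, b in zip(color, shade_rgb))
    return min(SHADE_GUIDE, key=lambda name: dist(SHADE_GUIDE[name]))

pixels = [(240, 228, 206), (242, 231, 209), (238, 227, 205)]
print(map_to_shade(zone_statistical_color(pixels)))  # prints "A2"
```

In a full pipeline this per-zone shade would then be attached to the digital dental model before it is transmitted to the rapid fabrication machine.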
41. The method of one or more of claims 1-40, wherein the statistical color value is selected from among a plurality of predetermined standard dental colors that are part of a color tooth library.
42. The method of claim 41, wherein the statistical color value is selected for each of the identified zones for the at least one tooth.
43. The method of claim 41 or 42, wherein the statistical color is selected as a predetermined standard dental color from among the plurality of predetermined standard dental colors that most closely matches one or more color values derived from the oral scan data.
44. The method of claim 41, 42, or 43, wherein the color tooth library is specific to one or more individual teeth.
45. The method of claim 41, 42, or 43, wherein the color tooth library is for a full denture library.
46. The method of claim 41, 42, or 43, wherein the color tooth library is archived in a data store and retrieved, by the computer system, for generating digital dental models for one or more patients.
47. A method for augmenting a digital tooth library with one or more properties, the method comprising: accessing, by an augmentation computer system from a non-augmented digital tooth libraries data store, a non-augmented digital tooth library; retrieving, by the augmentation computer system from at least one models and rulesets data store, models and rulesets for tooth layers and gingiva; generating, by the augmentation computer system, 3D layers for the non-augmented digital tooth library based on applying a first portion of the retrieved models and rulesets to the non-augmented digital tooth library, wherein the first portion corresponds to a first subset of the models and rulesets for teeth; generating, by the augmentation computer system, 3D layers for gingiva for the non-augmented digital tooth library based on applying a second portion of the retrieved models and rulesets to the non-augmented digital tooth library, wherein the second portion corresponds to a second subset of the models and rulesets for gingiva; generating, by the augmentation computer system, values for one or more properties for each of the 3D layers of the non-augmented digital tooth library and the gingiva, wherein the one or more properties include at least one of color, texture, and transparency properties; generating, by the augmentation computer system, an augmented digital tooth library that comprises the 3D layers for the non-augmented digital tooth library and the gingiva and the generated values for the one or more properties for each of the 3D layers; and returning, by the augmentation computer system, the augmented digital tooth library.
48. The method of claim 47, wherein generating, by the augmentation computer system, 3D layers for the non-augmented digital tooth library comprises generating: an opaque innermost layer, at least one translucent layer that is larger than the opaque innermost layer and overlays at least a portion of the opaque innermost layer, and at least one transparent outermost layer that is larger than the at least one translucent layer and overlays at least a portion of the opaque innermost layer and the at least one translucent layer, wherein the at least one translucent layer comprises a translucency value that is greater than a translucency value of the opaque innermost layer but less than a translucency value of the at least one transparent outermost layer.
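Claim 48 specifies a layer stack whose translucency strictly increases from the opaque innermost layer, through the translucent layer(s), to the transparent outermost layer, with each outer layer larger than and overlaying the ones beneath it. A minimal sketch of that ordering constraint (the data structure, layer names, and numeric values are hypothetical, chosen only to illustrate the relationship the claim describes):

```python
# Illustrative sketch of the layer ordering in claim 48: translucency and
# size both increase from the innermost layer outward. All names and
# values are hypothetical.
from dataclasses import dataclass

@dataclass
class ToothLayer:
    name: str
    translucency: float  # 0.0 = fully opaque, 1.0 = fully transparent
    scale: float         # relative size; outer layers overlay inner ones

def valid_layer_stack(layers):
    """Check that each successive layer is both more translucent and
    larger than the layer it overlays, as claim 48 requires."""
    return all(
        inner.translucency < outer.translucency and inner.scale < outer.scale
        for inner, outer in zip(layers, layers[1:])
    )

stack = [
    ToothLayer("dentin core", 0.05, 1.00),   # opaque innermost layer
    ToothLayer("body enamel", 0.40, 1.05),   # translucent middle layer
    ToothLayer("outer enamel", 0.85, 1.10),  # transparent outermost layer
]
print(valid_layer_stack(stack))  # prints True
```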
49. A method for generating an augmented teeth setup using augmented digital tooth libraries, the method comprising: receiving, by a computer system, a tooth library selection and a dental appliance setup from a design computing system; retrieving, by the computer system, the selected tooth library from an augmented digital tooth libraries data store, wherein the selected tooth library comprises 3D layers for a tooth, 3D layers for corresponding gingiva, and predefined values for the one or more properties for each of the 3D layers, wherein the 3D layers comprise meshes and wherein the one or more properties include at least one of color, texture, and transparency properties; retrieving, by the computer system and from one or more data stores, one or more rulesets for generating the dental appliance setup; applying, by the computer system, the dental appliance setup to the selected tooth library; adjusting, by the computer system, relative positioning of the meshes of the selected tooth library based on mesh reference points corresponding to the selected tooth library; generating, by the computer system, portions of the gingiva that are missing from the selected tooth library based on applying a portion of the retrieved rulesets that correspond to the gingiva; and returning, by the computer system, an augmented teeth setup that comprises the dental appliance setup with the adjusted selected tooth library, the generated portions of the gingiva, and the one or more properties for each of the 3D layers that corresponds to the selected tooth library.
50. The method of claim 49, the method further comprising generating specific fabrication material, color, and printing instructions based on the augmented teeth setup and a fabrication device.
51. The method of claim 49 or 50, wherein the fabrication device is a 3D printer.
52. The method of claim 49, 50, or 51, the method further comprising transmitting, by the computer system to the fabrication device, fabrication instructions for execution by the fabrication device in printing a dental appliance based on the augmented teeth setup.
53. The method of claim 49, 50, 51, or 52, the method further comprising generating, by the computer system, a graphical representation of the augmented teeth setup.
54. The method of claim 49, 50, 51, 52, or 53, the method further comprising transmitting, by the computer system, the graphical representation to a display device, wherein the display device is configured to output the graphical representation in a graphical user interface (GUI) at the display device.
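Claim 49 includes adjusting the relative positioning of the library meshes based on mesh reference points. One simple way to picture that step is a rigid translation that moves each mesh so its reference point coincides with the corresponding reference point in the dental appliance setup; the sketch below assumes that simplified model, and every name in it is hypothetical rather than drawn from the claimed system:

```python
# Illustrative sketch of the mesh-positioning step in claim 49: translate a
# library mesh so its reference point lands on the corresponding reference
# point of the dental appliance setup. All structures are hypothetical.
def align_mesh(vertices, mesh_ref, setup_ref):
    """Shift every vertex by the offset between the two reference points."""
    offset = tuple(s - m for s, m in zip(setup_ref, mesh_ref))
    return [tuple(v + o for v, o in zip(vertex, offset)) for vertex in vertices]

tooth_mesh = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
moved = align_mesh(tooth_mesh, mesh_ref=(0.0, 0.0, 0.0), setup_ref=(2.0, 0.5, 0.0))
print(moved[0])  # prints (2.0, 0.5, 0.0)
```

A production system would likely also handle rotation and scaling, but the reference-point translation above captures the core of the relative-positioning adjustment.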
PCT/US2023/078382 2022-11-01 2023-11-01 Systems for generating, storing, and using augmented digital tooth libraries design WO2024097777A2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202263421565P 2022-11-01 2022-11-01
US63/421,565 2022-11-01
US202363506517P 2023-06-06 2023-06-06
US63/506,517 2023-06-06

Publications (2)

Publication Number Publication Date
WO2024097777A2 true WO2024097777A2 (en) 2024-05-10
WO2024097777A3 WO2024097777A3 (en) 2024-06-27

Family

ID=90931486

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2023/078382 WO2024097777A2 (en) 2022-11-01 2023-11-01 Systems for generating, storing, and using augmented digital tooth libraries design

Country Status (1)

Country Link
WO (1) WO2024097777A2 (en)

Also Published As

Publication number Publication date
WO2024097777A3 (en) 2024-06-27

Similar Documents

Publication Publication Date Title
CN104363857B (en) The method for being used to prepare part or dental pattern section dummy
ES2731900T3 (en) System for planning, visualization and optimization of dental restorations
CA2725818C (en) Methods for designing a customized dental prosthesis using digital images of a patient
RU2567604C2 (en) Dynamic virtual articulator
AU2011273999B2 (en) 2D image arrangement
US8706672B2 (en) Computer-assisted creation of a custom tooth set-up using facial analysis
US20120277899A1 (en) Computer-aided Fabrication Of A Removable Dental Prosthesis
KR20180126015A (en) Facial feature scanning systems and methods
US20240033061A1 (en) Digital denture design and replacement
US20230035538A1 (en) Tools and automation for tooth setup
US20230263605A1 (en) Color digital denture design and replacement
KR102538681B1 (en) Dental arrangement system using data of edentulous patient and method of operating the same
WO2024097777A2 (en) Systems for generating, storing, and using augmented digital tooth libraries design
Lin et al. Virtual Articulators

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23886942

Country of ref document: EP

Kind code of ref document: A2