WO2006065955A2 - Image based orthodontic treatment methods - Google Patents


Info

Publication number
WO2006065955A2
WO2006065955A2 (PCT/US2005/045351)
Authority
WO
WIPO (PCT)
Prior art keywords
model
tooth
teeth
treatment
patient
Prior art date
Application number
PCT/US2005/045351
Other languages
French (fr)
Other versions
WO2006065955A3 (en)
Inventor
Huafeng Wen
Original Assignee
Orthoclear Holdings, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US11/013,146 external-priority patent/US20060127852A1/en
Priority claimed from US11/013,153 external-priority patent/US20060127854A1/en
Priority claimed from US11/013,147 external-priority patent/US20060127836A1/en
Application filed by Orthoclear Holdings, Inc. filed Critical Orthoclear Holdings, Inc.
Publication of WO2006065955A2 publication Critical patent/WO2006065955A2/en
Publication of WO2006065955A3 publication Critical patent/WO2006065955A3/en

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1111 Detecting tooth mobility
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1126 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique
    • A61B5/1127 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique using markers
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C13/00 Dental prostheses; Making same
    • A61C13/0003 Making bridge-work, inlays, implants or the like
    • A61C13/0004 Computer-assisted sizing or machining of dental prostheses
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/68 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6801 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
    • A61B5/6813 Specially adapted to be attached to a specific body part
    • A61B5/6814 Head
    • A61B5/682 Mouth, e.g., oral cavity; tongue; Lips; Teeth
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C7/00 Orthodontics, i.e. obtaining or maintaining the desired position of teeth, e.g. by straightening, evening, regulating, separating, or by correcting malocclusions
    • A61C7/002 Orthodontic computer assisted systems
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/40 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture

Definitions

  • Orthodontics is the practice of manipulating a patient's teeth to provide better function and appearance.
  • Orthodontists utilize their expertise to first determine a three-dimensional mental image of the patient's physical orthodontic structure and a three-dimensional mental image of a desired physical orthodontic structure for the patient, which may be assisted through the use of X-rays and/or models.
  • Typically, based on these mental images, the orthodontist designs and implements a treatment.
  • Examples and variations of methods and apparatus for using photographic images in the course of or as an aid to dental or other medical treatments are disclosed. In one aspect, methods and apparatus are disclosed for using photographic images to generate three dimensional (3D) digital models that may be used, for example, for dental or medical treatments.
  • 3D digital models of a patient's face, smile, jaw, tooth arches, individual teeth, and/or gingiva may be generated from information derived, at least in part, from images.
  • Three dimensional digital models of other body parts or structures may also be generated from one or more images by variations of the methods and apparatus disclosed herein.
  • information not derived from the images is also used to generate the 3D digital model.
  • Physical models of body parts or structures such as, for example, physical models of tooth arches and individual physical tooth models may also be digitized by variations of the methods and apparatus disclosed herein.
  • the positions of a patient's teeth or the positions of physical tooth models may be derived from one or more images of the teeth or physical tooth models.
  • a 3D digital model of a first arrangement of teeth or physical tooth models is acquired by, for example, generating it from one or more images of the arrangement of teeth or physical tooth models.
  • the positions of the teeth or physical tooth models in a second arrangement are determined from one or more images of the second arrangement.
  • the 3D digital model is then modified to reflect the positions of the teeth or physical tooth models in the second arrangement.
  • the 3D digital model may be used in some variations to track the positions of the teeth or physical tooth models.
  • the modified 3D digital model may be used in fabricating a physical dental model and/or a dental appliance such as, for example, a dental aligner for rendering corrective teeth movement.
  • methods and apparatus for generating 3D digital models and/or images of predicted final or intermediate results of dental or other medical treatments.
  • one or more images are acquired of a patient's face and teeth prior to an orthodontic treatment.
  • a 3D digital model of the patient's pre-treatment face and teeth is then generated from information derived from these images and, in some variations, from other information as well.
  • Pre-treatment and predicted post-treatment three dimensional digital models of the patient's jaw and/or teeth are acquired and used in combination with the 3D digital model of the pre-treatment face to generate a 3D digital model of the patient's post-treatment face and teeth.
  • This post-treatment model may be rendered into a photo-realistic image of the predicted result of the treatment.
  • Some variations of the methods disclosed herein may be used to generate 3D digital models or images of predicted final or intermediate results of other dental and medical treatments such as, for example, of plastic surgery. Also, in some variations, one or more 3D digital models of a patient at intermediate stages of a treatment are generated. These intermediate stage models may be generated, for example, by morphing a 3D digital pre-treatment model into a 3D digital post-treatment model.
  • FIG. 1 shows an exemplary process for generating a 3D digital model from one or more images according to one variation.
  • FIG. 2 shows a tooth comprising a plurality of registration marks easily distinguishable in an image according to one variation.
  • FIG. 3 shows an exemplary multiple camera set up for acquiring images from which to generate a 3D digital model according to one variation.
  • FIG. 4 shows an exemplary process for determining and tracking tooth or tooth model movements according to some variations.
  • FIG. 5 A shows another exemplary process for determining and tracking tooth or tooth model movements according to some variations.
  • FIG. 5B shows an exemplary process for modifying a 3D digital model to represent a changed arrangement of teeth or physical tooth models according to one variation.
  • FIG. 5C shows an exemplary process for modifying a 3D digital model to represent a changed arrangement of teeth or physical tooth models according to another variation.
  • FIG. 6 shows an exemplary process for generating a photo-realistic image of the predicted result of a dental or other medical treatment according to some variations.
  • FIG. 7 shows an exemplary pre-treatment image of teeth.
  • FIG. 8 shows an exemplary image of the predicted result of an orthodontic treatment of the teeth of FIG. 7 generated according to one variation.
  • FIG. 9 shows an exemplary process for generating photo-realistic images of predicted intermediate results of a dental or other medical treatment according to some variations.
  • a tooth is intended to mean a single tooth or a combination of teeth.
  • “generating”, “creating”, and “formulating” a digital representation or digital model mean the process of utilizing computer calculation to create a numeric representation of one or more objects.
  • the digital representation or digital model may comprise a file saved on a computer, wherein the file includes numbers that represent a three-dimensional projection of a tooth arch.
  • the digital representation comprises a data set including parameters that can be utilized by a computer program to recreate a digital model of the desired objects.
  • “photographic image” and “image” refer to images acquired and/or stored electronically as well as to images acquired and/or stored on film.
  • “photographic images” and “images” may be acquired and stored by either digital or analog processes.
  • the first section discloses methods and apparatus for using two-dimensional images (e.g., digital photographic images) to generate three dimensional (3D) digital models that may be used, for example, for dental or medical treatments.
  • Such treatments include, but are not limited to, the fabrication of dental models and dental appliances such as aligners for use in orthodontic treatment.
  • 3D digital models include, but are not limited to, digital models of a patient's face, smile, jaw, tooth arches, individual teeth, and gingiva.
  • 3D digital models include, but are not limited to, digital models of physical models of a patient's dental arches or individual teeth.
  • Such 3D digital models may be generated, as described below, from images including but not limited to images of a patient's face, smile, mouth, jaws, and teeth, and images of physical models of a patient's dental arches or individual teeth.
  • the first section also discloses variations and examples of methods and apparatus in which 3D or other information may be determined from images and used, for example, for dental or medical treatment without necessarily generating a 3D digital model from the image.
  • the second section discloses examples and variations of methods and apparatus for using two-dimensional images (e.g., digital photographic images) to track tooth movement during a dental treatment. These methods may enable, for example, tracking of tooth movements in a patient's mouth during the course of an orthodontic treatment and tracking of the movements of physical tooth models in a physical model of a patient's tooth arch as the physical model of the tooth arch is manipulated to simulate or plan a course of orthodontic treatment. Such tracking may be based on images including, but not limited to, images of teeth in a patient's mouth or of physical tooth models in a physical dental arch model.
  • the second section also discloses methods for tracking tooth movements that do not use photographic images.
  • the third section discloses examples and variations of methods and apparatus for using two-dimensional images (e.g., digital photographic images) to generate a 3D digital model and/or an image of the predicted result of a dental or other medical treatment.
  • information from one or more 3D digital models is combined with an image showing a current dental or medical condition to generate an image of the projected result of treatment of the condition.
  • an image of a patient's face and smile may be combined with information from 3D digital models of the patient's current dental arches and information from 3D digital models of the patient's projected post-treatment dental arches to generate a photo-realistic image of the patient's face and smile after orthodontic treatment.
  • the third section also discloses examples and variations of methods and apparatus for generating photo-realistic images and/or 3D digital models representing intermediate stages of treatment.
  • Such methods may include, for example, morphing a pre-treatment image and/or 3D digital model into the predicted post-treatment image and/or 3D digital model.
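In its simplest form, morphing a pre-treatment 3D digital model toward the post-treatment model can be sketched as linear interpolation of corresponding vertex positions (an illustrative assumption: both models share one vertex ordering and topology; real morphing may use a more elaborate correspondence):

```python
import numpy as np

def morph_stage(pre_vertices, post_vertices, fraction):
    """Vertices of an intermediate-stage model at the given fraction of
    the way from the pre-treatment model (0.0) to the predicted
    post-treatment model (1.0)."""
    return (1.0 - fraction) * pre_vertices + fraction * post_vertices

# Two vertices moving upward over the course of treatment
pre = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
post = np.array([[0.0, 2.0, 0.0], [2.0, 2.0, 0.0]])
print(morph_stage(pre, post, 0.5))  # halfway-stage positions
```

Evaluating the function at a sequence of fractions yields the series of intermediate-stage models described above.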
  • FIG. 1 shows an exemplary process for capturing 3D dental or other medical data and/or generating 3D digital models using one or more photographic images according to one variation.
  • a common problem in deriving a 3D model of an object from one or more images of the object is to find the projective geometric relationship between object points and image points. This may be conventionally accomplished by determining a mathematical model of the camera that describes how the camera forms an image, i.e., how points in 3D space are projected onto an image sensor that results in the images.
  • Such models generally include parameters characterizing the optical properties of the camera.
  • In step 100, internal geometries such as, for example, focal length, focal point, and lens shape are characterized for each camera to be used in the process.
  • the camera lens will distort the rays coming from the object to the recording medium.
  • the internal features and geometry of the camera should be specified so that corrections to the images gathered can be applied to account for distortions of the image.
  • Information about the internal geometries of the camera collected in step 100 may be used for making adjustments to the image data to correct for such distortions.
  • each camera is calibrated by using it to acquire images of one or more objects having precisely known shapes and dimensions. Any distortions observed in the images may be used to determine optical properties of the camera. In some variations environmental conditions such as lighting, for example, may also be determined from these images. In some variations lighting conditions may also be determined from known positions of lights, and/or lighting from many angles may be used so that there are no shadows.
  • the projective relationship between object points and image points may be determined from the information collected in steps 100 and 105 by conventional methods and using conventional algorithms known to one of ordinary skill in the art. Examples of such methods and algorithms are described, for example, in U.S. Patent No. 6,415,051 entitled “GENERATING 3D MODELS USING A MANUALLY OPERATED STRUCTURED LIGHT SOURCE” issued to Callari et al., dated July 2, 2002 and U.S. Patent No. 6,563,499 entitled “METHOD AND APPARATUS FOR GENERATING A 3D REGION FROM A SURROUNDING IMAGERY” issued to Waupotitsch et al., dated May 13, 2003.
  • a coordinate system may be established for the generation of three dimensional digital models from images. Also, a distortion-corrected image may be generated.
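The projective relationship between object points and image points referred to above is conventionally expressed with a pinhole camera model. A minimal numpy sketch (the matrix names K, R, t and the example values are illustrative assumptions; lens-distortion correction is omitted):

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project 3D object points onto a pinhole camera's image plane.

    K is the 3x3 intrinsic matrix (focal lengths and principal point,
    i.e. the internal geometry characterized for each camera); R (3x3)
    and t (3,) place the camera relative to the object.
    """
    # Transform object points into the camera coordinate frame
    cam = points_3d @ R.T + t
    # Perspective divide onto the normalized image plane
    uv = cam[:, :2] / cam[:, 2:3]
    # Apply the intrinsics to obtain pixel coordinates
    return uv @ K[:2, :2].T + K[:2, 2]

# A camera looking down +z at a point 10 units away on the optical axis
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
print(project_points(np.array([[0.0, 0.0, 10.0]]), K, R, t))  # [[320. 240.]]
```

A point on the optical axis lands on the principal point, as expected; camera calibration amounts to estimating K (and distortion terms) from images of objects with known geometry.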
  • the resolution of a 3D digital model of an object generated from one or more images depends on the ease with which features on the object may be distinguished in the image. This depends on the resolution of the image, which is determined by the camera, and on the size, shape, and other characteristics of the features to be distinguished in the image.
  • easily distinguishable registration marks or features are added to or identified on the object.
  • registration features may include but are not limited to, for example, sparkles (e.g., reflectors) and features or marks of known and easy to distinguish shape and color.
  • registration mark enhancement may relax the resolution required of the cameras and images to produce a 3D digital model of a given resolution.
  • a sufficient number of registration marks (e.g., three or more if they are point-like) may be used to define a coordinate system on the object and hence represent its position and orientation.
  • Registration marks or sparkles may also be used to identify areas or features of interest in the object to be imaged.
  • registration features may include but are not limited to points marked on the cusps of the teeth, points marked on the facial axis of the clinical crown (FACC), and points marked on the gingiva line. Such registration features may enable subsequent identification of these features and separation in the image and the 3D digital model of the gingiva from the teeth.
  • sparkles or other features may be attached to or placed on the teeth or tooth models.
  • Registration marks may also be painted onto teeth or tooth models, for example. Some registration marks may fluoresce or phosphoresce under ultraviolet light illumination.
  • Referring to FIG. 2, a tooth or tooth model 200 comprises a plurality of registration marks 205 that are easily distinguishable in a photographic image and hence allow the 3D position and orientation of tooth or tooth model 200 to be determined (by methods described below) with high resolution from a photographic image (e.g., an image captured by a CCD digital camera).
  • registration marks such as sparkles, for example, may be attached to a patient's tooth or a tooth model by methods including but not limited to attachment by adhesives and attachment by a wire, bracket, or band attached to the tooth or tooth model.
  • registration marks are identified or placed on surfaces of the teeth that face the inside of the patient's mouth and hence are not readily seen by casual observers.
  • registration marks are formed on an object such as a tooth model, for example, by laser marking.
  • In laser marking of a tooth model, a minute amount of material on the surface of the tooth model is removed and colored. This removal is not visible after the tooth model has been enameled.
  • a variation of laser marking is center marking. In center marking a spot shaped indentation is produced on the surface of the object. Center marking can be circular center marking or dot point marking.
  • one or more images are acquired of the object for which a 3D digital model is to be generated.
  • a single stationary camera acquires one or more images.
  • multiple stationary cameras acquire one or more images from a variety of angles. Partial object occlusion may be reduced as additional images are acquired from additional angles.
  • acquiring multiple images with multiple cameras may allow calibration of the cameras from images of the objects to be digitized rather than in a separate and prior step. In another variation, one or more moving cameras each acquire a plurality of images from a variety of angles. Very high resolution 3D digital models may be generated where many pictures of a small area are acquired from various angles.
  • images are acquired by multiple stationary and moving cameras. The positions of the camera or cameras at the time the images are acquired may be known (by measurement, for example) or later derived from the images by conventional methods known to one of ordinary skill in the art.
  • FIG. 3 shows an exemplary set-up including multiple cameras according to one variation.
  • Cameras 300 and 305 are positioned to acquire images of tooth 310 (including registration marks 315) from different angles indicated by light rays 320 and 325.
  • cameras 300 and 305 may be conventional digital cameras or digital video cameras.
  • cameras 300 and 305 may be conventional film cameras or video cameras which generate images that may be subsequently digitized.
  • Cameras 300 and 305 may be stationary or moving with respect to tooth 310.
  • the cameras may acquire simultaneous images and thus prevent relative motion of the object with respect to the cameras between images. This simplifies determination of 3D information from the images and may be particularly useful, for example, where the object or objects imaged are teeth or other features on or in a patient who might otherwise move during any interval between images.
  • a 3D digital model may be generated from the images acquired in step 115 and the information characterizing and calibrating the cameras acquired in steps 100 and 105.
  • this 3D digital model can be generated using conventional methods and conventional algorithms known to one of ordinary skill in the art.
  • some variations may utilize commercial software products such as, for example, PhotoModeler available from Eos Systems Inc.
  • 3D information such as, for example, the relative positions of objects in the images may be determined from the images without constructing a 3D digital model of the objects, and no 3D digital model is generated. This may also be accomplished by one of ordinary skill in the art having the benefit of this disclosure by using conventional methods and algorithms.
  • conventional triangulation algorithms may be used to compute the 3D digital model for the object. This may be done by intersecting the rays with high precision and accounting for the camera internal geometries. The result is the coordinate of the desired point.
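The ray-intersection step can be sketched as follows: each image point back-projects to a ray through its camera, and the desired 3D coordinate is the point nearest to all such rays. A minimal two-ray version using the midpoint of closest approach (function name and example values are illustrative; a full pipeline would intersect many rays and account for camera internal geometries):

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Intersect two camera rays: return the midpoint of the shortest
    segment between ray o1 + s*d1 and ray o2 + t*d2.
    Assumes the rays are not parallel."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = o2 - o1
    a12 = d1 @ d2
    denom = 1.0 - a12 * a12            # directions are unit vectors
    # Least-squares ray parameters minimizing |o1 + s*d1 - (o2 + t*d2)|
    s = ((d1 @ b) - a12 * (d2 @ b)) / denom
    t = (a12 * (d1 @ b) - (d2 @ b)) / denom
    return (o1 + s * d1 + o2 + t * d2) / 2.0

# Two camera centers at x = -1 and x = +1, both sighting the point (0, 0, 5)
p = triangulate_midpoint(np.array([-1.0, 0.0, 0.0]), np.array([1.0, 0.0, 5.0]),
                         np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 5.0]))
print(p)  # ~ [0. 0. 5.]
```

With noisy real images the two rays are skew, and the midpoint serves as the triangulated coordinate of the desired point.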
  • the identified structures may be used to generate 3D digital models that can be viewed and/or manipulated using conventional 3D CAD tools.
  • a 3D digital model in the form of a triangular surface mesh is generated.
  • the model is in voxels and a marching cubes algorithm may be applied to convert the voxels into a mesh, which can undergo a smoothing operation to reduce the jaggedness on the surfaces of the 3D model caused by the marching cubes conversion. For example, one smoothing operation moves individual triangle vertices to positions representing the averages of connected neighborhood vertices to reduce the angles between triangles in the mesh.
  • Some variations include the optional step of applying a decimation operation to the smoothed mesh to eliminate data points, which improves processing speed.
  • an error value is calculated based on the differences between the resulting mesh and the original mesh or the original data, and the error is compared to an acceptable threshold value.
  • the smoothing and decimation operations may be applied to the mesh once again if the error does not exceed the acceptable value.
  • the last set of mesh data that satisfies the threshold may be stored as the 3D model.
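The smoothing operation described above, moving each vertex toward the average of its connected neighborhood vertices, can be sketched as below (a minimal illustration; the neighbor-list representation and the choice to leave single-neighbor boundary vertices fixed are assumptions, and the decimation/error-threshold loop is omitted):

```python
import numpy as np

def laplacian_smooth(vertices, neighbors):
    """One smoothing pass: move each interior vertex to the average of
    its connected neighborhood vertices, reducing the angles between
    triangles in the mesh."""
    smoothed = vertices.copy()
    for i, nbrs in enumerate(neighbors):
        if len(nbrs) >= 2:  # leave boundary vertices (one neighbor) fixed
            smoothed[i] = vertices[nbrs].mean(axis=0)
    return smoothed

# A jagged four-vertex strip; the two interior vertices are averaged
verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 1.0, 0.0],
                  [2.0, 0.0, 0.0],
                  [3.0, 1.0, 0.0]])
nbrs = [[1], [0, 2], [1, 3], [2]]
smoothed = laplacian_smooth(verts, nbrs)
print(smoothed)
```

In the full procedure, passes like this alternate with decimation, and the error between the simplified mesh and the original data is checked against the acceptable threshold after each round.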
  • the triangles form a connected graph
  • two nodes in a graph are connected if there is a sequence of edges that forms a path from one node to the other (ignoring the direction of the edges).
  • connectivity is an equivalence relation on a graph: if triangle A is connected to triangle B and triangle B is connected to triangle C, then triangle A is connected to triangle C.
  • a set of connected nodes is then called a patch.
  • a graph is fully connected if it consists of a single patch.
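The patch definition above amounts to finding the connected components of the triangle adjacency graph. A small sketch using breadth-first search (the edge-list input format is an assumption):

```python
from collections import deque

def find_patches(num_triangles, edges):
    """Group mesh triangles into patches: maximal sets of triangles
    connected through sequences of shared edges (edge direction is
    ignored, so connectivity is an equivalence relation)."""
    adj = [[] for _ in range(num_triangles)]
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    seen = [False] * num_triangles
    patches = []
    for start in range(num_triangles):
        if seen[start]:
            continue
        # Breadth-first walk collects one whole patch
        patch, queue = [], deque([start])
        seen[start] = True
        while queue:
            node = queue.popleft()
            patch.append(node)
            for nxt in adj[node]:
                if not seen[nxt]:
                    seen[nxt] = True
                    queue.append(nxt)
        patches.append(sorted(patch))
    return patches

# Five triangles: 0-1-2 share edges, 3-4 share an edge -> two patches
print(find_patches(5, [(0, 1), (1, 2), (3, 4)]))  # [[0, 1, 2], [3, 4]]
```

A graph is fully connected exactly when this function returns a single patch.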
  • a 3D digital model in the form of a mesh may be simplified by removing unwanted or unnecessary sections of the model to increase data processing speed and enhance the visual display.
  • unnecessary sections of the 3D digital model may include those not needed for creation of the appliance. The removal of these unwanted sections reduces the complexity and size of the digital data set, thus accelerating manipulations of the data set and other operations.
  • all triangles within a box including an unwanted section are deleted and all triangles that cross the border of the box are clipped. This requires generating new vertices on the border of the box.
  • the holes created in the model at the faces of the box are re-triangulated and closed using the newly created vertices.
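The box-deletion step can be sketched in simplified form as follows (an assumption-laden illustration: it only drops triangles wholly inside the box, while the full method also clips border-crossing triangles, generates new vertices on the box faces, and re-triangulates the holes):

```python
import numpy as np

def delete_triangles_in_box(vertices, triangles, box_min, box_max):
    """Simplified mesh trimming: drop every triangle whose vertices all
    lie inside the axis-aligned box enclosing the unwanted section."""
    # Per-vertex test against the axis-aligned box bounds
    inside = np.all((vertices >= box_min) & (vertices <= box_max), axis=1)
    # Keep a triangle unless all three of its vertices are inside
    return [tri for tri in triangles if not inside[list(tri)].all()]

verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                  [5.0, 5.0, 5.0], [6.0, 5.0, 5.0], [5.0, 6.0, 5.0]])
tris = [(0, 1, 2), (3, 4, 5)]
kept = delete_triangles_in_box(verts, tris,
                               np.array([-1.0, -1.0, -1.0]),
                               np.array([2.0, 2.0, 2.0]))
print(kept)  # [(3, 4, 5)]
```

Removing the unwanted triangles reduces the size of the data set, which is what accelerates later manipulations of the mesh.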
  • the resulting mesh may be viewed and/or manipulated using a number of conventional CAD tools.
  • 3D digital models of a patient's teeth, gingiva, jaw, and/or face are generated from one or more images. In some variations the teeth, gingiva, jaw, and/or face are separately modeled; in other variations a single 3D digital model includes some or all of these objects.
  • One variation generates 3D digital models from images acquired directly of these objects.
  • Another variation uses images of a negative impression of the patient's dental arch, images of a positive dental arch mold cast from the negative impression, and/or images of tooth models such as, for example, tooth models separated from a positive dental arch mold.
  • images are used to measure the position, orientation, and/or size of a patient's teeth, gingiva, jaw, and/or face.
  • individual physical tooth models are separated from a positive mold cast from a negative impression of the patient's tooth arch.
  • a 3D digital model of each of the patient's teeth is generated from one or more images of each physical tooth model.
  • An image of the patient's tooth arch, of a negative impression of the tooth arch, or of a positive mold cast from the negative impression is used to determine the position and orientation of each tooth relative to the others in the patient's jaw or tooth arch.
  • a 3D digital or physical model of the patient's jaw or tooth arch may then be constructed from the 3D digital or physical tooth models.
  • one or more images are acquired to determine the relative positions of a patient's upper and lower jaws and thus determine the type of malocclusion suffered by the patient.
  • An appropriate treatment may then be prescribed. In some variations, 3D digital models of the upper and lower teeth and jaws are generated from the images to enable determination of the malocclusion.
  • the required 3D information for diagnosing the malocclusion is determined from the image or images without generation of such 3D digital models.
  • tooth or gingival features are recognized from images of a patient's teeth or tooth arch.
  • cusps on molar teeth may be recognized.
  • These and other recognizable tooth features may be used to identify each tooth in a 3D digital or physical model of the patient's tooth arch.
  • Registration marks such as gingival lines, for example, may be used to identify various parts of the gingiva. This may enable separation of the gingiva from the rest of a 3D digital jaw model.
  • images of a patient's tooth or tooth arch are used to identify and separate a 3D digital model of the tooth from a 3D digital model of the patient's jaw or tooth arch. This may be accomplished, for example, by recognizing gingival lines or inter-proximal areas of the teeth. Registration marks may be used to identify the inter-proximal areas and the gingival lines.
  • a 3D digital model of an object is generated using a combination of 3D information derived from one or more images of the object and other 3D information not derived from the images.
  • Such a 3D digital model may be generated by one of ordinary skill in the art having the benefit of this disclosure by using conventional methods and algorithms.
  • gaps in 3D models of faces, jaws, tooth arches, and/or teeth derived from images can be filled in with information from a database containing models and information about faces, jaws, tooth arches, and teeth.
  • a facial/orthodontic database of prior knowledge may be used, for example, to fill in missing pieces such as muscle structure in a model.
  • Such a database can also be used for filling in other missing data with good estimates of what the missing part should look like.
  • separate 3D digital models of a patient's face and jaw may be generated from images and then combined to form a 3D digital model of the face and internals of the head by using information from a facial/orthodontic database to fill in missing pieces.
  • the resulting 3D digital model may be a hierarchical model of the head, face, jaw, gingiva, teeth, bones, muscles, and facial tissues.
  • a 3D digital model of a patient's face is generated from 1) images of the patient's head/face, 2) images of the patient's jaw and teeth, 3) X-rays providing bone and tissue information, and 4) information about the environment in which the images were acquired so that color pigment information may be separated from shading and shadow information.
  • the environmental information may be generated, for example, by positioning lights with known coordinates when the images are acquired. Alternatively, lighting from many angles can be used so that there are no shadows and lighting can be incorporated into the 3D digital model.
  • This data may be combined to create a complete 3D digital model of the patient's face using the patient's 3D geometry, texture, and environment shading and shadows.
  • the 3D digital model may be a true hierarchical model with bone, teeth, gingiva, joint information, muscles, soft tissue, and skin. Missing data such as internal muscle may be added using prior knowledge of facial models.
  • a 3D digital model of a patient's tooth arch generated by the methods described above is used in the fabrication of dental appliances or physical dental models for use in a dental treatment.
  • the 3D digital model of the patient's tooth arch may be used to fabricate one or more dental aligners using computer numerical control (CNC) based manufacturing techniques.
  • the 3D digital model of the patient's tooth arch may be used to fabricate a physical model of the tooth arch by CNC based manufacturing.
  • the physical model of the tooth arch may then be used in the fabrication of a dental aligner. Suitable methods for fabricating dental aligners and physical models of tooth arches by CNC based manufacturing are disclosed, for example, in U.S. Patent Application No.
  • the 3D digital model may be used to guide the arrangement of physical tooth models on a base or in a wax set-up, for example, to produce a physical model of the patient's tooth arch which may then be used in the fabrication of a dental aligner.
  • a 3D digital model of a tooth arch is disclosed, for example, in U.S. Provisional Application No. 60/676,546 entitled “DIGITIZATION OF DENTAL ARCH MODEL,” filed April 29, 2005 incorporated herein by reference in its entirety for all purposes.
  • FIG. 4 shows an exemplary process for determining and tracking tooth or tooth model movements.
  • In step 400, one or more features are identified on or added to the tooth or tooth model. In some variations, these features may be selected to be easily distinguishable in photographic images. In such variations, any of the features identified above with respect to step 110 of the process of FIG. 1 as possible registration features that are suitable for use with teeth or tooth models may be used here as well. In some variations, a sufficient number of registration marks (e.g., three or more if they are point-like) are used to define a coordinate system on the tooth or tooth model and hence represent the position and orientation of the tooth or tooth model.
  • In step 405, the positions of the features on the tooth or tooth model are detected when the tooth or tooth model is in a first position.
  • In step 410, the positions of the features on the tooth or tooth model are detected when the tooth or tooth model is in a second position.
  • In step 415, the difference between the first position of each feature and the second position of each feature is determined. The tooth or tooth model's change in position can then be determined from these differences.
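By way of illustration, the movement determination of steps 405-415 can be sketched as a rigid-body fit between the two sets of detected feature positions. The Kabsch algorithm below is one conventional choice, not necessarily the method used here, and the coordinates are invented:

```python
import numpy as np

def rigid_motion(first_pts, second_pts):
    """Estimate the rotation R and translation t that carry a tooth's
    registration features from a first position to a second position
    (Kabsch algorithm on corresponding 3D points)."""
    P = np.asarray(first_pts, dtype=float)
    Q = np.asarray(second_pts, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against an improper reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Three point-like registration marks define the tooth's pose.
marks = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
# Simulate a 90-degree rotation about z plus a 2 mm shift in x.
Rz = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
moved = marks @ Rz.T + np.array([2.0, 0, 0])
R, t = rigid_motion(marks, moved)
```

With exact correspondences the fitted transform reproduces the second position of every mark, so the tooth's change in position and orientation can be read directly from R and t.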
  • the process of FIG. 4 may be applied, for example, to one or more teeth in a patient's mouth, to tooth impressions in negative impressions of the patient's tooth arch, to teeth in positive molds made from the negative impressions, and to individual physical tooth models arranged on a base or in a wax set-up in two or more different arrangements.
  • the positions of, for example, one or more of a patient's teeth or the positions of one or more physical tooth models in an arrangement of physical tooth models may be determined from photographic images of the teeth or tooth models.
  • the positions of a patient's teeth may also be determined from one or more images of a negative impression of the patient's tooth arch, of a positive tooth arch mold of the negative impression, or of a wax bite.
  • one or more images are acquired of the patient's teeth, an arrangement of physical tooth models, a negative impression, a positive tooth arch mold, or a wax bite and then the positions of the teeth or physical tooth models are determined from these images using, for example, the methods described above with respect to FIG. 1. This process may be repeated to determine, for example, new positions of the teeth at a later stage of treatment or to determine new positions of physical tooth models in a different arrangement.
  • the images of the teeth, physical tooth models, negative impression, positive tooth arch mold, or wax bite may be acquired by one or more stationary cameras, one or more moving cameras, or a combination of stationary or moving cameras.
  • the cameras may be, for example, conventional digital cameras, digital video cameras, film cameras, or video cameras in some variations.
  • the resolution of the images of the teeth, physical tooth models, negative impression, positive tooth arch mold, or wax bite is sufficiently high that the positions of the teeth or physical tooth models may be determined from the images with sufficient precision without the addition of registration marks to, for example, the teeth or physical tooth models.
  • the determination of 3D positional information from the images may utilize naturally occurring tooth or gingival features that are easily distinguishable in the images such as those discussed above with respect to step 110 of the process of FIG. 1.
  • the process of FIG. 4 may be applied, in some variations, to these naturally occurring features to determine the movements of the teeth between different stages of treatment, for example, or of physical tooth models between different arrangements.
  • the determination of 3D information from the images may utilize chamfer matching as discussed below.
  • registration features that are easily distinguishable in images may be added to, for example, the teeth or physical tooth models.
  • the addition of such registration features may reduce the resolution required of the images to determine the positions of the teeth with a particular precision. In some variations, a sufficient number of registration marks are added to each tooth to define a coordinate system on the tooth and hence represent its position.
  • Any of the registration features discussed above with respect to step 110 of the process of FIG. 1 that are suitable for use with teeth or physical tooth models may be used.
  • the process of FIG. 4 may be applied to these registration features to determine tooth or physical tooth model movements.
  • a 3D digital model of the patient's tooth arch is generated from one or more images of the teeth, a negative impression of the tooth arch, a positive tooth arch mold, or a wax bite by the methods described with respect to FIG. 1.
  • a 3D digital model of an arrangement of physical tooth models on a base or in a wax set-up is determined from one or more images of the arrangement.
  • Such 3D digital models include the positions of the teeth or physical tooth models and hence may be used to track those positions.
  • FIG. 5A shows another exemplary method for tracking the movements of teeth or physical tooth models in some variations.
  • In step 500, a 3D digital model of a first arrangement of the teeth or physical tooth models is acquired.
  • this 3D digital model is generated from one or more images of the first arrangement by the methods described above.
  • the 3D digital model may be acquired, for example, by methods described below or by other methods known to one of ordinary skill in the art for digitizing physical objects.
  • In step 505, the positions of the teeth or physical tooth models in a second arrangement are acquired. In some variations these positions are acquired from one or more images of the arrangement by the methods described above. In other variations these positions may be acquired, for example, by other methods described below, including methods not requiring the use of images.
  • In step 510, the 3D digital model is modified to represent the positions of the teeth or physical tooth models in the second arrangement.
  • steps 505 and 510 are accomplished by superimposing a projection of the 3D digital model of the first arrangement onto an image of the second arrangement (FIG. 5B).
  • steps 505 and 510 are accomplished by superimposing the 3D digital model of the first arrangement onto a 3D digital model of the second arrangement derived from images of the second arrangement (FIG. 5C).
  • In step 530, an image of the second arrangement is acquired.
  • In step 535, a distortion-corrected image of the second arrangement is generated from the image acquired in step 530.
  • the original image may be acquired, and the distortion-corrected image generated, by the methods described above with respect to FIG. 1, for example.
  • In step 540, a static reference point in the 3D digital model is selected. This static reference point, which may also be referred to as an anchor point, is a point in the 3D digital model that has not substantially moved between the first and second arrangements.
  • the static reference point may be selected, for example, by identifying teeth or tooth models that have not substantially moved, by identifying portions of the gingiva that have not substantially moved, or by determining the center of mass of the first arrangement.
  • In step 545, a static subset of the 3D digital model is identified. This static subset is a portion of the 3D digital model that has not substantially moved between the two arrangements. Steps 540 and 545 may occur together.
  • In step 550, the static subset of the 3D digital model of the first arrangement is projected onto the distortion-corrected image of the second arrangement.
  • the projection is then rotated, translated, and otherwise transformed to substantially superimpose the projection on a portion of the second arrangement in the distortion-corrected image.
  • the transformation required at this step provides any information required to modify the static subset of the 3D digital model to represent a portion of the second arrangement.
  • In step 555, the transformation required to superimpose a projection of the non-static portions of the 3D digital model onto the distortion-corrected image of the second arrangement is determined. This transformation provides the additional information required to modify the 3D digital model to represent the second arrangement.
  • Steps 540-555 may be applied in an iterative approach in which a static reference point and a static subset of the 3D digital model are selected, transformations for the non-static portions are determined, a new static reference point and a new static subset are selected, and transformations for the newly designated non-static portions are determined.
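The iterative selection of a static subset can be sketched with 2D point sets: fit a transform on the presumed-static points, re-classify points by their residuals under that fit, and refit. The code below is an illustration with invented coordinates and an invented threshold, not the claimed method:

```python
import numpy as np

def fit_rigid_2d(src, dst):
    """Least-squares 2D rotation + translation mapping src points onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    return R, cd - R @ cs

def align_on_static_subset(model_pts, image_pts, tol=0.3, iters=5):
    """Iteratively designate the 'static subset' (points whose residual
    under the current fit is below tol) and refit the transform on that
    subset only, in the spirit of steps 540-555."""
    static = np.ones(len(model_pts), dtype=bool)
    for _ in range(iters):
        R, t = fit_rigid_2d(model_pts[static], image_pts[static])
        resid = np.linalg.norm(model_pts @ R.T + t - image_pts, axis=1)
        static = resid < tol
    return R, t, static

# Five projected features; the tooth at index 2 has moved between stages.
model = np.array([[0.0, 0], [1, 0], [2, 0], [3, 0], [0, 1]])
image = model.copy()
image[2, 1] += 1.0
R, t, static = align_on_static_subset(model, image)
```

After a few iterations the moved tooth is excluded from the static subset and the fitted transform collapses to the identity, since the remaining features did not move.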
  • steps 505 and 510 in FIG. 5A are accomplished by the process shown in FIG. 5C.
  • In step 570, multiple images of the second arrangement are acquired.
  • In step 575, a 3D digital model of the second arrangement is generated from the images acquired in step 570. These images may be acquired, and the 3D digital model of the second arrangement may be generated, by the methods described above with respect to FIG. 1.
  • In step 580, the transformation that superimposes the 3D digital model of the first arrangement onto the 3D digital model of the second arrangement is determined. This transformation provides the information required to modify the 3D digital model of the first arrangement to represent the second arrangement.
  • the 3D digital model of the second arrangement generated in step 575 from images should include sufficient information to enable accurate modification of the 3D digital model of the first arrangement to represent the second arrangement.
  • the 3D digital model generated in step 575 from images need not necessarily be as detailed or include as much information as the 3D digital model of the first arrangement, however.
  • the two arrangements of teeth or physical tooth models represent different stages of an orthodontic treatment process.
  • the initial 3D digital model represents the arrangement of the patient's teeth at an earlier stage of treatment
  • the modified 3D digital model represents the patient's current arrangement (the second arrangement) of teeth.
  • a first arrangement of physical tooth models represents an actual or predicted arrangement of teeth during treatment
  • a second arrangement of physical tooth models represents a desired arrangement of teeth at a later stage of treatment.
  • the modified 3D digital model represents a desired tooth arrangement and may be used in the fabrication of dental appliances such as dental aligners, for example, or physical dental models for use in a treatment plan designed to achieve that tooth arrangement.
  • chamfer matching is an edge matching technique in which the edge points of one image are transformed by a set of parametric transformation equations to edge points of a similar image that is slightly different.
  • digital pictures of the jaw are acquired from different angles (such as seven angles for each stage of treatment, for example). Those pictures are acquired at a plurality of different resolutions such as, for example, four resolutions.
  • a hierarchical method of analysis compares all the pictures of one stage with all the pictures of the other stage.
  • the chamfer matching operation determines the total amount of movement of the teeth per stage.
  • the movement of each individual tooth can then be used for calculating information required for aligner fabrication.
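Chamfer matching can be illustrated with a toy edge-point matcher that searches for the translation minimizing the mean nearest-edge distance between two edge maps. A real implementation would use distance transforms and the hierarchical multi-resolution comparison described above; the edge points and search range here are invented:

```python
import numpy as np

def chamfer_distance(edges_a, edges_b):
    """Mean distance from each edge point in A to its nearest edge
    point in B -- the score minimized in chamfer matching."""
    diff = edges_a[:, None, :] - edges_b[None, :, :]
    return np.linalg.norm(diff, axis=2).min(axis=1).mean()

def match_translation(edges_a, edges_b, search=3):
    """Exhaustively try integer translations of A to best align it with
    B; a toy stand-in for the parametric transformation search."""
    best, best_shift = np.inf, (0, 0)
    for dx in range(-search, search + 1):
        for dy in range(-search, search + 1):
            d = chamfer_distance(edges_a + np.array([dx, dy]), edges_b)
            if d < best:
                best, best_shift = d, (dx, dy)
    return best_shift, best

# Toy edge maps: stage B's edges are stage A's shifted by (2, -1) pixels.
a = np.array([[0, 0], [1, 0], [2, 1], [3, 3]], dtype=float)
b = a + np.array([2.0, -1.0])
shift, score = match_translation(a, b)
```

The recovered shift corresponds to the total movement between the two stages; per-tooth movement would be obtained by matching each tooth's edges separately.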
  • in 'laser marking', a minute amount of material on the surface of the tooth model is removed and colored. This removal is not visible after the object has been enameled. In this process a spot shaped indentation is produced on the surface of the material.
  • Another method of laser marking is called 'Center Marking'. In this process a spot shaped indentation is produced on the surface of the object. Center marking can be 'circular center marking' or 'dot point marking'.
  • markings or reflective markers are placed on the body or object to be motion tracked.
  • the sparkles or reflective objects can be placed on the body/object to be motion tracked in a strategic or organized manner so that reference points can be created from the original model to the models of the later stages.
  • a wax setup is done and the teeth models are marked with sparkles.
  • the system marks or paints the surface of the crown model with sparkles.
  • Pictures of the jaw are acquired from different angles. Computer software processes and saves those pictures. After that, the teeth models are moved. Each individual tooth image is superimposed on the previous one so that tooth movement can be determined. Then the next stage is performed, and the same procedure is repeated.
  • a mechanical based system is used to measure the position of features on teeth or tooth models.
  • the model of the jaw is placed in a container.
  • a user takes a stylus and places the tip on different points on the tooth.
  • the points touched by the stylus tip are selected in advance.
  • the user then tells the computer to calculate the value of the point.
  • the value is then preserved in the system.
  • the user takes another point until all points have been digitized.
  • two points on each tooth are captured. However, depending on need, the number of points to be taken on each tooth can be increased.
  • the points on all teeth are registered in computer software. Based on these points the system determines the differences between planned versus actual teeth position for aligner fabrication. These points are taken at each individual stage. In this way, this procedure can also be used to calculate the motion/movement of the tooth per stage.
  • a mechanical based system for 3D digitization such as Microscribe from Immersion Corporation or Phantom from SenseAble Technology Incorporated, can be used.
  • the 3D digitizer implements counterbalanced mechanical arms (with a number of mechanical joints with digital optical sensors inside) that are equipped with precision bearings for smooth, effortless manipulation.
  • the end segment is a pen-like device called a stylus, which can be used to touch any point in 3D space.
  • Accurate 3D position information on where the probe touches is calculated by reading each joint decoder's information; 3D angular information can also be provided at an extra cost.
  • an extra decoder can be added for reading pen self rotation information.
  • Some additional sensors can be placed at the tip of the pen, so the computer can read how hard the user is pressing the pen.
  • a special mechanical device can be added to give force feedback to the user.
  • Immersion Corporation's MicroScribe uses a pointed stylus attached to a CMM-type device to produce an accuracy of about 0.01 inch. It is a precision portable digitizing arm with a hand-held probe, used at a workstation, mounted on a tripod, or on a similar fixture for field use or a manufacturing environment.
  • the MicroScribe digitizer is based on optical angle encoders at each of the five arm joints, an embedded processor, a USB port, and a software application interface for the host computer. The user selects points of interest or sketches curves on the surface of an object with the hand-held probe tip and a foot switch. Angle information from the MicroScribe arm is sent to the host computer through a USB or serial port.
  • the MicroScribe utility software calculates the Cartesian XYZ coordinates of the acquired points and the coordinates are directly inserted into keystroke functions in the user's active Windows application.
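The angle-to-coordinate computation can be illustrated with a planar forward-kinematics sketch. The two links and their lengths are invented for illustration; the actual five-joint arm and its calibration are more involved:

```python
import math

def forward_kinematics(link_lengths, joint_angles):
    """Convert joint-encoder angles to the Cartesian position of the
    stylus tip for a planar serial arm: each joint angle is measured
    relative to the previous link, and the tip position is the sum of
    the link vectors."""
    x = y = 0.0
    heading = 0.0
    for length, angle in zip(link_lengths, joint_angles):
        heading += angle                  # accumulate relative joint angles
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    return x, y

# Two 10 cm links with both joints at 90 degrees:
# the first link points straight up, the second doubles back along -x.
tip = forward_kinematics([10.0, 10.0], [math.pi / 2, math.pi / 2])
```

Reading each joint encoder and evaluating this chain is, in essence, how a digitizing arm turns angles into the probe tip's coordinates.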
  • the user's design and modeling application functions are used to connect the 3D points as curves and objects to create surfaces and solids integrated into an overall design.
  • In another variation, 3D motion tracking/capture is based on an optical or magnetic system. These systems require placing markers at specific points on the teeth and digitally recording the movements of the actual teeth so their movements can be played back with computer animation.
  • the computer uses software to post-process this mass of data and determine the exact movement of the teeth, as inferred from the 3D position of each tooth marker at each moment.
  • magnetic motion capture systems utilize sensors placed on the teeth or physical tooth models to measure the low-frequency magnetic field generated by a transmitter source.
  • the sensors and source are cabled to an electronic control unit that correlates their reported locations within the field.
  • the electronic control units are networked with a host computer that uses a software driver to represent these positions and rotations in 3D space.
  • the sensors report position and rotational information.
  • sensors are applied to each individual tooth or tooth model.
  • three sensors are used: one on the buccal side, one on the lingual side, and one on the occlusal side. The number of sensors can be increased depending on the case.
  • the jaw is placed in a housing or cabin.
  • the sensors are attached to the teeth/jaw at predetermined points and connected to an electronic system with the help of cables.
  • the electronic system is in turn connected to a computer.
  • the movement of the teeth at each stage is calculated by these sensors.
  • the computer manipulates the coordinates and gives the proper values which are then used to perform the required procedures for aligner fabrication, among others.
  • wireless sensors which operate at different frequencies can also be used.
  • the movements are once again captured by electronics attached to the computer. With the help of the sensors, positional values are determined for aligner fabrication and other procedures that need to be performed.
  • Optical Motion Capture Systems may be used.
  • the markers may be reflective or pulsed light-emitting diodes (LEDs).
  • Optical motion capture systems utilize proprietary video cameras to track the motion of reflective markers (or pulsed LEDs) attached to an object.
  • Reflective optical motion capture systems use Infra-red (IR) LEDs mounted around the camera lens, along with IR pass filters placed over the camera lens.
  • Optical motion capture systems based on pulsed LEDs measure the infra-red light emitted by the LEDs rather than light reflected from markers. The centers of the marker images are matched from the various camera views using triangulation to compute their frame-to-frame positions in 3D space.
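The triangulation step can be sketched with the standard linear (DLT) formulation: each calibrated camera view contributes two linear constraints on a marker's homogeneous 3D position, and the least-squares solution is the SVD null vector. The two pinhole cameras below are invented for illustration:

```python
import numpy as np

def triangulate(proj_mats, pixels):
    """Linear (DLT) triangulation: recover a marker's 3D position from
    its 2D image positions in two or more calibrated camera views."""
    rows = []
    for P, (u, v) in zip(proj_mats, pixels):
        rows.append(u * P[2] - P[0])   # u * (row3 . X) = row1 . X
        rows.append(v * P[2] - P[1])   # v * (row3 . X) = row2 . X
    _, _, Vt = np.linalg.svd(np.vstack(rows))
    X = Vt[-1]                         # null vector = homogeneous 3D point
    return X[:3] / X[3]

def project(P, X):
    """Pinhole projection of a 3D point with a 3x4 camera matrix."""
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

# Two toy cameras separated by a 1-unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
marker = np.array([0.5, 0.2, 4.0])
recovered = triangulate([P1, P2], [project(P1, marker), project(P2, marker)])
```

With, e.g., seven cameras the same system simply gains more rows, improving robustness when some views are occluded.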
  • a studio enclosure houses a plurality of video cameras (such as seven, for example) attached to a computer. Dental impressions are placed inside the studio. Each of the teeth has a plurality of reflective markers attached. For example, markers can be placed on the buccal side, the lingual side, and the occlusal side. More markers can be deployed if required. Infra-red (IR) LEDs are mounted around the camera lens, along with IR pass filters placed over the lens. When light is emitted from the LEDs, it is reflected by the markers. The coordinates are captured and matched across the, e.g., seven different camera views to ultimately get the position data for aligner making and other computations.
  • a wax setup operation is done freehand without the help of any mechanical or electronic systems. Tooth movement is determined manually with scales and/or rulers and these measurements are entered into the system.
  • Some variations use a wax set up in which the tooth abutments are placed in a base which has wax in it.
  • One variation uses robots and clamps to set the teeth at each stage.
  • Another variation uses a clamping base plate, i.e., a plate on which teeth can be attached at specific positions. Teeth are set up at each stage using this process. Measurement tools such as the MicroScribe are used to get the tooth movements, which can be used later by the universal joint device to specify the position of the teeth.
  • the FACC lines are marked on the teeth or tooth models.
  • Movement is determined by a non-mechanical method or by a laser pointer.
  • the distance and angle of the FACC line reflects the difference between the initial position and the next position on which the FACC line lies.
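The FACC-line comparison can be sketched by representing each line by two endpoints and computing the midpoint shift and change in orientation between positions. This is a hypothetical 2D illustration with invented endpoints:

```python
import numpy as np

def facc_change(line_before, line_after):
    """Translation of the FACC line's midpoint and its change in
    orientation (degrees) between an initial and a next tooth position."""
    a0, a1 = np.asarray(line_before, dtype=float)
    b0, b1 = np.asarray(line_after, dtype=float)
    shift = (b0 + b1) / 2 - (a0 + a1) / 2          # midpoint displacement
    va, vb = a1 - a0, b1 - b0                      # line direction vectors
    cos = np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb))
    angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return shift, angle

# A vertical FACC line whose gingival end shifts 1 mm to the right
# while the line tips 30 degrees.
before = [[0.0, 0.0], [0.0, 2.0]]
after = [[1.0, 0.0],
         [1.0 + 2 * np.sin(np.radians(30)), 2 * np.cos(np.radians(30))]]
shift, angle = facc_change(before, after)
```

The returned distance and angle express the difference between the initial position and the next position on which the FACC line lies.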
  • tooth or tooth model movements are checked in real time.
  • the tooth models are placed in a container attached to motion sensors. These sensors track the motion of the tooth models in real time.
  • the motion can be performed freehand or with a suitably controlled robot.
  • Stage x and stage x+1 pictures are overlaid, and the change of the points reflects the exact amount of movement.
  • FIG. 6 shows an exemplary process for generating a photo-realistic image of the predicted result of a dental or other medical treatment.
  • although the steps of the process shown in FIG. 6 refer to the generation of an image of a patient's face and teeth showing the predicted result of an orthodontic or other dental treatment, one of ordinary skill in the art with the benefit of this disclosure would recognize that a similar or equivalent process may be used to generate predicted post-treatment images for other medical or dental treatments as well.
  • In step 600, one or more pre-treatment images of the patient's face and teeth are acquired.
  • these images may be acquired using, for example, methods and apparatus described above with respect to the process of FIG. 1, facilitating their use in the generation of a 3D digital model of the patient's face and teeth.
  • In step 605, a 3D digital model of the patient's pre-treatment face and teeth is generated from the image or images acquired in step 600.
  • this 3D digital model is generated using, for example, the methods and apparatus described above with respect to the process shown in FIG. 1.
  • the pre-treatment 3D digital model is generated using a combination of information derived from the pre-treatment image or images and other information not derived from the images. For example, in some variations missing information may be supplied from a database containing models and information about faces, jaws, tooth arches, and teeth. X-ray or CT data providing bone and tissue information may be used in generating the pre-treatment 3D digital model in some variations.
  • a 3D digital model of the patient's pre-treatment tooth arches may also be used in generating the pre-treatment 3D digital model of the patient's face and teeth.
  • the generation of such 3D digital tooth arch models is described below with respect to step 610.
  • 3D scans of the patient's head, face, jaw, and/or teeth prior to treatment are used in generating the pre-treatment digital model of the face and teeth.
  • information regarding the environment in which the image or images were acquired is collected at the time the images are acquired or extracted from the images so that, for example, color pigment information may be separated from texture, shading, and shadow information.
  • This environment information may be used in subsequent steps (e.g., steps 615 and 620 below).
  • In step 610, 3D digital models of the patient's pre-treatment and predicted post-treatment tooth arches are acquired.
  • These tooth arch models may include, for example, the patient's jaws, teeth, and/or gingiva. In some variations, the 3D digital model of the pre-treatment tooth arches may be generated from images by the methods described above with respect to the process shown in FIG. 1.
  • the 3D digital model of the pre-treatment tooth arches is generated by, for example, scanning and digitizing the patient's teeth in the patient's mouth, scanning and digitizing negative impressions of the patient's tooth arches, scanning and digitizing positive molds of the tooth arches cast from the negative impressions, and/or scanning and digitizing individual physical models of the patient's teeth.
  • the 3D digital model of the predicted post-treatment tooth arches may be generated, in some variations, by modifying a 3D digital model of the pre-treatment arches to represent the expected results of an orthodontic or other dental treatment.
  • Methods and apparatus for generating 3D digital models of pre-treatment and predicted post-treatment tooth arches are disclosed, for example, in U.S. Provisional Application No. 60/676,546 entitled “DIGITIZATION OF DENTAL ARCH MODEL,” filed April 29, 2005.
  • In step 615, a 3D digital model of the patient's predicted post-treatment face and teeth is generated from the 3D digital model of the patient's pre-treatment face and teeth (generated in step 605) and the 3D digital models of the patient's pre-treatment and post-treatment tooth arches (generated in step 610). Texture, environment, shadow, and shading information may also be used in generating the 3D digital models of the patient's predicted post-treatment face and teeth in some variations.
  • the 3D digital model of the patient's predicted post-treatment face and teeth may be partially or entirely generated with methods and algorithms known to one of ordinary skill in the art and conventional in, for example, the movie and gaming industries.
  • Such known methods may include conventional morphing methods which enable smooth transformations of 3D digital (e.g., mesh or voxel) models.
  • Such known methods may also include conventional methods for generating a hierarchical 3D digital face model including teeth, bone, joints, gingiva, muscle, soft tissue, and skin, in which changes in the position or shape of one level of the hierarchy (e.g., teeth or bones) change all dependent levels in the hierarchy (e.g., muscle, soft tissue, and skin).
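The dependency propagation in such a hierarchical model can be sketched as a scene graph in which each node's position is expressed relative to its parent, so moving a tooth automatically carries its dependent tissue with it. This is a translation-only toy with invented offsets; real models use full transforms and deformation:

```python
import numpy as np

class Node:
    """One level of a hierarchical face model (bone, tooth, tissue...).
    Changing a node's local offset moves every dependent child node."""
    def __init__(self, name, offset):
        self.name = name
        self.offset = np.asarray(offset, dtype=float)  # position relative to parent
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def world_positions(self, parent_pos=np.zeros(3)):
        """Accumulate parent offsets to get every node's world position."""
        pos = parent_pos + self.offset
        out = {self.name: pos}
        for c in self.children:
            out.update(c.world_positions(pos))
        return out

# The tooth carries the overlying soft tissue with it.
jaw = Node("jaw", [0.0, 0.0, 0.0])
tooth = jaw.add(Node("tooth", [1.0, 0.0, 0.0]))
tissue = tooth.add(Node("tissue", [0.0, 0.5, 0.0]))

before = jaw.world_positions()
tooth.offset += np.array([0.0, 0.2, 0.0])   # orthodontic movement of the tooth
after = jaw.world_positions()
```

Because the tissue's position is defined relative to the tooth, the tooth's 0.2 unit movement appears unchanged in the tissue's world position as well, which is the essence of hierarchy-driven updates.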
  • the 3D digital model of the patient's predicted post-treatment face and teeth may be partially or entirely generated with commercial software products or with conventional algorithms and methods related to those on which commercial software products are based. Examples of such commercial software products include the Maya® family of integrated 3D modeling, animation, visual effects, and rendering software products available from Alias Systems Corporation.
  • the 3D digital models of the pre-treatment and post-treatment tooth arches can provide information about predicted or projected tooth movement in an anticipated treatment process. This information about tooth movements may be used in step 615 in conjunction with the 3D digital model of the patient's face and teeth generated in step 605 to predict how changes in particular tooth positions result in changes in, for example, the bone structure and/or soft tissue (e.g., gingiva) of the patient's face and jaw, and hence in predicting the overall view of the patient's face and teeth (e.g., projecting a partial or full facial profile during and/or after treatment).
  • the teeth in the pre-treatment and post-treatment 3D digital tooth arch models may be matched with the teeth in the pre-treatment 3D digital model of the patient's face and teeth.
  • the changes in the bones and tissues in the face that occur as a result of the forces applied to them by the teeth as the teeth move from pre-treatment to post-treatment positions may then be simulated, for example, by treating the tissue and bones as an elastic continuum or by using a finite elements analysis.
  • Elastic continuum analyses and finite elements analyses are conventional methods for determining deformations in material resulting from the application of forces. Techniques such as collision computation between the jaw and the facial bones and tissue may also be used to calculate deformations in the face.
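A minimal stand-in for the elastic-continuum idea is a chain of spring-connected nodes relaxed to equilibrium after a tooth is displaced. The geometry and connectivity below are invented, and this Gauss-Seidel relaxation is a toy, not a finite elements implementation:

```python
import numpy as np

def relax_tissue(positions, free_idx, neighbors, iters=500):
    """Toy elastic relaxation: each free tissue node is repeatedly moved
    to the average of its spring neighbors, so a displaced tooth drags
    the surrounding tissue toward a new equilibrium."""
    pos = np.array(positions, dtype=float)
    for _ in range(iters):
        for i in free_idx:
            pos[i] = pos[neighbors[i]].mean(axis=0)
    return pos

# A 1D chain: bone -- tissue -- tissue -- tooth.  The tooth (last node)
# has moved from 3.0 to 6.0 mm; the bone (first node) is fixed; the two
# tissue nodes re-equilibrate between them.
nodes = [[0.0], [1.0], [2.0], [6.0]]
neighbors = {1: [0, 2], 2: [1, 3]}
eq = relax_tissue(nodes, free_idx=[1, 2], neighbors=neighbors)
```

At equilibrium the tissue nodes sit evenly between the fixed bone and the moved tooth, illustrating how tooth movement deforms intermediate tissue; a real analysis would use 3D elements, material properties, and collision handling.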
  • predicted movements in the jaw and/or teeth may result in predicted changes to the 3D digital model of the face and teeth, including the gingiva.
  • the impact of the tooth movements may also be determined and visualized using, for example, 3D morphing of the 3D digital model of the face and teeth.
  • a texture based 3D geometry reconstruction may be implemented in some variations.
  • the face colors/pigments may be determined from the image or images acquired in step 600, for example, and stored as a texture. Since different parts of the facial skin may have different colorations, texture maps may store colors corresponding to each position on the 3D digital model of the face and teeth.
  • In step 620, the 2D and/or 3D digital model of the patient's post-treatment face and teeth is rendered into a photo-realistic image using conventional rendering methods known to one of ordinary skill in the art.
  • the rendering process may utilize environment information such as lighting, shadows, shading, color, and texture collected or extracted from images as described at step 605.
  • the photo-realistic image may then be viewed or printed, for example.
  • FIG. 7 shows a pre-treatment image of teeth
  • FIG. 8 shows an exemplary image, generated according to one variation, of the predicted result of an orthodontic treatment of these teeth.
  • a 3D digital model of the patient's predicted tooth arches at an intermediate step of a treatment process is used rather than the 3D digital model of the predicted post-treatment tooth arches.
  • the process of FIG. 6 may generate a photo-realistic image of the patient's face and teeth at an intermediate stage of treatment.
  • FIG. 9 shows an exemplary process for generating photo-realistic images of predicted intermediate results of a dental or other medical treatment according to some other variations.
  • In step 900, a 3D digital model of the patient's pre-treatment face and teeth is acquired. This may be accomplished by, for example, the methods described with respect to the process of FIG. 1 and/or with respect to step 605 of the process of FIG. 6.
  • In step 905, a 3D digital model of the patient's predicted post-treatment face and teeth is acquired. In some variations this may be accomplished using, for example, the methods described with respect to step 615 of FIG. 6.
  • In step 910, features in the pre-treatment and post-treatment 3D digital models of the face and teeth are mapped onto each other.
  • In step 915, one or more 3D digital models of the patient's face and teeth at intermediate stages of treatment are generated by interpolating between the pre-treatment and post-treatment 3D digital models.
  • the interpolation process may utilize information regarding the locations of the teeth in order to avoid, for example, unphysical interpolations in which teeth collide or pass through one another.
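The interpolation with a collision guard can be sketched as follows. Tooth positions and the gap threshold are hypothetical, and a real system would test mesh intersection rather than point separation:

```python
import numpy as np

def interpolate_stage(pre, post, t):
    """Linear interpolation of tooth positions at treatment fraction t."""
    return {k: (1 - t) * np.asarray(pre[k]) + t * np.asarray(post[k])
            for k in pre}

def min_separation(positions):
    """Smallest pairwise distance between interpolated teeth."""
    pts = list(positions.values())
    return min(np.linalg.norm(np.asarray(a) - np.asarray(b))
               for i, a in enumerate(pts) for b in pts[i + 1:])

def stages_without_collision(pre, post, n_stages, min_gap):
    """Generate intermediate stages, flagging any in which interpolated
    teeth come closer than min_gap (an 'unphysical' interpolation)."""
    stages = []
    for i in range(1, n_stages + 1):
        s = interpolate_stage(pre, post, i / (n_stages + 1))
        stages.append((s, min_separation(s) >= min_gap))
    return stages

# Two teeth that shift right together between pre- and post-treatment.
pre = {"t1": [0.0, 0.0], "t2": [2.0, 0.0]}
post = {"t1": [1.0, 0.0], "t2": [3.0, 0.0]}
stages = stages_without_collision(pre, post, n_stages=3, min_gap=1.0)
```

A stage flagged as colliding would be regenerated along a different path rather than used directly, which is the role of the location information mentioned above.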
  • Steps 910 and 915 may be accomplished in some variations using, for example, morphing methods known to one of ordinary skill in the art and conventional in the movie and gaming industries. Such morphing methods may gradually convert one graphical object into another.
  • the 3D digital models of the patient's face at intermediate stages of treatment may be rendered into photo-realistic images by conventional rendering methods and then viewed or printed.
  • Some variations may utilize commercial software products, or conventional algorithms and methods related to those on which commercial software products are based, to generate the 3D digital models of the patient's face and teeth at intermediate stages of treatment.
  • commercial software products include the Maya® family of integrated 3D modeling, animation, visual effects, and rendering software products available from Alias Systems Corporation.
  • the feature mapping in step 910 includes teeth and/or lips on the initial and final 3D digital models.
  • feature mapping may specify polyhedron faces, edges, or vertices.
  • appropriate voxels are specified.
  • the methods described with respect to FIG. 6 and/or FIG. 9 enable patients, doctors, dentists, and other interested parties to view a photorealistic rendering of the expected appearance of a patient after treatment. In the case of an orthodontic treatment, for example, a patient can view his or her expected post-treatment smile.
  • the methods of FIG. 6 and/or FIG. 9 may be used to simulate the results of other medical or surgical treatments.
  • the post-treatment result of a plastic surgery procedure may be simulated.
  • the final tooth color as well as intermediate stages between the initial and final tooth colors may be simulated.
  • wound healing on the face for example, may be simulated through progressive morphing.
  • a growth model based on a database of prior organ growth information may be used to predict how an organ would be expected to grow and the growth may be visualized using morphing.
  • a hair growth model may be used to show a person his or her expected appearance three to six months from the day of a haircut.
  • the methods and apparatus disclosed herein may be used to perform lip sync.
  • the methods and apparatus disclosed herein may be used to perform face detection. For example, a person can have different facial expressions at different times. Multiple facial expressions may be simulated and compared to a scanned face for face detection.
  • the methods disclosed in this patent may be implemented in hardware or software, or a combination of the two.
  • the methods may be implemented in computer programs executing on programmable computers that each includes a processor, a storage medium readable by the processor (including volatile and nonvolatile memory and/or storage elements), and suitable input and output devices.
  • Program code may be applied to data entered using an input device to perform the functions described and to generate output information.
  • the output data may be processed by one or more output devices for transmission.
  • the computer system includes a CPU, a RAM, a ROM and an I/O controller coupled by a CPU bus.
  • the I/O controller is also coupled by an I/O bus to input devices such as a keyboard and a mouse, and output devices such as a monitor.
  • the I/O controller also drives an I/O interface which in turn controls a removable disk drive such as a floppy disk, among others.
  • each program is implemented in a high level procedural or object-oriented programming language to communicate with a computer system.
  • the programs may be implemented in assembly or machine language, if desired. In either case, the language may be a compiled or interpreted language.
  • each such computer program may be stored on a storage medium or device (e.g., CD-ROM, hard disk or magnetic diskette) that is readable by a general or special purpose programmable computer for configuring and operating the computer when the storage medium or device is read by the computer to perform the procedures described.
  • the system may be implemented as a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner.
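As a minimal sketch of the interpolation described in steps 910 and 915 above, intermediate-stage models may be produced by linearly interpolating each mapped vertex between its pre-treatment and post-treatment positions. The function name and flat vertex lists below are illustrative assumptions, not a prescribed implementation:

```python
# Illustrative morphing sketch (hypothetical names): interpolate each
# mapped vertex from its pre-treatment position toward its
# post-treatment position.

def morph_vertices(pre, post, t):
    """t=0 gives the pre-treatment model, t=1 the post-treatment model,
    and 0 < t < 1 an intermediate stage of treatment."""
    if len(pre) != len(post):
        raise ValueError("feature mapping must pair every vertex")
    return [
        tuple(p + t * (q - p) for p, q in zip(v_pre, v_post))
        for v_pre, v_post in zip(pre, post)
    ]

# Two mapped vertices (x, y, z) of a tooth surface:
pre_model = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
post_model = [(2.0, 0.0, 0.0), (3.0, 2.0, 0.0)]

# Halfway stage of treatment:
mid = morph_vertices(pre_model, post_model, 0.5)
print(mid)  # [(1.0, 0.0, 0.0), (2.0, 1.0, 0.0)]
```

A practical morph would also enforce the collision constraints noted above, rejecting interpolations in which tooth surfaces intersect.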

Abstract

Examples and variations of methods and apparatus for using 3D digital models for treatment planning or other visualization applications are disclosed. In one aspect, two-dimensional images are utilized in the course of or as an aid to dental or other medical treatment. In one variation, methods and apparatus are disclosed for using photographic images to generate three dimensional (3D) digital models that may be used, for example, for dental or medical treatments. In another aspect, methods and apparatus are disclosed for tracking tooth or tooth model movements during a dental treatment. In yet another aspect, methods and apparatus are disclosed for generating 3D digital models and/or images of predicted final or intermediate results of dental or other medical treatments.

Description

IMAGE BASED ORTHODONTIC TREATMENT METHODS
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application is a continuation in part of U.S. Patent Application No.
11/013,146, entitled "IMAGE BASED ORTHODONTIC TREATMENT VIEWING SYSTEM," filed December 14, 2004, U.S. Patent Application No. 11/013,147, entitled "TOOTH MOVEMENT TRACKING SYSTEM," filed December 14, 2004, and U.S. Patent Application No. 11/013,153, entitled "IMAGE BASED DENTITION RECORD DIGITIZATION," filed December 14, 2004. Each of these U.S. Patent Applications is incorporated herein by reference in its entirety for all purposes.
BACKGROUND OF THE INVENTION
[0002] Orthodontics is the practice of manipulating a patient's teeth to provide better function and appearance. To achieve tooth movement, orthodontists utilize their expertise to first determine a three-dimensional mental image of the patient's physical orthodontic structure and a three-dimensional mental image of a desired physical orthodontic structure for the patient, which may be assisted through the use of X-rays and/or models. Typically, based on these mental images the orthodontist designs and implements a treatment.
[0003] Unfortunately, in the oral environment, it is difficult for a human being to accurately develop a visual three-dimensional image of an orthodontic structure due to the limitations of human sight and the physical structure of a human mouth. Hence, in the practice of orthodontics there is a need for improved methods for creating 3D models of the patient's orthodontic structures, and for measuring and tracking changes in those structures during treatment. In addition, the practice of orthodontics, as well as other dental and medical practices, would benefit from methods that would enable practitioners and patients to view photo-realistic images of the predicted intermediate and final stages of treatment before embarking on the course of treatment.
[0004] Over the years, various methods and devices have been developed to assist dentists with delivery of orthodontic treatments. Examples of these methods and devices are disclosed in U.S. Patent No. 6,699,037 B2 titled "METHOD AND SYSTEM FOR INCREMENTALLY MOVING TEETH" issued to Chishti et al., dated March 2, 2004; U.S. Patent No. 6,682,346 B2 titled "DEFINING TOOTH-MOVING APPLIANCES COMPUTATIONALLY" issued to Chishti et al., dated January 27, 2004; U.S. Patent No. 6,471,511 titled "DEFINING TOOTH-MOVING APPLIANCES COMPUTATIONALLY" issued to Chishti et al., dated October 29, 2002; U.S. Patent No. 5,645,421 titled "ORTHODONTIC APPLIANCE DEBONDER" issued to Slootsky, dated July 8, 1997; U.S. Patent No. 5,618,176 titled "ORTHODONTIC BRACKET AND LIGATURE AND METHOD OF LIGATING ARCHWIRE TO BRACKET" issued to Andreiko et al., dated April 8, 1997; U.S. Patent No. 5,607,305 titled "PROCESS AND DEVICE FOR PRODUCTION OF THREE-DIMENSIONAL DENTAL BODIES" issued to Andersson et al., dated March 4, 1997; U.S. Patent No. 5,605,459 titled "METHOD OF AND APPARATUS FOR MAKING A DENTAL SET-UP MODEL" issued to Kuroda et al., dated February 25, 1997; U.S. Patent No. 5,587,912 titled "COMPUTER AIDED PROCESSING OF THREE-DIMENSIONAL OBJECT AND APPARATUS THEREFOR" issued to Andersson et al., dated December 24, 1996; U.S. Patent No. 5,549,476 titled "METHOD FOR MAKING DENTAL RESTORATIONS AND THE DENTAL RESTORATION MADE THEREBY" issued to Stern, dated August 27, 1996; U.S. Patent No. 5,533,895 titled "ORTHODONTIC APPLIANCE AND GROUP STANDARDIZED BRACKETS THEREFOR AND METHODS OF MAKING, ASSEMBLING AND USING APPLIANCE TO STRAIGHTEN TEETH" issued to Andreiko et al., dated July 9, 1996; U.S. Patent No. 5,518,397 titled "METHOD OF FORMING AN ORTHODONTIC BRACE" issued to Andreiko et al., dated May 21, 1996; U.S. Patent No. 5,474,448 titled "LOW PROFILE ORTHODONTIC APPLIANCE" issued to Andreiko et al., dated December 12, 1995; U.S. Patent No. 
5,454,717 titled "CUSTOM ORTHODONTIC BRACKETS AND BRACKET FORMING METHOD AND APPARATUS" issued to Andreiko et al., dated October 3, 1995; U.S. Patent No. 5,452,219 titled "METHOD OF MAKING A TOOTH MOLD" issued to Dehoff et al., dated September 19, 1995; U.S. Patent No. 5,447,432 titled "CUSTOM ORTHODONTIC ARCHWIRE FORMING METHOD AND APPARATUS" issued to Andreiko et al., dated September 5, 1995; U.S. Patent No. 5,431,562 titled "METHOD AND APPARATUS FOR DESIGNING AND FORMING A CUSTOM ORTHODONTIC APPLIANCE AND FOR STRAIGHTENING OF TEETH THEREWITH" issued to Andreiko et al., dated July 11, 1995; U.S. Patent No. 5,395,238 titled "METHOD OF FORMING ORTHODONTIC BRACE" issued to Andreiko et al., dated March 7, 1995; U.S. Patent No. 5,382,164 titled "METHOD FOR MAKING DENTAL RESTORATIONS AND THE DENTAL RESTORATIONS MADE THEREBY" issued to Stern, dated January 17, 1995; U.S. Patent No. 5,368,478 titled "METHOD FOR FORMING JIGS FOR CUSTOM PLACEMENT OF ORTHODONTIC APPLIANCES ON TEETH" issued to Andreiko et al., dated November 29, 1994; U.S. Patent No. 5,342,202 titled "METHOD FOR MODELING CRANIO-FACIAL ARCHITECTURE" issued to Deshayes, dated August 30, 1994; U.S. Patent No. 5,340,309 titled "APPARATUS AND METHOD FOR RECORDING JAW MOTION" issued to Robertson, dated August 23, 1994; U.S. Patent No. 5,338,198 titled "DENTAL MODELING SIMULATOR" issued to Wu et al., dated August 16, 1994; U.S. Patent No. 5,273,429 titled "METHOD AND APPARATUS FOR MODELING A DENTAL PROSTHESIS" issued to Rekow et al., dated December 28, 1993; U.S. Patent No. 5,186,623 titled "ORTHODONTIC FINISHING POSITIONER AND METHOD OF CONSTRUCTION" issued to Breads et al., dated February 16, 1993; U.S. Patent No. 5,139,419 titled "METHOD OF FORMING AN ORTHODONTIC BRACE" issued to Andreiko et al., dated August 18, 1992; U.S. Patent No. 5,059,118 titled "ORTHODONTIC FINISHING POSITIONER AND METHOD OF CONSTRUCTION" issued to Breads et al., dated October 22, 1991; U.S. Patent No. 
5,055,039 titled "ORTHODONTIC POSITIONER AND METHODS OF MAKING AND USING SAME" issued to Abbatte et al., dated October 8, 1991; U.S. Patent No. 5,035,613 titled "ORTHODONTIC FINISHING POSITIONER AND METHOD OF CONSTRUCTION" issued to Breads et al., dated July 30, 1991; U.S. Patent No. 5,011,405 titled "METHOD FOR DETERMINING ORTHODONTIC BRACKET PLACEMENT" issued to Lemchen, dated April 30, 1991; U.S. Patent No. 4,936,862 titled "METHOD OF DESIGNING AND MANUFACTURING A HUMAN JOINT PROSTHESIS" issued to Walker et al., dated June 26, 1990; U.S. Patent No. 4,856,991 titled "ORTHODONTIC FINISHING POSITIONER AND METHOD OF CONSTRUCTION" issued to Breads et al., dated August 15, 1989; U.S. Patent No. 4,798,534 titled "METHOD OF MAKING A DENTAL APPLIANCE" issued to Breads, dated January 17, 1989; U.S. Patent No. 4,755,139 titled "ORTHODONTIC ANCHOR APPLIANCE AND METHOD FOR TEETH POSITIONING AND METHOD OF CONSTRUCTING THE APPLIANCE" issued to Abbatte et al., dated July 5, 1988; U.S. Patent No. 3,860,803 titled "AUTOMATIC METHOD AND APPARATUS FOR FABRICATING PROGRESSIVE DIES" issued to Levine, dated January 14, 1975; U.S. Patent No. 3,660,900 titled "METHOD AND APPARATUS FOR IMPROVED ORTHODONTIC BRACKET AND ARCH WIRE TECHNIQUE" issued to Andrews, dated May 9, 1972; each of which is incorporated herein by reference in its entirety for all purposes.
[0005] The practice of orthodontics and other dental treatments can benefit from a computer model that is representative of the position of the teeth in a tooth arch. The computer model may be prepared based on an impression model taken from the patient and may be utilized to assist the dentist in planning an orthodontic treatment by providing visual feedback of possible treatment steps in a particular treatment regimen.
[0006] Examples of devices and methods for producing three dimensional computer models are disclosed in U.S. Patent No. 6,359,680 entitled "THREE-DIMENSIONAL OBJECT MEASUREMENT PROCESS AND DEVICE" issued to Rubbert, dated March 19, 2002; U.S. Patent No. 6,415,051 entitled "GENERATING 3D MODELS USING A MANUALLY OPERATED STRUCTURED LIGHT SOURCE" issued to Callari et al., dated July 2, 2002; U.S. Patent No. 6,512,994 entitled "METHOD AND APPARATUS FOR PRODUCING A THREE-DIMENSIONAL DIGITAL MODEL OF AN ORTHODONTIC PATIENT" issued to Sachdeva, dated January 28, 2003; and U.S. Patent No. 6,563,499 entitled "METHOD AND APPARATUS FOR GENERATING A 3D REGION FROM A SURROUNDING IMAGERY" issued to Waupotitsch et al., dated May 13, 2003; each of which is incorporated herein by reference in its entirety for all purposes.
[0007] Examples of morphing and warping methods for transforming 3D computer models are disclosed in U.S. Patent No. 6,268,846 entitled "3D GRAPHICS BASED ON IMAGES AND MORPHING" issued to Georgiev, dated July 31, 2001; and U.S. Patent No. 6,573,889 entitled "ANALYTIC WARPING" issued to Georgiev, dated June 3, 2003; both of which are incorporated herein by reference in their entirety for all purposes. In addition, warping methods are disclosed in George Wolberg, Digital Image Warping, IEEE Computer Society Press (1990), which is incorporated herein by reference in its entirety for all purposes.
[0008] Examples of methods and devices for tracking the positions of objects are disclosed in U.S. Patent No. 6,820,025 entitled "METHOD AND APPARATUS FOR MOTION TRACKING OF AN ARTICULATED RIGID BODY" issued to Bachman et al., dated November 16, 2004 and incorporated herein by reference in its entirety for all purposes.
BRIEF SUMMARY OF THE INVENTION
[0009] Examples and variations of methods and apparatus for using photographic images in the course of or as an aid to dental or other medical treatments are disclosed. In one aspect, methods and apparatus are disclosed for using photographic images to generate three dimensional (3D) digital models that may be used, for example, for dental or medical treatments. In some variations, 3D digital models of a patient's face, smile, jaw, tooth arches, individual teeth, and/or gingiva may be generated from information derived, at least in part, from images. Three dimensional digital models of other body parts or structures may also be generated from one or more images by variations of the methods and apparatus disclosed herein. In some variations, information not derived from the images is also used to generate the 3D digital model. Physical models of body parts or structures such as, for example, physical models of tooth arches and individual physical tooth models may also be digitized by variations of the methods and apparatus disclosed herein.
[0010] In another aspect, methods and apparatus are disclosed for tracking tooth or tooth model movements during, for example, a dental treatment. In some variations, the positions of a patient's teeth or the positions of physical tooth models may be derived from one or more images of the teeth or physical tooth models. In some variations, for example, a 3D digital model of a first arrangement of teeth or physical tooth models is acquired by, for example, generating it from one or more images of the arrangement of teeth or physical tooth models. The positions of the teeth or physical tooth models in a second arrangement are determined from one or more images of the second arrangement. The 3D digital model is then modified to reflect the positions of the teeth or physical tooth models in the second arrangement. Hence, the 3D digital model may be used in some variations to track the positions of the teeth or physical tooth models. In some variations, the modified 3D digital model may be used in fabricating a physical dental model and/or a dental appliance such as, for example, a dental aligner for rendering corrective teeth movement.
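One way to picture this tracking approach is as a rigid motion (rotation plus translation), recovered from images of the second arrangement, applied to each tooth in the 3D digital model. The sketch below assumes a simple rotation about the z axis; all function and variable names are hypothetical:

```python
# Illustrative sketch: update a tooth's model vertices with a tracked
# rigid motion (rotation about z, then translation).
import math

def rotate_z(point, angle_deg):
    """Rotate a point about the z axis by angle_deg degrees."""
    a = math.radians(angle_deg)
    x, y, z = point
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a),
            z)

def move_tooth(vertices, angle_deg, translation):
    """Apply one tooth's tracked rigid motion to its model vertices."""
    return [tuple(c + t for c, t in zip(rotate_z(v, angle_deg), translation))
            for v in vertices]

tooth = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
# Tracked motion: 90 degrees about z, then 1 mm along x:
print(move_tooth(tooth, 90.0, (1.0, 0.0, 0.0)))
```

Per-tooth transforms of this kind could then be composed into the modified 3D digital model used for fabricating a physical dental model or aligner.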
In another aspect, methods and apparatus are disclosed for generating 3D digital models and/or images of predicted final or intermediate results of dental or other medical treatments. In some variations, one or more images are acquired of a patient's face and teeth prior to an orthodontic treatment. A 3D digital model of the patient's pre-treatment face and teeth is then generated from information derived from these images and, in some variations, from other information as well. Pre-treatment and predicted post-treatment three dimensional digital models of the patient's jaw and/or teeth are acquired and used in combination with the 3D digital model of the pre-treatment face to generate a 3D digital model of the patient's post-treatment face and teeth. This post-treatment model may be rendered into a photo-realistic image of the predicted result of the treatment. Some variations of the methods disclosed herein may be used to generate 3D digital models or images of predicted final or intermediate results of other dental and medical treatments such as, for example, of plastic surgery. Also, in some variations, one or more 3D digital models of a patient at intermediate stages of a treatment are generated. These intermediate stage models may be generated, for example, by morphing a 3D digital pre-treatment model into a 3D digital post-treatment model.
[0011] These and other embodiments, features and advantages of the present invention will become more apparent to those skilled in the art when taken with reference to the following more detailed description of the invention in conjunction with the accompanying drawings that are first briefly described.
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] FIG. 1 shows an exemplary process for generating a 3D digital model from one or more images according to one variation.
[0013] FIG. 2 shows a tooth comprising a plurality of registration marks easily distinguishable in an image according to one variation.
[0014] FIG. 3 shows an exemplary multiple camera set up for acquiring images from which to generate a 3D digital model according to one variation.
[0015] FIG. 4 shows an exemplary process for determining and tracking tooth or tooth model movements according to some variations.
[0016] FIG. 5 A shows another exemplary process for determining and tracking tooth or tooth model movements according to some variations.
[0017] FIG. 5B shows an exemplary process for modifying a 3D digital model to represent a changed arrangement of teeth or physical tooth models according to one variation.
[0018] FIG. 5C shows an exemplary process for modifying a 3D digital model to represent a changed arrangement of teeth or physical tooth models according to another variation.
[0019] FIG. 6 shows an exemplary process for generating a photo-realistic image of the predicted result of a dental or other medical treatment according to some variations.
[0020] FIG. 7 shows an exemplary pre-treatment image of teeth.
[0021] FIG. 8 shows an exemplary image of the predicted result of an orthodontic treatment of the teeth of FIG. 7 generated according to one variation.
[0022] FIG. 9 shows an exemplary process for generating photo-realistic images of predicted intermediate results of a dental or other medical treatment according to some variations.
DETAILED DESCRIPTION OF THE INVENTION
[0023] The following detailed description should be read with reference to the drawings, in which identical reference numbers refer to like elements throughout the different figures. The drawings, which are not necessarily to scale, depict selective embodiments and are not intended to limit the scope of the invention. The detailed description illustrates by way of example, not by way of limitation, the principles of the invention. This description will clearly enable one skilled in the art to make and use the invention, and describes several embodiments, adaptations, variations, alternatives and uses of the invention, including what is presently believed to be the best mode of carrying out the invention.
[0024] Before describing the present invention, it is to be understood that unless otherwise indicated this invention need not be limited to applications in orthodontic treatments. As one of ordinary skill in the art having the benefit of this disclosure would appreciate, variations of the invention may be utilized in various other medical and dental applications, such as fabrication and/or treatment planning for dental crowns, dental bridges, and aligners. Three- dimensional digital models and methods for their generation described herein may also be modified to support research and/or teaching applications. Moreover, it should be understood that variations of the present invention may be applied in combination with various medical and dental diagnostic and treatment devices.
[0025] It must also be noted that, as used in this specification and the appended claims, the singular forms "a," "an" and "the" include plural referents unless the context clearly indicates otherwise. Thus, for example, the term "a tooth" is intended to mean a single tooth or a combination of teeth. Furthermore, as used herein, "generating", "creating", and "formulating" a digital representation or digital model mean the process of utilizing computer calculation to create a numeric representation of one or more objects. For example, the digital representation or digital model may comprise a file saved on a computer, wherein the file includes numbers that represent a three-dimensional projection of a tooth arch. In another variation, the digital representation comprises a data set including parameters that can be utilized by a computer program to recreate a digital model of the desired objects. Also, as used herein, "photographic image" and "image" refer to images acquired and/or stored electronically as well as to images acquired and/or stored on film. Hence, as used herein, "photographic images" and "images" may be acquired and stored by either digital or analog processes.
[0026] Examples and variations of methods and apparatus for using photographic images in the course of or as an aid to dental or other medical treatment are disclosed herein. The Detailed Description of the Invention is separated into three sections to streamline the organization and provide clarity. One of ordinary skill in the art having the benefit of this disclosure would appreciate that methods and systems described in the different sections can be implemented with each other in various combinations. The combinations may not be limited to elements disclosed in one specific section.
[0027] The first section discloses methods and apparatus for using two-dimensional images (e.g., digital photographic images) to generate three dimensional (3D) digital models that may be used, for example, for dental or medical treatments. Such treatments include, but are not limited to, the fabrication of dental models and dental appliances such as aligners for use in orthodontic treatment. Such 3D digital models include, but are not limited to, digital models of a patient's face, smile, jaw, tooth arches, individual teeth, and gingiva. In addition, such 3D digital models include, but are not limited to, digital models of physical models of a patient's dental arches or individual teeth. Such 3D digital models may be generated, as described below, from images including but not limited to images of a patient's face, smile, mouth, jaws, and teeth, and images of physical models of a patient's dental arches or individual teeth. The first section also discloses variations and examples of methods and apparatus in which 3D or other information may be determined from images and used, for example, for dental or medical treatment without necessarily generating a 3D digital model from the image.
[0028] The second section discloses examples and variations of methods and apparatus for using two-dimensional images (e.g., digital photographic images) to track tooth movement during a dental treatment. These methods may enable, for example, tracking of tooth movements in a patient's mouth during the course of an orthodontic treatment and tracking of the movements of physical tooth models in a physical model of a patient's tooth arch as the physical model of the tooth arch is manipulated to simulate or plan a course of orthodontic treatment. Such tracking may be based on images including, but not limited to, images of teeth in a patient's mouth or of physical tooth models in a physical dental arch model. The second section also discloses methods for tracking tooth movements that do not use photographic images.
[0029] The third section discloses examples and variations of methods and apparatus for using two-dimensional images (e.g., digital photographic images) to generate a 3D digital model and/or an image of the predicted result of a dental or other medical treatment. In some variations, information from one or more 3D digital models is combined with an image showing a current dental or medical condition to generate an image of the projected result of treatment of the condition. For example, in some variations an image of a patient's face and smile may be combined with information from 3D digital models of the patient's current dental arches and information from 3D digital models of the patient's projected post-treatment dental arches to generate a photo-realistic image of the patient's face and smile after orthodontic treatment. The third section also discloses examples and variations of methods and apparatus for generating photo-realistic images and/or 3D digital models representing intermediate stages of treatment. Such methods may include, for example, morphing a pre-treatment image and/or 3D digital model into the predicted post-treatment image and/or 3D digital model.
Generation of 3D Digital Dental/Medical Models from Images
[0030] FIG. 1 shows an exemplary process for capturing 3D dental or other medical data and/or generating 3D digital models using one or more photographic images according to one variation. A common problem in deriving a 3D model of an object from one or more images of the object is to find the projective geometric relationship between object points and image points. This may be conventionally accomplished by determining a mathematical model of the camera that describes how the camera forms an image, i.e., how points in 3D space are projected onto an image sensor that results in the images. Such models generally include parameters characterizing the optical properties of the camera.
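A minimal such camera model is the pinhole projection, in which an object point is mapped onto the image plane through the focal point. The sketch below assumes camera-centered coordinates with z along the optical axis and ignores lens distortion; names are illustrative:

```python
# Illustrative pinhole projection: object point (x, y, z) in camera
# coordinates maps to image coordinates (u, v).

def project(point_3d, focal_length):
    """Project a 3D object point onto the image plane of an ideal
    pinhole camera with the given focal length."""
    x, y, z = point_3d
    if z <= 0:
        raise ValueError("point must lie in front of the camera")
    u = focal_length * x / z
    v = focal_length * y / z
    return (u, v)

# A point 100 mm in front of a camera with a 50 mm focal length:
print(project((10.0, 20.0, 100.0), 50.0))  # (5.0, 10.0)
```

Recovering 3D structure amounts to inverting this mapping using images from two or more known camera poses.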
[0031] Accordingly, in step 100, internal geometries such as, for example, focal length, focal point, and lens shape are characterized for each camera to be used in the process. Generally, the camera lens will distort the rays coming from the object to the recording medium. In order to reconstruct the rays properly, the internal features and geometry of the camera should be specified so that corrections to the images gathered can be applied to account for distortions of the image. Information about the internal geometries of the camera collected in step 100 may be used for making adjustments to the image data to correct for such distortions.
[0032] In addition, in step 105, each camera is calibrated by using it to acquire images of one or more objects having precisely known shapes and dimensions. Any distortions observed in the images may be used to determine optical properties of the camera. In some variations environmental conditions such as lighting, for example, may also be determined from these images. In some variations lighting conditions may also be determined from known positions of lights, and/or lighting from many angles may be used so that there are no shadows.
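As one illustrative calibration computation, imaging an object of precisely known size at a known distance allows an effective focal length to be recovered by inverting the pinhole relation. The names and numbers below are hypothetical:

```python
# Illustrative calibration step: recover focal length from the known
# size and distance of a calibration object and its measured image size.

def focal_from_known_object(object_size, object_distance, image_size):
    """Invert the pinhole relation image_size = f * object_size / distance."""
    return image_size * object_distance / object_size

# A 30 mm calibration bar at 300 mm appears 5 mm wide on the sensor:
print(focal_from_known_object(30.0, 300.0, 5.0))  # 50.0
```

Full calibration procedures additionally fit lens distortion parameters from the deviations between observed and predicted image points.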
[0033] The projective relationship between object points and image points may be determined from the information collected in steps 100 and 105 by conventional methods and using conventional algorithms known to one of ordinary skill in the art. Examples of such methods and algorithms are described, for example, in U.S. Patent No. 6,415,051 entitled "GENERATING 3D MODELS USING A MANUALLY OPERATED STRUCTURED LIGHT SOURCE" issued to Callari et al., dated July 2, 2002 and U.S. Patent No. 6,563,499 entitled "METHOD AND APPARATUS FOR GENERATING A 3D REGION FROM A SURROUNDING IMAGERY" issued to Waupotitsch et al., dated May 13, 2003. Given the relationship between object and image points, a coordinate system may be established for the generation of three dimensional digital models from images. Also, a distortion-corrected image may be generated.
[0034] The resolution of a 3D digital model of an object generated from one or more images depends on the ease with which features on the object may be distinguished in the image. This depends on the resolution of the image, which is determined by the camera, and on the size, shape, and other characteristics of the features to be distinguished in the image. In some variations, in optional step 110 easily distinguishable registration marks or features are added to or identified on the object. Such registration features may include but are not limited to, for example, sparkles (e.g., reflectors) and features or marks of known and easy to distinguish shape and color. The use of such registration mark enhancement may relax the resolution required of the cameras and images to produce a 3D digital model of a given resolution. In one variation, a sufficient number of registration marks (e.g., three or more if they are point-like) are used to define a coordinate system on the object to be imaged. 
This may allow the position and orientation of the object to be easily determined from the image even if the image is not of particularly high resolution. Registration marks or sparkles may also be used to identify areas or features of interest in the object to be imaged.
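The coordinate system defined by three point-like registration marks can be pictured as follows: one mark serves as the origin, a second fixes one axis, and the third completes an orthonormal frame via cross products. This sketch assumes non-collinear marks and illustrative coordinates:

```python
# Illustrative sketch: build an object coordinate frame from three
# point-like registration marks (hypothetical helper names).

def subtract(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return tuple(x / n for x in v)

def frame_from_marks(m0, m1, m2):
    """Return (origin, x-axis, y-axis, z-axis) from three non-collinear
    registration marks."""
    x_axis = normalize(subtract(m1, m0))
    z_axis = normalize(cross(x_axis, subtract(m2, m0)))
    y_axis = cross(z_axis, x_axis)
    return m0, x_axis, y_axis, z_axis

origin, x_ax, y_ax, z_ax = frame_from_marks(
    (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0))
print(z_ax)  # (0.0, 0.0, 1.0)
```

Tracking this frame across images then yields the object's position and orientation even from modest-resolution photographs.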
[0035] As an example, where the objects to be imaged and modeled include one or more teeth or tooth models, registration features may include but are not limited to points marked on the cusps of the teeth, points marked on the facial axis of the clinical crown (FACC), and points marked on the gingiva line. Such registration features may enable subsequent identification of these features and separation in the image and the 3D digital model of the gingiva from the teeth. In addition, sparkles or other features may be attached to or placed on the teeth or tooth models. Registration marks may also be painted onto teeth or tooth models, for example. Some registration marks may fluoresce or phosphoresce under ultraviolet light illumination. Referring to FIG. 2, for example, in one variation a tooth or tooth model 200 comprises a plurality of registration marks 205 that are easily distinguishable in a photographic image and hence allow the 3D position and orientation of tooth or tooth model 200 to be determined (by methods described below) with high resolution from a photographic image (e.g., an image captured by a CCD digital camera).
[0036] In some variations, registration marks such as sparkles, for example, may be attached to a patient's tooth or a tooth model by methods including but not limited to attachment by adhesives and attachment by a wire, bracket, or band attached to the tooth or tooth model. In some variations in which teeth in a patient's mouth are imaged for generation of a 3D digital model, registration marks are identified or placed on surfaces of the teeth that face the inside of the patient's mouth and hence are not readily seen by casual observers.
[0037] In some variations, registration marks are formed on an object such as a tooth model, for example, by laser marking. In laser marking of a tooth model, a minute amount of material on the surface of the tooth model is removed and colored. This removal is not visible after the tooth model has been enameled. In this process a spot shaped indentation is produced on the surface of the material. A variation of laser marking is center marking. In center marking a spot shaped indentation is produced on the surface of the object. Center marking can be circular center marking or dot point marking.
[0038] In step 115, one or more images are acquired of the object for which a 3D digital model is to be generated. In one variation, a single stationary camera acquires one or more images. In another variation, multiple stationary cameras acquire one or more images from a variety of angles. Partial object occlusion may be reduced as additional images are acquired from additional angles. In addition, acquiring multiple images with multiple cameras may allow calibration of the cameras from images of the objects to be digitized rather than in a separate and prior step. In another variation, one or more moving cameras each acquire a plurality of images from a variety of angles. Very high resolution 3D digital models may be generated where many pictures of a small area are acquired from various angles. In another variation, images are acquired by multiple stationary and moving cameras. The positions of the camera or cameras at the time the images are acquired may be known (by measurement, for example) or later derived from the images by conventional methods known to one of ordinary skill in the art.
[0039] FIG. 3 shows an exemplary set-up including multiple cameras according to one variation. Cameras 300 and 305 are positioned to acquire images of tooth 310 (including registration marks 315) from different angles indicated by light rays 320 and 325. In one variation, cameras 300 and 305 may be conventional digital cameras or digital video cameras. In another variation, cameras 300 and 305 may be conventional film cameras or video cameras which generate images that may be subsequently digitized. Cameras 300 and 305 may be stationary or moving with respect to tooth 310.
[0040] In variations using two or more cameras, as in FIG. 3, the cameras may acquire simultaneous images and thus prevent relative motion of the object with respect to the cameras between images. This simplifies determination of 3D information from the images and may be particularly useful, for example, where the object or objects imaged are teeth or other features on or in a patient who might otherwise move during any interval between images.
[0041] In step 120, a 3D digital model may be generated from the images acquired in step 115 and the information characterizing and calibrating the cameras acquired in steps 100 and 105. One of ordinary skill in the art having the benefit of this disclosure would appreciate that this 3D digital model can be generated using conventional methods and conventional algorithms known to one of ordinary skill in the art. For example, some variations may utilize commercial software products such as, for example, PhotoModeler available from Eos Systems Inc. In some variations, in step 120 3D information such as, for example, the relative positions of objects in the images may be determined from the images without constructing a 3D digital model of the objects, and no 3D digital model is generated. This may also be accomplished by one of ordinary skill in the art having the benefit of this disclosure by using conventional methods and algorithms.
[0042] In some variations conventional triangulation algorithms may be used to compute the 3D digital model for the object. This may be done by intersecting the rays with high precision and accounting for the camera internal geometries. The result is the coordinate of the desired point. In some variations the identified structures may be used to generate 3D digital models that can be viewed and/or manipulated using conventional 3D CAD tools.
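The ray-intersection step above can be sketched as follows; a minimal example, assuming each calibrated camera contributes a ray (its center plus a direction toward the identified structure), and taking the midpoint of the shortest segment between two such rays as the computed coordinate. The function name and interface are illustrative, not from the source:

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Closest-point triangulation of two camera rays.

    o1, o2: ray origins (camera centers); d1, d2: ray direction vectors.
    Returns the midpoint of the shortest segment between the two rays.
    """
    o1, d1, o2, d2 = map(np.asarray, (o1, d1, o2, d2))
    # Solve for scalars t1, t2 minimizing |(o1 + t1*d1) - (o2 + t2*d2)|.
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = o1 - o2
    denom = a * c - b * b          # zero only when the rays are parallel
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    p1 = o1 + t1 * d1              # closest point on ray 1
    p2 = o2 + t2 * d2              # closest point on ray 2
    return 0.5 * (p1 + p2)
```

With more than two cameras, the same least-squares idea extends to intersecting all rays for a point at once.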
[0043] In one variation, a 3D digital model in the form of a triangular surface mesh is generated. In another variation, the model is in voxels and a marching cubes algorithm may be applied to convert the voxels into a mesh, which can undergo a smoothing operation to reduce the jaggedness on the surfaces of the 3D model caused by the marching cubes conversion. For example, one smoothing operation moves individual triangle vertices to positions representing the averages of connected neighborhood vertices to reduce the angles between triangles in the mesh. Some variations include the optional step of applying a decimation operation to the smoothed mesh to eliminate data points, which improves processing speed.
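The vertex-averaging smoothing operation described above can be sketched as a basic Laplacian smoothing pass; a hypothetical minimal version, representing the mesh as a vertex array and a triangle index array (the representation and function name are assumptions, not from the source):

```python
import numpy as np

def laplacian_smooth(vertices, triangles, iterations=1):
    """Move each vertex to the average of its connected neighbor vertices.

    vertices: (n, 3) float array; triangles: (m, 3) int array of vertex indices.
    """
    v = np.asarray(vertices, dtype=float).copy()
    # Build each vertex's neighborhood from the triangle edges.
    neighbors = [set() for _ in range(len(v))]
    for i, j, k in np.asarray(triangles):
        neighbors[i].update((j, k))
        neighbors[j].update((i, k))
        neighbors[k].update((i, j))
    for _ in range(iterations):
        v = np.array([v[list(nbrs)].mean(axis=0) if nbrs else v[i]
                      for i, nbrs in enumerate(neighbors)])
    return v
```

Each pass pulls vertices toward their neighborhood averages, reducing the angles between adjacent triangles at the cost of some shrinkage; production code would typically constrain boundary vertices.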
[0044] In some variations, after the smoothing and decimation operations have been performed an error value is calculated based on the differences between the resulting mesh and the original mesh or the original data, and the error is compared to an acceptable threshold value. The smoothing and decimation operations may be applied to the mesh once again if the error does not exceed the acceptable value. The last set of mesh data that satisfies the threshold may be stored as the 3D model.
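The accept/retry control loop in this paragraph can be sketched generically; a hypothetical example using a point set, a naive keep-every-other-point stand-in for decimation, and worst-case nearest-point distance as the error measure (the source leaves the concrete operators and error metric open):

```python
import numpy as np

def error_to_original(reduced, original):
    """Max distance from each original point to its nearest kept point."""
    d = np.linalg.norm(original[:, None, :] - reduced[None, :, :], axis=2)
    return d.min(axis=1).max()

def simplify_with_threshold(points, threshold, min_points=4):
    """Repeat decimation while the error stays within the threshold;
    return the last result that still satisfied it."""
    original = np.asarray(points, dtype=float)
    accepted = original
    candidate = original
    while len(candidate) > min_points:
        candidate = candidate[::2]                       # naive decimation stand-in
        if error_to_original(candidate, original) > threshold:
            break                                        # error too large: stop
        accepted = candidate                             # still acceptable: keep it
    return accepted
```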
[0045] In variations in which the 3D digital model is in the form of a triangular surface mesh, the triangles form a connected graph. In this context, two nodes in a graph are connected if there is a sequence of edges that forms a path from one node to the other (ignoring the direction of the edges). Thus defined, connectivity is an equivalence relation on a graph: if triangle A is connected to triangle B and triangle B is connected to triangle C, then triangle A is connected to triangle C. A set of connected nodes is then called a patch. A graph is fully connected if it consists of a single patch.
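The patch decomposition defined above can be computed with a union-find pass over the triangles; a minimal sketch, assuming (as one reasonable reading) that triangles sharing a vertex count as connected:

```python
def triangle_patches(triangles):
    """Group triangles into connected patches (shared-vertex connectivity)."""
    parent = list(range(len(triangles)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Union every triangle with the first triangle seen at each shared vertex.
    first_tri_with_vertex = {}
    for t, tri in enumerate(triangles):
        for v in tri:
            if v in first_tri_with_vertex:
                union(t, first_tri_with_vertex[v])
            else:
                first_tri_with_vertex[v] = t

    patches = {}
    for t in range(len(triangles)):
        patches.setdefault(find(t), []).append(t)
    return list(patches.values())
```

A fully connected mesh yields a single patch; isolated fragments show up as additional patches.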
[0046] In some variations, a 3D digital model in the form of a mesh may be simplified by removing unwanted or unnecessary sections of the model to increase data processing speed and enhance the visual display. For example, in a variation in which a 3D digital model of a tooth arch is generated to be used in creation of a tooth repositioning appliance, unnecessary sections of the 3D digital model may include those not needed for creation of the appliance. The removal of these unwanted sections reduces the complexity and size of the digital data set, thus accelerating manipulations of the data set and other operations. In one variation, all triangles within a box including an unwanted section are deleted and all triangles that cross the border of the box are clipped. This requires generating new vertices on the border of the box. The holes created in the model at the faces of the box are re-triangulated and closed using the newly created vertices. The resulting mesh may be viewed and/or manipulated using a number of conventional CAD tools.
[0047] In some variations, 3D digital models of a patient's teeth, gingiva, jaw, and/or face are generated from one or more images. In some variations the teeth, gingiva, jaw, and/or face are separately modeled; in other variations a single 3D digital model includes some or all of these objects. One variation generates 3D digital models from images acquired directly of these objects. Another variation uses images of a negative impression of the patient's dental arch, images of a positive dental arch mold cast from the negative impression, and/or images of tooth models such as, for example, tooth models separated from a positive dental arch mold.

[0048] In some variations images are used to measure the position, orientation, and/or size of a patient's teeth, gingiva, jaw, and/or face. For example, in one variation individual physical tooth models are separated from a positive mold cast from a negative impression of the patient's tooth arch. A 3D digital model of each of the patient's teeth is generated from one or more images of each physical tooth model. An image of the patient's tooth arch, of a negative impression of the tooth arch, or of a positive mold cast from the negative impression is used to determine the position and orientation of each tooth relative to the others in the patient's jaw or tooth arch. A 3D digital or physical model of the patient's jaw or tooth arch may then be constructed from the 3D digital or physical tooth models.
[0049] In some variations, one or more images are acquired to determine the relative positions of a patient's upper and lower jaws and thus determine the type of malocclusion suffered by the patient. An appropriate treatment may then be prescribed. In some variations, 3D digital models of the upper and lower teeth and jaws are generated from the images to enable determination of the malocclusion. In other variations, the required 3D information for diagnosing the malocclusion is determined from the image or images without generation of such 3D digital models.
[0050] In some variations, tooth or gingival features are recognized from images of a patient's teeth or tooth arch. As an example, cusps on molar teeth may be recognized. These and other recognizable tooth features may be used to identify each tooth in a 3D digital or physical model of the patient's tooth arch. Registration marks such as gingival lines, for example, may be used to identify various parts of the gingiva. This may enable separation of the gingiva from the rest of a 3D digital jaw model.
[0051] In other variations, images of a patient's tooth or tooth arch are used to identify and separate a 3D digital model of the tooth from a 3D digital model of the patient's jaw or tooth arch. This may be accomplished, for example, by recognizing gingival lines or inter-proximal areas of the teeth. Registration marks may be used to identify the inter-proximal areas and the gingival lines.
[0052] In some variations, a 3D digital model of an object is generated using a combination of 3D information derived from one or more images of the object and other 3D information not derived from the images. Such a 3D digital model may be generated by one of ordinary skill in the art having the benefit of this disclosure by using conventional methods and algorithms. For example, gaps in 3D models of faces, jaws, tooth arches, and/or teeth derived from images can be filled in with information from a database containing models and information about faces, jaws, tooth arches, and teeth. Such a facial/orthodontic database of prior knowledge may be used, for example, to fill in missing pieces such as muscle structure in a model. Such a database can also be used for filling in other missing data with good estimates of what the missing part should look like.
[0053] In another example, separate 3D digital models of a patient's face and jaw may be generated from images and then combined to form a 3D digital model of the face and internals of the head by using information from a facial/orthodontic database to fill in missing pieces. The resulting 3D digital model may be a hierarchical model of the head, face, jaw, gingiva, teeth, bones, muscles, and facial tissues.
[0054] In yet another example, a 3D digital model of a patient's face is generated from 1) images of the patient's head/face, 2) images of the patient's jaw and teeth, 3) X-rays providing bone and tissue information, and 4) information about the environment in which the images were acquired so that color pigment information may be separated from shading and shadow information. The environmental information may be generated, for example, by positioning lights with known coordinates when the images are acquired. Alternatively, lighting from many angles can be used so that there are no shadows and lighting can be incorporated into the 3D digital model. This data may be combined to create a complete 3D digital model of the patient's face using the patient's 3D geometry, texture, and environment shading and shadows. The 3D digital model may be a true hierarchical model with bone, teeth, gingiva, joint information, muscles, soft tissue, and skin. Missing data such as internal muscle may be added using prior knowledge of facial models.
[0055] In some variations, a 3D digital model of a patient's tooth arch generated by the methods described above is used in the fabrication of dental appliances or physical dental models for use in a dental treatment. For example, the 3D digital model of the patient's tooth arch may be used to fabricate one or more dental aligners using computer numerical control (CNC) based manufacturing techniques. In another variation, the 3D digital model of the patient's tooth arch may be used to fabricate a physical model of the tooth arch by CNC based manufacturing. The physical model of the tooth arch may then be used in the fabrication of a dental aligner. Suitable methods for fabricating dental aligners and physical models of tooth arches by CNC based manufacturing are disclosed, for example, in U.S. Patent Application No. 10/979,823 entitled "METHOD AND APPARATUS FOR MANUFACTURING AND CONSTRUCTING A PHYSICAL DENTAL ARCH MODEL," filed November 2, 2004, and U.S. Patent Application No. 10/979,497 entitled "METHOD AND APPARATUS FOR MANUFACTURING AND CONSTRUCTING A DENTAL ALIGNER," filed November 2, 2004, both of which are incorporated herein by reference in their entirety for all purposes.
[0056] In another variation, the 3D digital model may be used to guide the arrangement of physical tooth models on a base or in a wax set-up, for example, to produce a physical model of the patient's tooth arch which may then be used in the fabrication of a dental aligner. Such use of a 3D digital model of a tooth arch is disclosed, for example, in U.S. Provisional Application No. 60/676,546 entitled "DIGITIZATION OF DENTAL ARCH MODEL," filed April 29, 2005, incorporated herein by reference in its entirety for all purposes.
Tracking Tooth Movements
[0057] FIG. 4 shows an exemplary process for determining and tracking tooth or tooth model movements. First, in step 400, one or more features are identified on or added to the tooth or tooth model. In some variations, these features may be selected to be easily distinguishable in photographic images. In such variations, any of the features identified above with respect to step 110 of the process of FIG. 1 as possible registration features that are suitable for use with teeth or tooth models may be used here as well. In some variations a sufficient number of registration marks (e.g., three or more if they are point-like) are used to define a coordinate system on the tooth or tooth model and hence represent the position and orientation of the tooth or tooth model.
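Defining a coordinate system from three point-like registration marks, as described above, can be sketched as follows; a hypothetical construction that takes the first mark as the origin and builds orthonormal axes from the other two (the marks must be non-collinear):

```python
import numpy as np

def frame_from_marks(p0, p1, p2):
    """Build an orthonormal coordinate frame (origin, 3x3 axes) from
    three non-collinear registration marks."""
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    x = p1 - p0
    x /= np.linalg.norm(x)                 # first axis: toward the second mark
    z = np.cross(p1 - p0, p2 - p0)
    z /= np.linalg.norm(z)                 # normal to the plane of the marks
    y = np.cross(z, x)                     # completes a right-handed frame
    return p0, np.stack([x, y, z])         # rows are the axis directions
```

Tracking how this frame moves between images then represents the position and orientation of the tooth or tooth model.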
[0058] Next, in step 405, the positions of the features on the tooth or tooth model are detected when the tooth or tooth model is in a first position. In step 410, the positions of the features on the tooth or tooth model are detected when the tooth or tooth model is in a second position. In step 415, the difference between the first position of each feature and the second position of each feature is determined. The tooth or tooth model's change in position can then be determined from these differences.

[0059] The process illustrated in FIG. 4 may be applied, for example, to one or more teeth in a patient's mouth, to tooth impressions in negative impressions of the patient's tooth arch, to teeth in positive molds made from the negative impressions, and to individual physical tooth models arranged on a base or in a wax set-up in two or more different arrangements.
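The change in position and orientation can be recovered from the feature differences by a standard least-squares rigid fit; a sketch using the Kabsch algorithm (not named in the source), assuming the correspondence between features in the two positions is known:

```python
import numpy as np

def rigid_fit(first, second):
    """Rotation R and translation t mapping `first` onto `second`
    (Kabsch algorithm; feature correspondences assumed known)."""
    a = np.asarray(first, dtype=float)
    b = np.asarray(second, dtype=float)
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    h = (a - ca).T @ (b - cb)               # cross-covariance of centered sets
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = cb - r @ ca
    return r, t
```

The returned rotation and translation together describe the tooth's movement between the two detections.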
[0060] In some variations, the positions of, for example, one or more of a patient's teeth or the positions of one or more physical tooth models in an arrangement of physical tooth models may be determined from photographic images of the teeth or tooth models. The positions of a patient's teeth may also be determined from one or more images of a negative impression of the patient's tooth arch, of a positive tooth arch mold of the negative impression, or of a wax bite. For example, one or more images are acquired of the patient's teeth, an arrangement of physical tooth models, a negative impression, a positive tooth arch mold, or a wax bite and then the positions of the teeth or physical tooth models are determined from these images using, for example, the methods described above with respect to FIG. 1. This process may be repeated to determine, for example, new positions of the teeth at a later stage of treatment or to determine new positions of physical tooth models in a different arrangement.
[0061] In one variation, the images of the teeth, physical tooth models, negative impression, positive tooth arch mold, or wax bite may be acquired by one or more stationary cameras, one or more moving cameras, or a combination of stationary or moving cameras. The cameras may be, for example, conventional digital cameras, digital video cameras, film cameras, or video cameras in some variations. In addition, a single image need not include all of the teeth for which positions are to be determined.
[0062] In another variation, the resolution of the images of the teeth, physical tooth models, negative impression, positive tooth arch mold, or wax bite is sufficiently high that the positions of the teeth or physical tooth models may be determined from the images with sufficient precision without the addition of registration marks to, for example, the teeth or physical tooth models. In some of these variations, the determination of 3D positional information from the images may utilize naturally occurring tooth or gingival features that are easily distinguishable in the images such as those discussed above with respect to step 110 of the process of FIG. 1. The process of FIG. 4 may be applied, in some variations, to these naturally occurring features to determine the movements of the teeth between different stages of treatment, for example, or of physical tooth models between different arrangements. Also, in some variations in which registration marks are not added to the teeth, the determination of 3D information from the images may utilize chamfer matching as discussed below.
[0063] In yet another variation, registration features that are easily distinguishable in images may be added to, for example, the teeth or physical tooth models. The addition of such registration features may reduce the resolution required of the images to determine the positions of the teeth with a particular precision. In some variations, a sufficient number of registration marks are added to each tooth to define a coordinate system on the tooth and hence represent its position. Any of the registration features discussed above with respect to step 110 of the process of FIG. 1 that are suitable for use with teeth or physical tooth models may be used. In some variations, the process of FIG. 4 may be applied to these registration features to determine tooth or physical tooth model movements.
[0064] In yet another variation, a 3D digital model of the patient's tooth arch is generated from one or more images of the teeth, a negative impression of the tooth arch, a positive tooth arch mold, or a wax bite by the methods described with respect to FIG. 1. Similarly, in some variations a 3D digital model of an arrangement of physical tooth models on a base or in a wax set-up, for example, is determined from one or more images of the arrangement. Such 3D digital models include the positions of the teeth or physical tooth models and hence may be used to track those positions.
[0065] FIG. 5A shows another exemplary method for tracking the movements of teeth or physical tooth models in some variations. In step 500, a 3D digital model of a first arrangement of the teeth or physical tooth models is acquired. In some variations, this 3D digital model is generated from one or more images of the first arrangement by the methods described above. In other variations, the 3D digital model may be acquired, for example, by methods described below or by other methods known to one of ordinary skill in the art for digitizing physical objects.
[0066] In step 505, the positions of the teeth or physical tooth models in a second arrangement are acquired. In some variations these positions are acquired from one or more images of the arrangement by the methods described above. In other variations these positions may be acquired, for example, by other methods described below including methods not requiring the use of images.

[0067] In step 510, the 3D digital model is modified to represent the positions of the teeth or physical tooth models in the second arrangement. In some variations, steps 505 and 510 are accomplished by superimposing a projection of the 3D digital model of the first arrangement onto an image of the second arrangement (FIG. 5B). In other variations, steps 505 and 510 are accomplished by superimposing the 3D digital model of the first arrangement onto a 3D digital model of the second arrangement derived from images of the second arrangement (FIG. 5C).
[0068] Referring to FIG. 5B, in one variation in step 530 an image of the second arrangement is acquired. In step 535, a distortion-corrected image of the second arrangement is generated from the image acquired in step 530. The original image may be acquired, and the distortion-corrected image generated, by the methods described above with respect to FIG. 1, for example. In step 540, a static reference point in the 3D digital model is selected. This static reference point, which may also be referred to as an anchor point, is a point in the 3D digital model that has not substantially moved between the first and second arrangements. The static reference point may be selected, for example, by identifying teeth or tooth models that have not substantially moved, by identifying portions of the gingiva that have not substantially moved, or by determining the center of mass of the first arrangement. In step 545 a static subset of the 3D digital model is identified. This static subset is a portion of the 3D digital model that has not substantially moved between the two arrangements. Steps 540 and 545 may occur together.
[0069] In step 550 the static subset of the 3D digital model of the first arrangement is projected onto the distortion-corrected image of the second arrangement. The projection is then rotated, translated, and otherwise transformed to substantially superimpose the projection on a portion of the second arrangement in the distortion-corrected image. The transformation required at this step provides any information required to modify the static subset of the 3D digital model to represent a portion of the second arrangement. In step 555, the transformation required to superimpose a projection of the non-static portions of the 3D digital model onto the distortion-corrected image of the second arrangement is determined. This transformation provides the additional information required to modify the 3D digital model to represent the second arrangement.
[0070] Steps 540-555 may be applied in an iterative approach in which a static reference point and a static subset of the 3D digital model are selected, transformations for the non-static portions are determined, a new static reference point and a new static subset are selected, and transformations for the newly designated non-static portions are determined.
[0071] In the process of FIG. 5B, the ease with which a static reference point and a static subset of the 3D digital model may be identified, and the accuracy with which the modified 3D digital model represents the second arrangement, increase as the differences between the two arrangements decrease. Consequently, it may be advantageous to update the 3D digital model at intervals over which changes in the arrangement of teeth or tooth models are relatively small. Also, the accuracy with which the modified 3D digital model represents the second arrangement may be improved by applying the process of FIG. 5B to multiple images of the second arrangement taken from different angles.
[0072] In another variation, steps 505 and 510 in FIG. 5A are accomplished by the process shown in FIG. 5C. In step 570, multiple images of the second arrangement are acquired. In step 575, a 3D digital model of the second arrangement is generated from the images acquired in step 570. These images may be acquired, and the 3D digital model of the second arrangement may be generated, by the methods described above with respect to FIG. 1. In step 580, the transformation that superimposes the 3D digital model of the first arrangement onto the 3D digital model of the second arrangement is determined. This transformation provides the information required to modify the 3D digital model of the first arrangement to represent the second arrangement.
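The superimposing transformation of step 580 can be estimated without known point correspondences by an iterative closest point (ICP) scheme; a rough sketch (ICP is not named in the source), assuming the two models are available as point sets and their initial misalignment is small:

```python
import numpy as np

def icp_superimpose(source, target, iterations=20):
    """Iterative-closest-point sketch: find R, t superimposing `source`
    (first arrangement) onto `target` (second arrangement)."""
    src = np.asarray(source, dtype=float).copy()
    tgt = np.asarray(target, dtype=float)
    r_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        # Pair each source point with its nearest target point.
        d = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=2)
        matched = tgt[d.argmin(axis=1)]
        # Best rigid fit for the current pairing (Kabsch step).
        cs, cm = src.mean(axis=0), matched.mean(axis=0)
        u, _, vt = np.linalg.svd((src - cs).T @ (matched - cm))
        sign = np.sign(np.linalg.det(vt.T @ u.T))
        r = vt.T @ np.diag([1.0, 1.0, sign]) @ u.T
        t = cm - r @ cs
        src = src @ r.T + t
        # Accumulate the total transformation applied so far.
        r_total, t_total = r @ r_total, r @ t_total + t
    return r_total, t_total
```

The accumulated rotation and translation are exactly the information needed to modify the first model to represent the second arrangement.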
[0073] The 3D digital model of the second arrangement generated in step 575 from images should include sufficient information to enable accurate modification of the 3D digital model of the first arrangement to represent the second arrangement. The 3D digital model generated in step 575 from images need not necessarily be as detailed or include as much information as the 3D digital model of the first arrangement, however.
[0074] Referring now to FIGS. 5A, 5B, and 5C, in some variations the two arrangements of teeth or physical tooth models represent different stages of an orthodontic treatment process. For example, in one variation the initial 3D digital model represents the arrangement of the patient's teeth at an earlier stage of treatment, and the modified 3D digital model represents the patient's current arrangement (the second arrangement) of teeth. In another variation, a first arrangement of physical tooth models represents an actual or predicted arrangement of teeth during treatment, and a second arrangement of physical tooth models represents a desired arrangement of teeth at a later stage of treatment. In the latter variation, the modified 3D digital model represents a desired tooth arrangement and may be used in the fabrication of dental appliances such as dental aligners, for example, or physical dental models for use in a treatment plan designed to achieve that tooth arrangement.
[0075] Additional examples in which the positions of or changes in the positions of teeth or physical tooth models are determined and/or tracked are described next. One variation uses chamfer matching. In this variation, the system looks for a specific object in a binary image including objects of various shapes, positions, and orientations. Matching is a central problem in image analysis and pattern recognition. Chamfer matching is an edge matching technique in which the edge points of one image are transformed by a set of parametric transformation equations to edge points of a similar image that is slightly different. In this embodiment, digital pictures of the jaw are acquired from different angles (such as seven angles for each stage of treatment, for example). Those pictures are acquired at a plurality of different resolutions such as, for example, four resolutions. In one variation, a hierarchical method for computing the analysis compares all the pictures of one stage with all the pictures of the other stage. The chamfer matching operation then determines the total amount of movement of the teeth per stage. The movement of each individual tooth can then be used to calculate the information required for aligner fabrication.
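Chamfer matching as described can be sketched in two dimensions as a search over candidate transformations, each scored by the distance from one image's edge points to the nearest edge points of the other; a minimal illustrative example restricted to integer translations (a real system would also search rotations and precompute a distance transform for speed):

```python
import numpy as np

def chamfer_score(edge_points_a, edge_points_b, shift):
    """Mean distance from shifted edge points of image B to the nearest
    edge point of image A (lower = better match)."""
    a = np.asarray(edge_points_a, dtype=float)
    b = np.asarray(edge_points_b, dtype=float) + np.asarray(shift, dtype=float)
    d = np.linalg.norm(b[:, None, :] - a[None, :, :], axis=2)
    return d.min(axis=1).mean()

def best_shift(edge_points_a, edge_points_b, search_radius=5):
    """Exhaustive search over integer shifts for the best chamfer match."""
    shifts = [(dx, dy)
              for dx in range(-search_radius, search_radius + 1)
              for dy in range(-search_radius, search_radius + 1)]
    return min(shifts, key=lambda s: chamfer_score(edge_points_a,
                                                   edge_points_b, s))
```

The winning transformation between pictures of two stages is what gives the per-stage tooth movement described above.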
[0076] In another variation that uses 'laser marking', a minute amount of material on the surface of the tooth model is removed and colored. This removal is not visible after the object has been enameled. In this process a spot shaped indentation is produced on the surface of the material. Another method of laser marking is called 'Center Marking'. In this process a spot shaped indentation is produced on the surface of the object. Center marking can be 'circular center marking' or 'dot point marking'.
[0077] In one variation using laser marking, small features are marked on the crown surface of the tooth model. After that, the teeth are moved, and the images of each individual tooth are superimposed on one another to determine the tooth movement. A wax setup is done and then the system marks one or more points using a laser. Pictures of the jaw are acquired from different angles. After that, the next stage is produced and the same procedure is repeated. Pictures of stages x and x+1 are overlaid. The change in the laser points reflects the exact amount of tooth movement.
[0078] In another variation called sparkling, sparkles or reflective markers are placed on the body or object to be motion tracked. The sparkles or reflective objects can be placed on the body/object to be motion tracked in a strategic or organized manner so that reference points can be created from the original model to the models of the later stages. In this variation, a wax setup is done and the teeth models are marked with sparkles. Alternatively, the system marks or paints the surface of the crown model with sparkles. Pictures of the jaw are acquired from different angles. Computer software processes and saves those pictures. After that, the teeth models are moved. Each individual tooth is superimposed on its counterpart and tooth movement can be determined. Then the next stage is performed, and the same procedure is repeated.
[0079] In another variation, a mechanical based system is used to measure the position of features on teeth or tooth models. First, the model of the jaw is placed in a container. A user takes a stylus and places the tip on different points on the tooth. The points touched by the stylus tip are selected in advance. The user then tells the computer to calculate the value of the point. The value is then preserved in the system. The user takes another point until all points have been digitized. Typically, two points on each tooth are captured. However, depending on need, the number of points to be taken on each tooth can be increased. The points on all teeth are registered in computer software. Based on these points the system determines the differences between planned versus actual teeth position for aligner fabrication. These points are taken at each individual stage. In this way, this procedure can also be used to calculate the motion/movement of the tooth per stage.
[0080] For example, a mechanical based system for 3D digitization, such as the MicroScribe from Immersion Corporation or the PHANTOM from SensAble Technologies, can be used. In some variations, the 3D digitizer implements counterbalanced mechanical arms (with a number of mechanical joints with digital optical sensors inside) that are equipped with precision bearings for smooth, effortless manipulation. The end segment is a pen-like device called a stylus which can be used to touch any point in 3D space. Accurate 3D position information on where the probe touches is calculated by reading each joint decoder's information; 3D angular information can also be provided at an extra cost. In order to achieve true 6-degree-of-freedom information, an extra decoder can be added for reading pen self-rotation information. Some additional sensors can be placed at the tip of the pen, so the computer can read how hard the user is pressing the pen. A special mechanical device can be added to give force feedback to the user.
[0081] Immersion Corporation's MicroScribe uses a pointed stylus attached to a CMM-type device to produce an accuracy of about 0.01 inch. It is a precision portable digitizing arm with a hand-held probe, used at a workstation or mounted on a tripod or similar fixture for field use or a manufacturing environment. The MicroScribe digitizer is based on optical angle encoders at each of the five arm joints, an embedded processor, a USB port, and a software application interface for the host computer. The user selects points of interest or sketches curves on the surface of an object with the hand-held probe tip and foot switch. Angle information from the MicroScribe arm is sent to the host computer through a USB or serial port. The MicroScribe utility software (MUS), a software application interface, calculates the Cartesian XYZ coordinates of the acquired points and the coordinates are directly inserted into keystroke functions in the user's active Windows application. The user's design and modeling application functions are used to connect the 3D points as curves and objects to create surfaces and solids integrated into an overall design.
[0082] In another variation, 3D motion tracking/capture is based on an optical or magnetic system. These require placing markers at specific points on the teeth and digitally recording the movements of the actual teeth so their movements can be played back with computer animation. The computer uses software to post-process this mass of data and determine the exact movement of the teeth, as inferred from the 3D position of each tooth marker at each moment.
[0083] In other variations, magnetic motion capture systems utilize sensors placed on the teeth or physical tooth models to measure the low-frequency magnetic field generated by a transmitter source. The sensors and source are cabled to an electronic control unit that correlates their reported locations within the field. The electronic control units are networked with a host computer that uses a software driver to represent these positions and rotations in 3D space. The sensors report position and rotational information. In this variation, sensors are applied to each individual tooth or tooth model. In some variations, three sensors are used: one on the buccal side, one on the lingual side, and one on the occlusal side. The number of sensors can be increased depending on the case.

[0084] In some variations using magnetic motion capture systems, the jaw is placed in a housing or cabin. The sensors are attached to the teeth/jaw at predetermined points and connected to an electronic system with the help of cables. The electronic system is in turn connected to a computer. The movement of the teeth at each stage is calculated by these sensors. The computer manipulates the coordinates and gives the proper values which are then used to perform the required procedures for aligner fabrication, among others.
[0085] In other variations, wireless sensors which operate at different frequencies can also be used. The movements are once again captured by electronics attached to the computer. With the help of the sensors, positional values are determined for aligner fabrication and other procedures that need to be performed.
[0086] In other variations, optical motion capture systems may be used. There are two main technologies used in optical motion capture: reflective and pulsed-LED (light-emitting diode). Optical motion capture systems utilize proprietary video cameras to track the motion of reflective markers (or pulsed LEDs) attached to an object. Reflective optical motion capture systems use infra-red (IR) LEDs mounted around the camera lens, along with IR pass filters placed over the camera lens. Optical motion capture systems based on pulsed LEDs measure the infra-red light emitted by the LEDs rather than light reflected from markers. The centers of the marker images are matched from the various camera views using triangulation to compute their frame-to-frame positions in 3D space. A studio enclosure houses a plurality of video cameras (such as seven, for example) attached to a computer. Dental impressions are placed inside the studio. Each of the teeth has a plurality of reflective markers attached. For example, markers can be placed on the buccal side, the lingual side, and the occlusal side. More markers can be deployed if required. Infra-red (IR) LEDs are mounted around the camera lens, along with IR pass filters placed over the lens. Light emitted from the LEDs is reflected by the markers. The coordinates are captured and matched across the, e.g., seven different camera views to ultimately obtain the position data for aligner making and other computations.
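The triangulation step — combining a marker's image centers from multiple camera views into one 3D position — can be sketched with the standard linear (direct linear transform) method for two views. The toy projection matrices and marker position below are illustrative, not data from any actual capture rig:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one marker from two camera views.

    P1, P2 are 3x4 projection matrices; x1, x2 are the marker image
    centers (in normalized image coordinates) in each view.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize

# Two toy cameras looking down z, the second shifted along x (illustrative).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, 0.3, 2.0])                     # marker to recover
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]  # project into view 1
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]  # project into view 2

X = triangulate(P1, P2, x1, x2)
# X recovers the original 3D marker position
```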
[0087] In another variation a wax setup operation is done freehand without the help of any mechanical or electronic systems. Tooth movement is determined manually with scales and/or rulers and these measurements are entered into the system. [0088] Some variations use a wax setup in which the tooth abutments are placed in a base which has wax in it. One variation uses robots and clamps to set the teeth at each stage. Another variation uses a clamping base plate, i.e., a plate on which teeth can be attached at specific positions. Teeth are set up at each stage using this process. Measurement tools such as the MicroScribe are used to obtain the tooth movements, which can be used later by the universal joint device to specify the position of the teeth.
[0089] In another variation, the FACC lines are marked on the teeth or tooth models.
Movement is determined by a non-mechanical method or by a laser pointer. The distance and angle of the FACC line reflect the difference between the initial position and the next position on which the FACC line lies.
[0090] In some variations, tooth or tooth model movements are checked in real time. The tooth models are placed in a container attached to motion sensors. These sensors track the motion of the tooth models in real time. The motion can be performed freehand or with a suitably controlled robot. Stage x and stage x+1 pictures are overlaid, and the change in the points reflects the exact amount of movement.
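The overlay comparison of stage x and stage x+1 can be sketched as a per-point displacement computation, assuming the tracked points have already been registered to a common static reference frame. The coordinates here are hypothetical:

```python
import numpy as np

# Hypothetical tracked point coordinates for one tooth at stage x and x+1,
# already expressed in a common static reference frame.
stage_x  = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
stage_x1 = stage_x + np.array([0.25, 0.0, 0.10])  # translation applied for the demo

def movement(points_a, points_b):
    """Overlay the two stages: per-point displacement and mean magnitude."""
    delta = points_b - points_a
    return delta, float(np.linalg.norm(delta, axis=1).mean())

delta, mean_move = movement(stage_x, stage_x1)
# every point moved by (0.25, 0, 0.10); mean_move = sqrt(0.25**2 + 0.10**2)
```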
[0091] A number of methods and systems for tracking the positions of objects are disclosed in U.S. Patent No. 6,820,025 entitled "METHOD AND APPARATUS FOR MOTION TRACKING OF AN ARTICULATED RIGID BODY" issued to Bachman et al., dated November 16, 2004. One of ordinary skill in the art having the benefit of the present disclosure may find some of the methods and systems disclosed in U.S. Patent No. 6,820,025 suitable for incorporation into the process described here.
[0092] One of ordinary skill in the art having the benefit of this disclosure would appreciate that the systems and methods disclosed herein can be utilized to track the orientation of teeth as well as other articulated physical bodies including, but not limited to, prosthetic devices, robot arms, moving automated systems, and living body parts or tissue components.
Generating a Predicted Post-Treatment 3D Digital Model and/or Image
[0093] FIG. 6 shows an exemplary process for generating a photo-realistic image of the predicted result of a dental or other medical treatment. Although the steps of the process shown in FIG. 6 refer to the generation of an image of a patient's face and teeth showing the predicted result of an orthodontic or other dental treatment, one of ordinary skill in the art with the benefit of this disclosure would recognize that a similar or equivalent process may be used to generate predicted post-treatment images for other medical or dental treatments as well.
[0094] Referring to FIG. 6, in step 600 one or more pre-treatment images of the patient's face and teeth are acquired. In some variations, these images may be acquired using, for example, methods and apparatus described above with respect to the process of FIG. 1, facilitating their use in the generation of a 3D digital model of the patient's face and teeth.
[0095] In step 605, a 3D digital model of the patient's pre-treatment face and teeth is generated from the image or images acquired in step 600. In some variations, this 3D digital model is generated using, for example, the methods and apparatus described above with respect to the process shown in FIG. 1. In some variations the pre-treatment 3D digital model is generated using a combination of information derived from the pre-treatment image or images and other information not derived from the images. For example, in some variations missing information may be supplied from a database containing models and information about faces, jaws, tooth arches, and teeth. X-ray or CT data providing bone and tissue information may be used in generating the pre-treatment 3D digital model in some variations. Also, in some variations a 3D digital model of the patient's pre-treatment tooth arches may also be used in generating the pre-treatment 3D digital model of the patient's face and teeth. The generation of such 3D digital tooth arch models is described below with respect to step 610. In some variations, 3D scans of the patient's head, face, jaw, and/or teeth prior to treatment are used in generating the pre-treatment digital model of the face and teeth.
[0096] In some variations information regarding the environment in which the image or images were acquired is collected at the time the images are acquired, or extracted from the images, so that, for example, color pigment information may be separated from texture, shading, and shadow information. This environment information may be used in subsequent steps (e.g., steps 615 and 620 below).
[0097] In step 610, 3D digital models of the patient's pre-treatment and predicted post-treatment tooth arches are acquired. These tooth arch models may include, for example, the patient's jaws, teeth, and/or gingiva. In some variations, the 3D digital model of the pre-treatment tooth arches may be generated from images by the methods described above with respect to the process shown in FIG. 1. In some other variations, the 3D digital model of the pre-treatment tooth arches is generated by, for example, scanning and digitizing the patient's teeth in the patient's mouth, scanning and digitizing negative impressions of the patient's tooth arches, scanning and digitizing positive molds of the tooth arches cast from the negative impressions, and/or scanning and digitizing individual physical models of the patient's teeth.
[0098] The 3D digital model of the predicted post-treatment tooth arches may be generated, in some variations, by modifying a 3D digital model of the pre-treatment arches to represent the expected results of an orthodontic or other dental treatment. Methods and apparatus for generating 3D digital models of pre-treatment and predicted post-treatment tooth arches are disclosed, for example, in U.S. Provisional Application No. 60/676,546 entitled "DIGITIZATION OF DENTAL ARCH MODEL," filed April 29, 2005.
[0099] In step 615 a 3D digital model of the patient's predicted post-treatment face and teeth is generated from the 3D digital model of the patient's pre-treatment face and teeth (generated in step 605) and the 3D digital models of the patient's pre-treatment and post-treatment tooth arches (generated in step 610). Texture, environment, shadow, and shading information may also be used in generating the 3D digital models of the patient's predicted post-treatment face and teeth in some variations.
[0100] In step 615 the 3D digital model of the patient's predicted post-treatment face and teeth may be partially or entirely generated with methods and algorithms known to one of ordinary skill in the art and conventional in, for example, the movie and gaming industries. Such known methods may include conventional morphing methods which enable smooth transformations of 3D digital (e.g., mesh or voxel) models. Such known methods may also include conventional methods for generating a hierarchical 3D digital face model including teeth, bone, joints, gingiva, muscle, soft tissue, and skin, in which changes in the position or shape of one level of the hierarchy (e.g., teeth or bones) change all dependent levels in the hierarchy (e.g., muscle, soft tissue, and skin). Some conventional morphing methods are disclosed, for example, in U.S. Patent No. 6,268,846 entitled "3D GRAPHICS BASED ON IMAGES AND MORPHING" issued to Georgiev, dated July 31, 2001, and U.S. Patent No. 6,573,889 entitled "ANALYTIC WARPING" issued to Georgiev, dated June 3, 2003. [0101] In some variations, the 3D digital model of the patient's predicted post-treatment face and teeth may be partially or entirely generated with commercial software products or with conventional algorithms and methods related to those on which commercial software products are based. Examples of such commercial software products include the Maya® family of integrated 3D modeling, animation, visual effects, and rendering software products available from Alias Systems Corporation.
[0102] The 3D digital models of the pre-treatment and post-treatment tooth arches can provide information about predicted or projected tooth movement in an anticipated treatment process. This information about tooth movements may be used in step 615 in conjunction with the 3D digital model of the patient's face and teeth generated in step 605 to predict how changes in particular tooth positions result in changes in, for example, the bone structure and/or soft tissue (e.g., gingiva) of the patient's face and jaw, and hence in predicting the overall view of the patient's face and teeth (e.g., projecting a partial or full facial profile during and/or after treatment). For example, the teeth in the pre-treatment and post-treatment 3D digital tooth arch models may be matched with the teeth in the pre-treatment 3D digital model of the patient's face and teeth. The changes in the bones and tissues in the face that occur as a result of the forces applied to them by the teeth as the teeth move from pre-treatment to post-treatment positions may then be simulated, for example, by treating the tissue and bones as an elastic continuum or by using a finite element analysis. Elastic continuum analyses and finite element analyses are conventional methods for determining deformations in a material resulting from the application of forces. Techniques such as collision computation between the jaw and the facial bones and tissue may also be used to calculate deformations in the face. In this manner, predicted movements in the jaw and/or teeth may result in predicted changes to the 3D digital model of the face and teeth, including the gingiva. The impact of the tooth movements may also be determined and visualized using, for example, 3D morphing of the 3D digital model of the face and teeth.
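Matching a tooth in the arch model to the same tooth in the face-and-teeth model amounts to finding the rigid transformation between corresponding landmark points. One standard way to compute it (an illustrative choice, not necessarily the method contemplated here) is the Kabsch least-squares alignment; the landmark points and the rigid motion to be recovered are toy data:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation R and translation t mapping point set P onto Q.

    P and Q are (n, 3) arrays of corresponding landmark points.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)              # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])             # guard against a reflection solution
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Toy tooth landmarks and a known rigid motion to recover (illustrative data).
P = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
angle = np.deg2rad(10.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.],
                   [np.sin(angle),  np.cos(angle), 0.],
                   [0., 0., 1.]])
t_true = np.array([0.3, -0.2, 0.1])
Q = P @ R_true.T + t_true

R, t = kabsch(P, Q)
# R, t recover the applied rotation and translation
```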
[0103] A texture-based 3D geometry reconstruction may be implemented in some variations. The face colors/pigments may be determined from the image or images acquired in step 600, for example, and stored as a texture. Since different parts of the facial skin may have different colorations, texture maps may store colors corresponding to each position on the 3D digital model of the face and teeth. [0104] Next, in step 620, the 2D and/or 3D digital model of the patient's post-treatment face and teeth is rendered into a photo-realistic image using conventional rendering methods known to one of ordinary skill in the art. In some variations the rendering process may utilize environment information such as lighting, shadows, shading, color, and texture collected or extracted from images as described at step 605. The photo-realistic image may then be viewed or printed, for example.
[0105] As an example, FIG. 7 shows a pre-treatment image of teeth and FIG. 8 shows an exemplary image, generated according to one variation, of the predicted result of an orthodontic treatment of these teeth.
[0106] Referring again to FIG. 6, in some variations at step 610 a 3D digital model of the patient's predicted tooth arches at an intermediate step of a treatment process is used rather than the 3D digital model of the predicted post-treatment tooth arches. In such variations, the process of FIG. 6 may generate a photo-realistic image of the patient's face and teeth at an intermediate stage of treatment.
[0107] Three-dimensional morphing is utilized in some variations to connect the initial and modified geometry of the patient's face and teeth to show gradual changes in the model of the face and teeth. FIG. 9 shows an exemplary process for generating photo-realistic images of predicted intermediate results of a dental or other medical treatment according to some other variations. In step 900, a 3D digital model of the patient's pre-treatment face and teeth is acquired. This may be accomplished by, for example, the methods described with respect to the process of FIG. 1 and/or with respect to step 605 of the process of FIG. 6. In step 905, a 3D digital model of the patient's predicted post-treatment face and teeth is acquired. In some variations this may be accomplished using, for example, the methods described with respect to step 615 of FIG. 6.
[0108] In step 910, features in the pre-treatment and post-treatment 3D digital models of the face and teeth are mapped onto each other. In step 915, one or more 3D digital models of the patient's face and teeth at intermediate stages of treatment are generated by interpolating between the pre-treatment and post-treatment 3D digital models. The interpolation process may utilize information regarding the locations of the teeth in order to avoid, for example, unphysical interpolations in which teeth collide or pass through one another. [0109] Steps 910 and 915 may be accomplished in some variations using, for example, morphing methods known to one of ordinary skill in the art and conventional in the movie and gaming industries. Such morphing methods may gradually convert one graphical object into another. The 3D digital models of the patient's face at intermediate stages of treatment may be rendered into photo-realistic images by conventional rendering methods and then viewed or printed.
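The interpolation of steps 910 and 915 can be sketched, assuming the two models already share a vertex correspondence from the feature-mapping step, as linear blending of mapped vertices. A production morph would also include the collision checks described in step 915; the vertex data here are illustrative:

```python
import numpy as np

def intermediate_models(verts_pre, verts_post, n_stages):
    """Linearly interpolate mapped vertices to produce intermediate-stage models.

    verts_pre and verts_post are (n, 3) arrays with vertex i of one model
    corresponding to vertex i of the other (the step-910 mapping).
    """
    stages = []
    for k in range(1, n_stages + 1):
        t = k / (n_stages + 1)  # fraction of the way toward post-treatment
        stages.append((1.0 - t) * verts_pre + t * verts_post)
    return stages

verts_pre  = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
verts_post = np.array([[0.0, 0.0, 1.0], [1.0, 1.0, 0.0]])
mids = intermediate_models(verts_pre, verts_post, 3)
# mids[1] is the halfway model: each vertex midway between pre and post
```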
[0110] Some variations may utilize commercial software products, or conventional algorithms and methods related to those on which commercial software products are based, to generate the 3D digital models of the patient's face and teeth at intermediate stages of treatment. Examples of such commercial software products include the Maya® family of integrated 3D modeling, animation, visual effects, and rendering software products available from Alias Systems Corporation.
[0111] In some variations the feature mapping in step 910 includes teeth and/or lips on the initial and final 3D digital models. In variations in which the 3D digital models use a polyhedral representation, feature mapping may specify polyhedron faces, edges, or vertices. In variations in which the 3D digital models use a voxel representation, appropriate voxels are specified.
[0112] In some variations, the methods described with respect to FIG. 6 and/or FIG. 9 enable patients, doctors, dentists, and other interested parties to view a photo-realistic rendering of the expected appearance of a patient after treatment. In the case of an orthodontic treatment, for example, a patient can view his or her expected post-treatment smile.
[0113] In some variations, the methods of FIG. 6 and/or FIG. 9 may be used to simulate the results of other medical or surgical treatments. For example, in some variations the post-treatment result of a plastic surgery procedure may be simulated. As another example, in a tooth whitening application the final tooth color as well as intermediate stages between the initial and final tooth colors may be simulated. In another variation, wound healing on the face, for example, may be simulated through progressive morphing.
[0114] In some variations, a growth model based on a database of prior organ growth information may be used to predict how an organ would be expected to grow and the growth may be visualized using morphing. For example, in one variation a hair growth model may be used to show a person his or her expected appearance three to six months from the day of a haircut.
[0115] In some variations the methods and apparatus disclosed herein may be used to perform lip sync. In other variations the methods and apparatus disclosed herein may be used to perform face detection. For example, a person can have different facial expressions at different times. Multiple facial expressions may be simulated and compared to a scanned face for face detection.
[0116] The methods disclosed in this patent may be implemented in hardware or software, or a combination of the two. For example, the methods may be implemented in computer programs executing on programmable computers, each of which includes a processor, a storage medium readable by the processor (including volatile and nonvolatile memory and/or storage elements), and suitable input and output devices. Program code may be applied to data entered using an input device to perform the functions described and to generate output information. The output data may be processed by one or more output devices for transmission.
[0117] In one variation the computer system includes a CPU, a RAM, a ROM and an I/O controller coupled by a CPU bus. The I/O controller is also coupled by an I/O bus to input devices such as a keyboard and a mouse, and output devices such as a monitor. The I/O controller also drives an I/O interface which in turn controls a removable disk drive such as a floppy disk, among others.
[0118] In some variations, each program is implemented in a high-level procedural or object-oriented programming language to communicate with a computer system. In other variations, the programs may be implemented in assembly or machine language, if desired. In either case, the language may be a compiled or interpreted language. In some variations, each such computer program may be stored on a storage medium or device (e.g., CD-ROM, hard disk, or magnetic diskette) that is readable by a general or special purpose programmable computer for configuring and operating the computer when the storage medium or device is read by the computer to perform the procedures described. In some variations, the system may be implemented as a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner.
[0119] This invention has been described and specific examples of the invention have been portrayed. While the invention has been described in terms of particular variations and illustrative figures, those of ordinary skill in the art will recognize that the invention is not limited to the variations or figures described. In addition, where methods and steps described above indicate certain events occurring in certain order, those of ordinary skill in the art will recognize that the ordering of certain steps may be modified and that such modifications are in accordance with the variations of the invention. Additionally, certain of the steps may be performed concurrently in a parallel process when possible, as well as performed sequentially as described above. Therefore, to the extent there are variations of the invention which are within the spirit of the disclosure or equivalent to the inventions found in the claims, it is the intent that this patent will cover those variations as well. Finally, all publications and patent applications cited in this specification are herein incorporated by reference in their entirety as if each individual publication or patent application were specifically and individually put forth herein.

Claims

What is claimed is:
1. A method of tracking the movements of teeth or physical tooth models, the method comprising: acquiring a 3D digital model of a first arrangement of the teeth or physical tooth models; acquiring the positions of the teeth or physical tooth models in a second arrangement; and modifying the 3D digital model to represent the second arrangement.
2. The method according to claim 1, wherein acquiring the positions of the teeth or physical tooth models in the second arrangement comprises: acquiring one or more images; and determining the positions of the teeth or tooth models in the second arrangement from the one or more images.
3. The method according to claim 2, wherein the images comprise images of the second arrangement.
4. The method according to claim 3, wherein the second arrangement comprises an arrangement of teeth in a patient's mouth.
5. The method according to claim 3, wherein the second arrangement comprises an arrangement of individual physical models of a patient's teeth arranged to represent a past, current, or predicted configuration of teeth in the patient's mouth.
6. The method according to claim 2, wherein the images comprise images of a negative impression of a patient's tooth arch.
7. The method according to claim 2, wherein the images comprise images of a positive dental arch mold cast from a negative impression of a patient's tooth arch.
8. The method according to claim 2, wherein acquiring a 3D digital model of the first arrangement comprises generating the 3D digital model from a second set of one or more images.
9. The method according to claim 8, wherein the second set of images comprises images of the first arrangement.
10. The method according to claim 9, wherein the first arrangement comprises an arrangement of teeth in a patient's mouth.
11. The method according to claim 9, wherein the first arrangement comprises an arrangement of individual physical models of a patient's teeth arranged to represent a past, current, or predicted configuration of teeth in the patient's mouth.
12. The method according to claim 8, wherein the second set of images comprises images of a negative impression of a patient's tooth arch.
13. The method according to claim 8, wherein the second set of images comprises images of a positive dental arch mold cast from a negative impression of a patient's tooth arch.
14. The method according to claim 8, wherein acquiring a 3D digital model of the first arrangement comprises generating the 3D digital model from additional information about the first arrangement not derived from the second set of images.
15. The method according to claim 14, wherein the additional information comprises information derived from x-ray data.
16. The method according to claim 2, further comprising characterizing internal geometries of a camera; and calibrating the camera.
17. The method according to claim 2, further comprising acquiring two or more of the images substantially simultaneously.
18. The method according to claim 2, further comprising adding registration marks to or identifying registration marks on a tooth or physical tooth model.
19. The method according to claim 18, wherein the registration marks are sufficient to define a coordinate system on the tooth or physical tooth model.
20. The method according to claim 18, wherein the registration marks comprise registration marks on a tooth surface facing inside a patient's mouth.
21. The method according to claim 18, further comprising determining a change in position of one or more registration marks between the first arrangement and the second arrangement.
22. The method according to claim 2, further comprising generating a distortion-corrected image of the second arrangement.
23. The method according to claim 22, further comprising: identifying a subset of the 3D digital model of the first arrangement including one or more teeth or physical tooth models having substantially the same positions in the second arrangement as in the first arrangement; projecting the subset onto the distortion-corrected image; determining a transformation of the 3D digital model to substantially superimpose the projection of the subset on one or more teeth or physical tooth models in the distortion-corrected image; and determining a transformation of the 3D digital model to substantially superimpose a projection of portions of the 3D digital model not included in the subset onto one or more teeth or physical tooth models in the distortion-corrected image.
24. The method according to claim 23, further comprising selecting a static reference point in the 3D digital model of the first arrangement.
25. The method according to claim 2, further comprising acquiring two or more images of the second arrangement.
26. The method according to claim 25, further comprising: generating a 3D digital model of the second arrangement from the two or more images of the second arrangement; and determining a transformation that substantially superimposes the 3D digital model of the first arrangement onto the 3D digital model of the second arrangement.
27. The method according to claim 2, wherein the modified 3D digital model is used in fabricating a dental appliance.
28. The method according to claim 27, wherein the dental appliance is for rendering corrective teeth movement.
29. The method according to claim 2, wherein the modified 3D digital model is used in fabricating a physical model of a patient's tooth arch.
30. The method according to claim 29, wherein the physical model of a patient's tooth arch comprises individual physical tooth models arranged according to information included in the modified 3D digital model.
31. The method according to claim 1, wherein acquiring the positions of the teeth or physical tooth models in the second arrangement comprises determining the positions with a mechanical position measuring device.
32. A method for generating a photo-realistic image of a predicted result of a dental or medical treatment on a patient, the method comprising: acquiring one or more pre-treatment images of a treatment site on a patient; generating a pre-treatment 3D digital model of the treatment site from the pre-treatment images; generating a predicted 3D digital model of the treatment site from the pre-treatment 3D digital model of the treatment site and a predicted result of the treatment; and rendering a photo-realistic image from the predicted 3D digital model of the treatment site.
33. The method according to claim 32, wherein generating the pre-treatment 3D digital model comprises utilizing information about the treatment site not derived from the images.
34. The method according to claim 32, wherein the treatment site comprises a patient's face and teeth.
35. The method according to claim 32, wherein the predicted result of the treatment is a predicted intermediate result of treatment.
36. The method according to claim 32, wherein the predicted result of the treatment is a predicted final result of treatment.
37. The method according to claim 32, further comprising interpolating a 3D digital model of the treatment site at an intermediate stage of treatment from the pre-treatment 3D digital model of the treatment site and the predicted 3D digital model of the treatment site.
38. A method for generating a photo-realistic image of a predicted result of a dental treatment on a patient, the method comprising: acquiring one or more images of the patient's pre-treatment face and teeth; generating a 3D digital model of the patient's pre-treatment face and teeth from the images of the patient's pre-treatment face and teeth; acquiring a 3D digital model of the patient's pre-treatment tooth arch; acquiring a 3D digital model of the patient's predicted tooth arch resulting from the treatment; generating a 3D digital model of the patient's predicted face and teeth from the 3D digital models of the patient's pre- treatment face and teeth, pre-treatment tooth arch, and predicted tooth arch; and rendering a photo-realistic image from the 3D digital model of the patient's predicted face and teeth.
39. The method according to claim 38, wherein generating a 3D digital model of the patient's pre-treatment face and teeth comprises utilizing information about the pre- treatment face and teeth not derived from the images.
40. The method according to claim 38, further comprising generating the 3D digital model of the patient's pre-treatment tooth arch from one or more images.
41. The method according to claim 38, wherein the predicted face and teeth and predicted tooth arch are at an intermediate stage of treatment.
42. The method according to claim 38, wherein the predicted face and teeth and predicted tooth arch are a predicted final result of treatment.
43. The method according to claim 38, further comprising interpolating a 3D digital model of the patient's face and teeth from the pre-treatment 3D digital model of the face and teeth and the 3D digital model of the predicted face and teeth.
44. A method for generating a 3D model of an object using one or more cameras, comprising: calibrating each camera; establishing a coordinate system and environment for the one or more cameras; registering one or more fiducials on the object; and capturing one or more images and constructing a 3D model from images.
45. The method of claim 44 wherein the model is used for one of the following: measurement of 3D geometry for teeth/gingiva/face/jaw; measurement of position, orientation and size of teeth/gingiva/face/jaw; determination of the type of malocclusion for treatment; recognition of tooth features; recognition of gingiva features; extraction of teeth from jaw scans; registration with marks or sparkles to identify features of interest; facial profile analysis; filling in gaps in 3D models from photogrammetry using pre-acquired models based on prior information about teeth/jaw/face; and creating a facial/orthodontic model.
46. The method of claim 44, further comprising: receiving an initial 3D model for a patient; determining a target 3D model; and generating one or more intermediate 3D models.
47. The method of claim 44, further comprising extracting environment information from the model.
48. The method of claim 44, further comprising rendering one or more images of the model.
49. The method of claim 44, wherein the model is represented using one of: polyhedrons and voxels.
50. The method of claim 44, wherein the model is a patient model.
51. The method of claim 50, further comprising generating a virtual treatment for the patient and generating a post-treatment 3D model.
52. The method of claim 44, further comprising geometry subdividing and tessellating the model.
53. The method of claim 44, comprising: identifying one or more common features on a tooth model; detecting the position of the common features on the tooth model at a first position; detecting the position of the common features on the tooth model at a second position; and determining a difference between the position of each common feature at the first and second positions.
54. A system for generating a 3D model of an object, comprising: one or more calibrated cameras; means for establishing a coordinate system and environment for the one or more cameras; means for registering one or more fiducials on the object; and means for capturing one or more images and constructing a 3D model from images.
55. The system of claim 54, wherein the model is used for one of the following: measurement of 3D geometry for teeth/gingiva/face/jaw; measurement of position, orientation and size of teeth/gingiva/face/jaw; determination of the type of malocclusion for treatment; recognition of tooth features; recognition of gingiva features; extraction of teeth from jaw scans; registration with marks or sparkles to identify features of interest; facial profile analysis; filling in gaps in 3D models from photogrammetry using pre-acquired models based on prior information about teeth/jaw/face; and creating a facial/orthodontic model.
56. The system of claim 54, further comprising means for: receiving an initial 3D model for a patient; determining a target 3D model; and generating one or more intermediate 3D models.
57. The system of claim 54, further comprising means for extracting environment information from the model.
58. The system of claim 54, further comprising means for rendering one or more images of the model.
59. The system of claim 54, wherein the model is represented using one of: polyhedrons and voxels.
60. The system of claim 54, wherein the model is a patient model.
61. The system of claim 60, further comprising means for generating a virtual treatment for the patient and generating a post-treatment 3D model.
62. The system of claim 54, further comprising means for geometry subdividing and tessellating the model.
63. The system of claim 54, further comprising means for: identifying one or more common features on a tooth model; detecting the position of the common features on the tooth model at a first position; detecting the position of the common features on the tooth model at a second position; and determining a difference between the position of each common feature at the first and second positions.
64. A method for determining movement of a tooth model from a first position to a second position, comprising: identifying one or more common features on the tooth model; detecting the position of the common features on the tooth model at the first position; detecting the position of the common features on the tooth model at the second position; and determining a difference between the position of each common feature at the first and second positions.
65. The method of claim 64, further comprising: forming the model of the tooth at the first position; and forming the model of the tooth at the second position.
66. The method of claim 64, wherein the position detecting comprises mechanically sensing the position.
67. The method of claim 66, further comprising using handheld 3D digitizers.
68. The method of claim 64, wherein the position detecting comprises chamfer matching.
69. The method of claim 64, wherein the position detecting comprises sparkling.
70. The method of claim 64, wherein the position detecting comprises laser marking a tooth model.
71. The method of claim 64, wherein the position detecting comprises marking an FACC line.
72. The method of claim 64, wherein the position detecting comprises manually measuring the difference between first and second positions.
73. The method of claim 64, wherein the position detecting comprises setting up a wax model.
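In computational terms, claims 64–73 amount to recovering the rigid motion that maps the common feature positions at the first tooth position onto those at the second. One standard way to do this, offered here as an illustrative sketch rather than the patent's own algorithm, is the Kabsch least-squares fit; the cusp-tip features and motion below are hypothetical:

```python
import numpy as np

def rigid_motion(p_first, p_second):
    """Least-squares rotation R and translation t such that
    R @ p + t maps p_first onto p_second (Kabsch algorithm).

    p_first, p_second : (N, 3) arrays of the same N common features
    detected at the first and second tooth positions.
    """
    c1, c2 = p_first.mean(axis=0), p_second.mean(axis=0)
    H = (p_first - c1).T @ (p_second - c2)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c2 - R @ c1
    return R, t

# Hypothetical features on a tooth model, rotated 10 degrees about z
# and translated 0.5 units; the fit recovers exactly that motion.
rng = np.random.default_rng(0)
p1 = rng.random((5, 3))
theta = np.radians(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, 0.0, 0.0])
p2 = p1 @ R_true.T + t_true

R, t = rigid_motion(p1, p2)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

The recovered (R, t) pair quantifies the tooth movement regardless of how the feature positions were detected: mechanical sensing, chamfer matching, laser marks, an FACC line, or manual measurement all feed the same computation.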
74. A system for determining movement of a tooth model from a first position to a second position, comprising: means for identifying one or more common features on the tooth model; means for detecting the position of the common features on the tooth model at the first position; means for detecting the position of the common features on the tooth model at the second position; and means for determining a difference between the position of each common feature at the first and second positions.
75. The system of claim 74, further comprising: means for forming the model of the tooth at the first position; and means for forming the model of the tooth at the second position.
76. The system of claim 74, wherein the position detecting means comprises a mechanical position sensor.
77. The system of claim 74, further comprising handheld 3D digitizers.
78. The system of claim 74, wherein the position detecting means comprises chamfer matching means.
79. The system of claim 74, wherein the position detecting means comprises sparkling means.
80. The system of claim 74, wherein the position detecting means comprises means for laser marking a tooth model.
81. The system of claim 74, wherein the position detecting means comprises means for marking an FACC line.
82. The system of claim 74, wherein the position detecting means comprises means for manually measuring the difference between first and second positions.
83. The system of claim 74, wherein the position detecting means comprises a jig for wax model set-up.
84. A method for visualizing changes in a three dimensional (3D) model for a patient, comprising: receiving an initial 3D model for the patient; determining a target 3D model; and generating one or more intermediate 3D models by morphing one or more of the 3D models.
85. The method of claim 84, further comprising extracting environment information from the model.
86. The method of claim 84, comprising rendering one or more images of the model.
87. The method of claim 84 wherein the model is represented using one of: polyhedrons and voxels.
88. The method of claim 84, comprising generating a virtual treatment for the patient.
89. The method of claim 84, comprising generating a post-treatment 3D model.
90. The method of claim 84, comprising rendering an image of the model.
91. The method of claim 84, comprising geometry subdividing and tessellating the model.
92. The method of claim 84, comprising generating an inside model of the 3D model.
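The morphing step of claim 84, producing intermediate 3D models between an initial and a target model, can be sketched as linear interpolation of corresponding vertex positions. This assumes the two meshes share vertex correspondence and connectivity; the function name and toy data are illustrative:

```python
import numpy as np

def intermediate_models(initial, target, steps):
    """Generate `steps` intermediate vertex arrays by morphing
    (linear interpolation) from the initial 3D model to the target.

    `initial` and `target` are (N, 3) arrays with corresponding
    vertices; face connectivity is assumed unchanged.
    """
    return [initial + (target - initial) * (k / (steps + 1))
            for k in range(1, steps + 1)]

# Toy example: four vertices moving from the origin to (1, 1, 1).
initial = np.zeros((4, 3))
target = np.ones((4, 3))
mids = intermediate_models(initial, target, steps=3)
print(len(mids))      # 3 intermediate models
print(mids[1][0])     # middle model's first vertex: [0.5 0.5 0.5]
```

In a treatment-visualization setting each intermediate array would be rendered with the original mesh connectivity, giving the stepwise progression from the initial to the target dentition.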
93. A visualization system, comprising: means for receiving an initial three dimensional (3D) model for a patient; means for determining a target 3D model; and means for generating one or more intermediate 3D models by morphing one or more of the 3D models.
94. The system of claim 93, further comprising means for extracting environment information from the model.
95. The system of claim 93, comprising means for rendering one or more images of the model.
96. The system of claim 93, wherein the model is represented using one of: polyhedrons and voxels.
97. The system of claim 93, further comprising means for generating a virtual treatment for the patient.
98. The system of claim 93, further comprising means for generating a post-treatment 3D model.
99. The system of claim 93, further comprising means for rendering an image of the model.
100. The system of claim 93, comprising means for geometry subdividing and tessellating the model.
101. The system of claim 93, comprising means for generating an inside model of the 3D model.
PCT/US2005/045351 2004-12-14 2005-12-14 Image based orthodontic treatment methods WO2006065955A2 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US11/013,147 2004-12-14
US11/013,153 2004-12-14
US11/013,146 US20060127852A1 (en) 2004-12-14 2004-12-14 Image based orthodontic treatment viewing system
US11/013,146 2004-12-14
US11/013,153 US20060127854A1 (en) 2004-12-14 2004-12-14 Image based dentition record digitization
US11/013,147 US20060127836A1 (en) 2004-12-14 2004-12-14 Tooth movement tracking system

Publications (2)

Publication Number Publication Date
WO2006065955A2 true WO2006065955A2 (en) 2006-06-22
WO2006065955A3 WO2006065955A3 (en) 2006-08-03

Family

ID=36588527

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2005/045351 WO2006065955A2 (en) 2004-12-14 2005-12-14 Image based orthodontic treatment methods

Country Status (1)

Country Link
WO (1) WO2006065955A2 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4971069A (en) * 1987-10-05 1990-11-20 Diagnospine Research Inc. Method and equipment for evaluating the flexibility of a human spine
US4983120A (en) * 1988-05-12 1991-01-08 Specialty Appliance Works, Inc. Method and apparatus for constructing an orthodontic appliance
US5568384A (en) * 1992-10-13 1996-10-22 Mayo Foundation For Medical Education And Research Biomedical imaging and analysis
US5753834A (en) * 1996-12-19 1998-05-19 Lear Corporation Method and system for wear testing a seat by simulating human seating activity and robotic human body simulator for use therein
US6264468B1 (en) * 1998-02-19 2001-07-24 Kyoto Takemoto Orthodontic appliance
US20020048741A1 (en) * 1997-09-22 2002-04-25 3M Innovative Properties Company Methods for use in dental articulation
US6602070B2 (en) * 1999-05-13 2003-08-05 Align Technology, Inc. Systems and methods for dental treatment planning

Cited By (91)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10874487B2 (en) 2003-02-26 2020-12-29 Align Technology, Inc. Systems and methods for fabricating a dental template
US11819377B2 (en) * 2007-06-08 2023-11-21 Align Technology, Inc. Generating 3D models of a patient's teeth based on 2D teeth images
WO2008149222A3 (en) * 2007-06-08 2009-05-22 Align Technology Inc System and method for detecting deviations during the course of an orthodontic treatment to gradually reposition teeth
US20160074138A1 (en) * 2007-06-08 2016-03-17 Align Technology, Inc. System and method for detecting deviations during the course of an orthodontic treatment to gradually reposition teeth
WO2008149222A2 (en) * 2007-06-08 2008-12-11 Align Technology, Inc. System and method for detecting deviations during the course of an orthodontic treatment to gradually reposition teeth
US20230157789A1 (en) * 2007-06-08 2023-05-25 Align Technology, Inc. System and method for detecting deviations during the course of an orthodontic treatment to gradually reposition teeth
US10624716B2 (en) * 2007-06-08 2020-04-21 Align Technology, Inc. System and method for detecting deviations during the course of an orthodontic treatment to gradually reposition teeth
US11478333B2 (en) 2007-06-08 2022-10-25 Align Technology, Inc. Treatment planning and progress tracking systems and methods
US10517696B2 (en) 2007-06-08 2019-12-31 Align Technology, Inc. Treatment progress tracking and recalibration
US10342638B2 (en) 2007-06-08 2019-07-09 Align Technology, Inc. Treatment planning and progress tracking systems and methods
US11571276B2 (en) 2007-06-08 2023-02-07 Align Technology, Inc. Treatment progress tracking and recalibration
US10813721B2 (en) 2007-06-08 2020-10-27 Align Technology, Inc. Systems and method for management and delivery of orthodontic treatment
US10896761B2 (en) 2008-05-23 2021-01-19 Align Technology, Inc. Smile designer
US10758321B2 (en) 2008-05-23 2020-09-01 Align Technology, Inc. Smile designer
US11024431B2 (en) 2008-05-23 2021-06-01 Align Technology, Inc. Smile designer
US10842601B2 (en) 2008-06-12 2020-11-24 Align Technology, Inc. Dental appliance
WO2010105628A2 (en) 2009-03-20 2010-09-23 3Shape A/S System and method for effective planning, visualization, and optimization of dental restorations
EP3593755A1 (en) 2009-03-20 2020-01-15 3Shape A/S Computer program product for planning, visualization and optimization of dental restorations
US9861457B2 (en) 2009-03-20 2018-01-09 3Shape A/S System and method for effective planning, visualization, and optimization of dental restorations
US11671582B2 (en) 2009-06-17 2023-06-06 3Shape A/S Intraoral scanning apparatus
US11622102B2 (en) 2009-06-17 2023-04-04 3Shape A/S Intraoral scanning apparatus
US11539937B2 (en) 2009-06-17 2022-12-27 3Shape A/S Intraoral scanning apparatus
US11831815B2 (en) 2009-06-17 2023-11-28 3Shape A/S Intraoral scanning apparatus
US9256710B2 (en) 2009-08-21 2016-02-09 Allign Technology, Inc. Digital dental modeling
US10898299B2 (en) 2009-08-21 2021-01-26 Align Technology, Inc. Digital dental modeling
US8896592B2 (en) 2009-08-21 2014-11-25 Align Technology, Inc. Digital dental modeling
WO2011021099A3 (en) * 2009-08-21 2012-09-07 Align Technology, Inc. Digital dental modeling
US9962238B2 (en) 2009-08-21 2018-05-08 Align Technology, Inc. Digital dental modeling
WO2011021099A2 (en) * 2009-08-21 2011-02-24 Align Technology, Inc. Digital dental modeling
US10653503B2 (en) 2009-08-21 2020-05-19 Align Technology, Inc. Digital dental modeling
CN101862175A (en) * 2010-06-01 2010-10-20 苏州生物医学工程技术研究所 Digitalized oral cavity intelligent auxiliary diagnosis and treatment system and diagnosis and treatment method thereof
US10776533B2 (en) 2010-07-12 2020-09-15 3Shape A/S 3D modeling of a dental restoration using textural features
EP3760159A1 (en) * 2011-08-31 2021-01-06 Modjaw Method for designing a dental apparatus
US10143536B2 (en) 2012-10-31 2018-12-04 Ormco Corporation Computational device for an orthodontic appliance for generating an aesthetic smile
US9345553B2 (en) 2012-10-31 2016-05-24 Ormco Corporation Method, system, and computer program product to perform digital orthodontics at one or more sites
CN103784202A (en) * 2012-10-31 2014-05-14 奥姆科公司 Method, system, and computer program product to perform digital orthodontics at one or more sites
EP2727553A1 (en) * 2012-10-31 2014-05-07 Ormco Corporation Method, system, and computer program product to perform digital orthodontics at one or more sites
US10799321B2 (en) 2013-09-19 2020-10-13 Dental Monitoring Method for monitoring the position of teeth
US10010387B2 (en) 2014-02-07 2018-07-03 3Shape A/S Detecting tooth shade
US11701208B2 (en) 2014-02-07 2023-07-18 3Shape A/S Detecting tooth shade
US11707347B2 (en) 2014-02-07 2023-07-25 3Shape A/S Detecting tooth shade
US11723759B2 (en) 2014-02-07 2023-08-15 3Shape A/S Detecting tooth shade
US10695151B2 (en) 2014-02-07 2020-06-30 3Shape A/S Detecting tooth shade
US11357602B2 (en) 2014-10-27 2022-06-14 Dental Monitoring Monitoring of dentition
EP3901906A1 (en) 2014-10-27 2021-10-27 Dental Monitoring Method for monitoring dentition
FR3027711A1 (en) * 2014-10-27 2016-04-29 H 42 METHOD FOR CONTROLLING THE DENTITION
WO2016066654A1 (en) * 2014-10-27 2016-05-06 H43 Development Method for monitoring dentition
US10485638B2 (en) 2014-10-27 2019-11-26 Dental Monitoring Method for monitoring dentition
US10779909B2 (en) 2014-10-27 2020-09-22 Dental Monitoring Method for monitoring an orthodontic treatment
US10417774B2 (en) 2014-10-27 2019-09-17 Dental Monitoring Method for monitoring an orthodontic treatment
US10342645B2 (en) 2014-10-27 2019-07-09 Dental Monitoring Method for monitoring dentition
FR3027508A1 (en) * 2014-10-27 2016-04-29 H 42 METHOD FOR CONTROLLING THE DENTITION
US10206759B2 (en) 2014-10-27 2019-02-19 Dental Monitoring Method for monitoring an orthodontic treatment
FR3027506A1 (en) * 2014-10-27 2016-04-29 H 42 METHOD FOR CONTROLLING THE DENTITION
WO2016066637A1 (en) * 2014-10-27 2016-05-06 H43 Development Method for monitoring an orthodontic treatment
WO2016066642A1 (en) * 2014-10-27 2016-05-06 H43 Development Method for monitoring an orthodontic treatment
EP3659545A1 (en) 2014-10-27 2020-06-03 Dental Monitoring Method for controlling an orthodontic treatment
US20180204332A1 (en) * 2014-10-27 2018-07-19 Dental Monitoring S.A.S. Method for monitoring an orthodontic treatment
US20170325689A1 (en) * 2014-10-27 2017-11-16 Dental Monitoring Method for monitoring dentition
WO2016066650A1 (en) * 2014-10-27 2016-05-06 H43 Development Method for monitoring dentition
WO2016066652A1 (en) * 2014-10-27 2016-05-06 H43 Development Monitoring of dentition
US11564774B2 (en) 2014-10-27 2023-01-31 Dental Monitoring Method for monitoring an orthodontic treatment
WO2016066651A1 (en) * 2014-10-27 2016-05-06 H43 Development Method for monitoring dentition
FR3121034A1 (en) * 2014-10-27 2022-09-30 Dental Monitoring PROCEDURE FOR CONTROLLING ORTHODONTIC RECURSION
US11246688B2 (en) 2014-10-27 2022-02-15 Dental Monitoring Method for monitoring dentition
US11147652B2 (en) 2014-11-13 2021-10-19 Align Technology, Inc. Method for tracking, predicting, and proactively correcting malocclusion and related issues
CN113796975A (en) * 2014-11-13 2021-12-17 阿莱恩技术有限公司 Method for tracking, predicting and pre-correcting malocclusions and related problems
US11202690B2 (en) 2014-11-27 2021-12-21 3Shape A/S Method of digitally designing a modified dental setup
WO2016083519A1 (en) * 2014-11-27 2016-06-02 3Shape A/S Method of digitally designing a modified dental setup
EP3050534A1 (en) * 2015-01-30 2016-08-03 Dental Imaging Technologies Corporation Dental variation tracking and prediction
CN105832291A (en) * 2015-01-30 2016-08-10 登塔尔图像科技公司 Dental variation tracking and prediction
US9770217B2 (en) 2015-01-30 2017-09-26 Dental Imaging Technologies Corporation Dental variation tracking and prediction
US11510638B2 (en) 2016-04-06 2022-11-29 X-Nav Technologies, LLC Cone-beam computer tomography system for providing probe trace fiducial-free oral cavity tracking
EP3439558A4 (en) * 2016-04-06 2019-12-04 X-Nav Technologies, LLC System for providing probe trace fiducial-free tracking
EP4101418A1 (en) 2016-04-22 2022-12-14 Dental Monitoring System for producing an orthodontic apparatus
US11666418B2 (en) 2016-04-22 2023-06-06 Dental Monitoring Dentition control method
WO2017182654A1 (en) 2016-04-22 2017-10-26 Dental Monitoring Dentition control method
US11717380B2 (en) 2017-03-20 2023-08-08 Align Technology, Inc. Automated 2D/3D integration and lip spline autoplacement
CN108784878A (en) * 2018-06-15 2018-11-13 北京缔佳医疗器械有限公司 A kind of tooth mold forming matching precision detection method and detection device
US20220354620A1 (en) * 2018-06-29 2022-11-10 Align Technology, Inc. Visualization of clinical orthodontic assets and occlusion contact shape
US11553988B2 (en) * 2018-06-29 2023-01-17 Align Technology, Inc. Photo of a patient with new simulated smile in an orthodontic treatment review software
US11642195B2 (en) 2019-05-14 2023-05-09 Align Technology, Inc. Visual presentation of gingival line generated based on 3D tooth model
WO2020231984A1 (en) * 2019-05-14 2020-11-19 Align Technology, Inc. Visual presentation of gingival line generated based on 3d tooth model
FR3096255A1 (en) * 2019-05-22 2020-11-27 Dental Monitoring PROCESS FOR GENERATING A MODEL OF A DENTAL ARCH
WO2020234411A1 (en) * 2019-05-22 2020-11-26 Dental Monitoring Method for generating a model of a dental arch
CN111145289A (en) * 2019-12-30 2020-05-12 北京爱康宜诚医疗器材有限公司 Extraction method and device of pelvis three-dimensional data
CN112017280A (en) * 2020-09-17 2020-12-01 广东工业大学 Method for generating digital tooth model with color texture information
CN112017280B (en) * 2020-09-17 2023-09-26 广东工业大学 Method for generating digital tooth model with color texture information
FR3137270A1 (en) * 2022-07-04 2024-01-05 Dental Monitoring Extraoral camera
WO2024008594A1 (en) * 2022-07-04 2024-01-11 Dental Monitoring Extra-oral image taking device
US11925525B2 (en) * 2022-07-25 2024-03-12 Align Technology, Inc. Three-dimensional visualization of clinical dentition incorporating view state and modified clinical data

Also Published As

Publication number Publication date
WO2006065955A3 (en) 2006-08-03

Similar Documents

Publication Publication Date Title
WO2006065955A2 (en) Image based orthodontic treatment methods
US8029277B2 (en) Method and system for measuring tooth displacements on a virtual three-dimensional model
ES2717447T3 (en) Computer-assisted creation of a habitual tooth preparation using facial analysis
US7740476B2 (en) Method and workstation for generating virtual tooth models from three-dimensional tooth data
US7027642B2 (en) Methods for registration of three-dimensional frames to create three-dimensional virtual models of objects
US7068825B2 (en) Scanning system and calibration method for capturing precise three-dimensional information of objects
US9572636B2 (en) Method and system for finding tooth features on a virtual three-dimensional model
US7585172B2 (en) Orthodontic treatment planning with user-specified simulation of tooth movement
US7471821B2 (en) Method and apparatus for registering a known digital object to scanned 3-D model
US9861457B2 (en) System and method for effective planning, visualization, and optimization of dental restorations
US20070160957A1 (en) Image based dentition record digitization
US20080261165A1 (en) Systems for haptic design of dental restorations
US20100009308A1 (en) Visualizing and Manipulating Digital Models for Dental Treatment
US20100291505A1 (en) Haptically Enabled Coterminous Production of Prosthetics and Patient Preparations in Medical and Dental Applications
CN103908352A (en) Method for generating digital virtual jaw rack, and system thereof

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KN KP KR KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 05849471

Country of ref document: EP

Kind code of ref document: A2