CN116568239A - Systems, devices, and methods for dental care

Systems, devices, and methods for dental care

Info

Publication number
CN116568239A
Authority
CN
China
Prior art keywords
patient
teeth
dentition
treatment plan
image
Prior art date
Legal status
Pending
Application number
CN202180065294.4A
Other languages
Chinese (zh)
Inventor
C·E·克莱默
R·文卡塔
L·帕尔瓦塔尼尼
P·T·哈里斯
S·古里贾拉
S·胡贝诺夫
S·哈伦
李国土
高云
C·C·布朗
Current Assignee
Align Technology Inc
Original Assignee
Align Technology Inc
Priority date
Filing date
Publication date
Application filed by Align Technology Inc
Priority claimed from PCT/US2021/042838 (WO2022020638A1)
Publication of CN116568239A

Landscapes

  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)

Abstract

A dental treatment method may include receiving one or more photo parameters that define clinically acceptable criteria for a plurality of clinically relevant photographs of a person's dentition. The clinically acceptable criteria may include at least a plurality of clinically acceptable positions and a plurality of clinically acceptable orientations of the teeth relative to the camera. The method may further include collecting a plurality of image capture rules for capturing the plurality of clinically relevant photographs. The plurality of image capture rules may be based on the one or more photo parameters. The method may also include providing a first automated instruction to capture the plurality of clinically relevant photographs of the person's dentition using the plurality of image capture rules, and capturing the plurality of clinically relevant photographs with the camera in response to the first automated instruction.

Description

Systems, devices, and methods for dental care
Cross Reference to Related Applications
The present application claims the benefit of U.S. Patent Application Ser. No. 63/200,432, filed March 5, 2021, and U.S. Patent Application Ser. No. 62/705,954, filed July 23, 2021, both entitled "Virtual Dental Care," which are incorporated herein by reference in their entireties.
Background
Medical practice is evolving toward telemedicine, that is, the remote treatment of patients. Telemedicine allows doctors to assess a patient's needs, and in some cases provide treatment recommendations, without the burden and risk involved in in-person care. However, current systems and methods related to dental care are less than ideal in many respects. For example, many dental care settings require the patient to physically visit a dental practitioner for various purposes, such as an initial assessment, obtaining a diagnosis of various conditions, obtaining a treatment plan and/or an appliance prescribed by the treatment plan, and tracking the progress of treatment. Existing dental care solutions that rely on in-office consultation and/or diagnosis are particularly problematic when a dental office is unavailable, for example due to an emergency, an epidemic, or impractical or impossible access.
Disclosure of Invention
As will be described in greater detail below, the present disclosure describes various systems and methods for virtual dental care for remote patients.
In addition, the systems and methods described herein may improve the functionality of a computing device by reducing the computing resources and overhead for acquiring and storing updated patient data, thereby improving the processing efficiency of the computing device relative to conventional methods. These systems and methods may also improve the field of orthodontic treatment by analyzing the data to effectively target the treatment area and provide the patient with access to more practitioners than are conventionally available.
By incorporation by reference
All patents, applications, and publications referred to and identified herein are hereby incorporated by reference in their entirety, and shall be considered fully incorporated by reference even though referred to elsewhere in the application.
Drawings
A better understanding of the features, advantages, and principles of the disclosure will be obtained by reference to the following detailed description that sets forth illustrative embodiments, and the accompanying drawings, in which:
fig. 1A illustrates a block diagram of an example system for virtual dental care, according to some embodiments.
FIG. 1B illustrates a block diagram of an example system for smart photo guidance, in accordance with some embodiments.
FIG. 1C illustrates a block diagram of an example system for image-based evaluation, in accordance with some embodiments.
FIG. 1D illustrates a block diagram of an example system for intelligent patient guidance, according to some embodiments.
FIG. 1E illustrates a block diagram of an example system for photo-based refinement in accordance with some embodiments.
FIG. 2 illustrates a block diagram of an example system for photo guidance, according to some embodiments.
FIG. 3 illustrates a flowchart of an example method for photo guidance, according to some embodiments.
FIG. 4 illustrates an example user device for photo guidance, according to some embodiments.
FIG. 5 illustrates an example neural network for photo guidance, according to some embodiments.
FIG. 6 illustrates a block diagram of an example system for differential error generation in accordance with embodiments herein.
Fig. 7 illustrates a method of assessing tooth movement of a patient according to embodiments herein.
Fig. 8 illustrates a differential error image of a patient's teeth during a treatment phase according to embodiments herein.
Fig. 9 is a profile differential error image of a patient's teeth during a treatment phase according to embodiments herein.
Fig. 10 is a profile differential error image of a patient's teeth during a treatment phase according to embodiments herein.
Fig. 11 shows in parallel a rendered dental image and an actual dental image of a patient during a treatment phase.
Fig. 12 shows a graph of differential error of a patient's teeth for each treatment stage according to embodiments herein.
Fig. 13 illustrates a block diagram of an example system for providing guidance in accordance with embodiments herein.
Fig. 14 illustrates a method of providing guidance according to embodiments herein.
Fig. 15 illustrates a process flow diagram for generating and providing orthodontic guidance to a patient according to embodiments herein.
Fig. 16 illustrates a block diagram of an example system for off-track treatment planning in accordance with embodiments herein.
Fig. 17 illustrates a method of generating a treatment plan for off-trajectory treatment of a patient according to embodiments herein.
Fig. 18 illustrates a segmented mesh dental arch generated from an existing scan of a patient's teeth and a 2D image of the patient's teeth according to embodiments herein.
FIG. 19 illustrates a block diagram of an example computing system that can implement one or more embodiments described and/or illustrated herein, in accordance with some embodiments.
FIG. 20 illustrates a block diagram of an example computing network that can implement one or more of the embodiments described and/or illustrated herein, in accordance with some embodiments.
Fig. 21 depicts a method for acquiring and using clinically relevant images of a patient's teeth.
Fig. 22A, 22B, 22C, and 22D depict teeth and example axes about which the teeth may move according to some embodiments.
Figs. 23A, 23B, and 23C depict images of a patient's dentition determined based on a treatment plan according to some embodiments.
Fig. 23D and 23E depict a model of a patient's dentition and clinically relevant views for capturing movement of the patient's teeth, according to some embodiments.
Fig. 24 illustrates a method of evaluating the placement quality of a transparent aligner, in accordance with some embodiments.
Fig. 25A illustrates example image data of a patient dentition with a transparent aligner, according to some embodiments.
FIG. 25B illustrates example mask data derived from the image data of FIG. 25A, according to some embodiments.
Fig. 25C illustrates mask data of fig. 25B overlaid on the image data of fig. 25A, in accordance with some embodiments.
Detailed Description
The following detailed description, together with the embodiments disclosed herein, provides a better understanding of the features and advantages of the invention described in the present disclosure. While the detailed description includes many specific embodiments, these embodiments are provided by way of example only and should not be construed as limiting the scope of the invention disclosed herein.
Virtual care system
Fig. 1A illustrates a block diagram of an example system for virtual dental care, according to some embodiments. As shown in fig. 1A, the system 100 may include a dental consumer/patient system 102, a dental professional system 150, a virtual dental care system 106, and a computer-readable medium 104. The dental consumer/patient system 102, the dental professional system 150, and the virtual dental care system 106 may communicate with one another over the computer-readable medium 104.
Dental consumer/patient system 102 generally represents any type or form of computing device capable of reading computer-executable instructions. The dental consumer/patient system 102 may be, for example, a desktop computer, a tablet computing device, a laptop computer, a smart phone, an augmented reality device, or other consumer device. Additional examples of dental consumer/patient systems 102 include, but are not limited to, laptops, tablets, desktops, servers, cell phones, Personal Digital Assistants (PDAs), multimedia players, embedded systems, wearable devices (e.g., smart watches, smart glasses, etc.), smart vehicles, smart packaging (e.g., active or intelligent packaging), game consoles, Internet of Things devices (e.g., smart appliances, etc.), variations or combinations of one or more of these devices, and/or any other suitable computing device. The dental consumer/patient system 102 need not be a clinical scanner (e.g., an intraoral scanner), although it is contemplated that in some embodiments the functionality described herein with respect to the dental consumer/patient system 102 may be included in a clinical scanner. As an example of various embodiments, the camera 132 of the dental consumer/patient system 102 may be a general-purpose camera that captures 2D images of the patient's dentition and does not capture height maps and/or other data used to stitch together a mesh of a 3D surface.
In some embodiments, the dental consumer/patient system 102 is configured to interact with a dental consumer and/or dental patient. As used herein, a "dental consumer" may include a person seeking assessment, diagnosis, and/or treatment of a dental condition (a general dental condition, an orthodontic condition, an endodontic condition, a condition requiring restorative dentistry, etc.). The dental consumer may, but need not, have agreed to and/or begun treatment of the dental condition. As used herein, a "dental patient" may include a person who has agreed to diagnosis and/or treatment of a dental condition. For example, a dental consumer and/or dental patient may be interested in and/or have initiated orthodontic treatment, such as treatment using one or more (e.g., a series of) aligners (e.g., polymeric appliances having a plurality of tooth-receiving cavities shaped to successively reposition a person's teeth from an initial arrangement toward a target arrangement). In various embodiments, the dental consumer/patient system 102 provides software (e.g., one or more web pages, stand-alone applications, mobile applications, etc.) to the dental consumer/dental patient that allows the dental consumer/patient to capture images of their dentition, interact with a dental professional (e.g., a user of the dental professional system 150), manage a treatment plan (e.g., a treatment plan from the virtual dental care system 106 and/or the dental professional system 150), and/or communicate with a dental professional (e.g., a user of the dental professional system 150).
Dental professional system 150 generally represents any type or form of computing device capable of reading computer-executable instructions. Dental professional system 150 can be, for example, a desktop computer, tablet computing device, laptop computer, smart phone, augmented reality device, or other consumer device. Additional examples of dental professional systems 150 include, but are not limited to, laptops, tablets, desktops, servers, cell phones, Personal Digital Assistants (PDAs), multimedia players, embedded systems, wearable devices (e.g., smart watches, smart glasses, etc.), smart vehicles, smart packaging (e.g., active or intelligent packaging), game consoles, Internet of Things devices (e.g., smart appliances, etc.), variations or combinations of one or more of these devices, and/or any other suitable computing device.
In various embodiments, dental professional system 150 is configured to interact with a dental professional. As used herein, a "dental professional" (used interchangeably herein with dentist, orthodontist, and doctor) may include any person having specialized training in the dental field, and may include, but is not limited to, general dentists, orthodontists, dental technicians, dental healthcare workers, and the like. Dental professionals may include persons who are able to assess, diagnose, and/or treat dental conditions. As used herein, "assessing" a dental condition may include estimating the presence of the dental condition. An assessment of a dental condition need not be a clinical diagnosis of the dental condition. In some embodiments, an "assessment" of a dental condition may include an "image-based assessment," that is, an assessment of the dental condition based in part or in whole on photographs and/or images taken of the dental condition (e.g., images that are not used to stitch a mesh or to form the basis of a clinical scan). As used herein, a "diagnosis" of a dental condition may include clinically identifying the nature of a disease or other problem by examination of its symptoms. As used herein, "treatment" of a dental condition may include prescribing and/or administering care to address the dental condition. Examples of treatments of dental conditions include the prescription and/or application of brackets/wires, transparent aligners, and/or other appliances to address orthodontic conditions, the prescription and/or application of restorative prosthetic elements to address functional and/or aesthetic requirements of the dentition, and the like. Dental professional system 150 can provide software (e.g., one or more web pages, a stand-alone application (e.g., a dedicated treatment planning and/or treatment visualization application), a mobile application, etc.) to a user that allows the user to interact with other users (e.g., a user of dental consumer/patient system 102, other dental professionals, etc.), create/modify/manage treatment plans (e.g., treatment plans from virtual dental care system 106 and/or treatment plans generated at dental professional system 150), and the like.
The virtual dental care system 106 generally represents any type or form of computing device capable of storing and analyzing data. The virtual dental care system 106 may include a back-end database server for storing patient data and treatment data. Additional examples of virtual dental care systems 106 include, but are not limited to, security servers, application servers, web servers, storage servers, and/or database servers configured to run certain software applications and/or provide various security, web, storage, and/or database services. Although shown as a single entity in fig. 1A, the virtual dental care system 106 may include and/or represent multiple servers working and/or operating in conjunction with one another.
As shown in fig. 1A, the dental consumer/patient system 102, the virtual dental care system 106, and/or the dental professional system 150 may include one or more memory devices, such as memory 140. Memory 140 generally represents any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. In one example, the memory 140 may, in conjunction with the physical processor(s) 130, store, load, execute, and/or maintain one or more of the virtual dental care modules 108. Examples of memory 140 include, but are not limited to, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, a Hard Disk Drive (HDD), a Solid State Drive (SSD), an optical disk drive, a cache memory, variations or combinations of one or more of the above, and/or any other suitable storage memory.
As shown in fig. 1A, the dental consumer/patient system 102, the dental professional system 150, and/or the server 106 may also include one or more physical processors, such as physical processor 130. Physical processor 130 generally represents any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions. In one example, the physical processor 130 may access and/or modify one or more of the virtual dental care modules 108 stored in the memory 140. Additionally or alternatively, the physical processor 130 may execute one or more of the virtual dental care modules 108 to facilitate virtual dental care. Examples of physical processor 130 include, but are not limited to, a microprocessor, a microcontroller, a Central Processing Unit (CPU), a Field Programmable Gate Array (FPGA) implementing a soft-core processor, an Application Specific Integrated Circuit (ASIC), portions of one or more of these, variations or combinations of one or more of these, and/or any other suitable physical processor.
In some embodiments, dental consumer/patient system 102 can include camera 132. The camera 132 may include a camera, scanner, or other optical sensor. The camera 132 may include one or more lenses, or may include one or more camera devices and/or one or more other optical sensors. In some examples, camera 132 may include other sensors and/or devices that may aid in capturing optical data, such as one or more lights, depth sensors, and the like. In various implementations, the camera 132 is not a clinical scanner.
Computer-readable medium 104 generally represents any transitory or non-transitory computer-readable medium or architecture capable of facilitating communication or data transfer. In one example, the computer-readable medium 104 may facilitate communication between the dental consumer/patient system 102, the dental professional system 150, and/or the virtual dental care system 106. In some implementations, the computer-readable medium 104 includes a computer network that facilitates communication or data transmission using wireless and/or wired connections. Examples of computer-readable medium 104 include, but are not limited to, an intranet, a Wide Area Network (WAN), a Local Area Network (LAN), a Personal Area Network (PAN), the Internet, Power Line Communications (PLC), a cellular network (e.g., a Global System for Mobile Communications (GSM) network), portions of one or more of these, variations or combinations of one or more of these, and/or any other suitable network. The computer-readable medium 104 may also include connections between elements within a single device (e.g., buses, any communication infrastructure such as the communication infrastructure 1912 shown in fig. 19, etc.).
The virtual dental care data store 120 includes one or more data stores configured to store any type or form of data that may be used for virtual dental care. In some embodiments, the virtual dental care data store 120 includes, but is not limited to, patient data 136 and treatment data 138. Patient data 136 may include data collected from a patient, such as patient dentition information, patient history data, patient scans, patient information, and the like. The treatment data 138 may include data for treating the patient, such as treatment plans, status of treatment, success of treatment, changes in treatment, notes regarding treatment, and the like.
The example system 100 of fig. 1A may be implemented in a variety of ways. For example, all or a portion of example system 100 may represent portions of example system 200 in fig. 2, system 600 in fig. 6, system 1300 in fig. 13, or system 1600 in fig. 16.
As will be described in greater detail below, one or more of the virtual dental care modules 108 and/or the virtual dental care data store 120 in fig. 1A may enable (when executed by at least one processor of the dental consumer/patient system 102, the virtual dental care system 106, and/or the dental professional system 150) the dental consumer/patient system 102, the virtual dental care system 106, and/or the dental professional system 150 to facilitate providing virtual dental care between a doctor and a patient. As used herein, "virtual dental care" may include computer program instructions and/or software operable to provide remote dental services by a health professional (dentist, orthodontist, dental technician, etc.) to a patient, a potential consumer of dental services, and/or another individual. Virtual dental care may include computer program instructions and/or software operable to provide dental services with no in-person meetings and/or with only limited in-person meetings. As an example, virtual dental care may include software operable to provide dental care from the dental professional system 150 and/or the virtual dental care system 106 to the computing device 102 over the network 104 through, for example, written instructions, interactive applications that allow the health professional and patient/consumer to interact with one another, telephone, chat, and the like. As used herein, "remote dental care" may include computer program instructions and/or software operable to provide a remote service in which a health professional provides dental health care solutions and/or services to a patient. In some embodiments, the virtual dental care facilitated by the elements of system 100 may include non-clinical dental services, such as dental management services, dental training services, dental education services, and the like.
In some embodiments, elements of the system 100 (e.g., the virtual dental care module 108 and/or the virtual dental care data store(s) 120) may be operable to provide smart photo guidance to a patient to capture images related to virtual dental care using the camera 132 on the computing device 102. An example of how elements of system 100 may operate to provide smart photo guidance is shown in fig. 1B.
At operation 160a, the virtual dental care system 106 may provide one or more photo parameters for capturing clinically relevant photographs of the user. As used herein, a "clinically relevant" photograph may include an image that represents the state of a dental condition in the consumer/patient's dentition. A clinically relevant photograph may include a photograph that is sufficient to show the current positions and/or orientations of the teeth in the consumer/patient's mouth. Examples of clinically relevant photographs include photographs showing all of the teeth in a consumer/patient's dental arch; photographs showing the shape of a consumer/patient's dental arch; photographs showing the locations of teeth that are missing, autogenous, ectopic, etc.; and photographs showing malocclusions in the consumer/patient's dental arch (e.g., from anterior, left buccal, right buccal, and/or other perspectives). As used in this context, "photo parameters" may include parameters that define clinically acceptable criteria for one or more photographs (e.g., clinically acceptable positions and/or clinically acceptable orientations of teeth). The photo parameters may include distance parameters, e.g., parameters that parameterize the distance of the camera relative to the consumer/patient's dentition; orientation parameters, e.g., parameters that parameterize the orientation of a photograph of the teeth; openness parameters of a photograph of the consumer/patient's bite, e.g., whether the bite is open, closed, and/or the extent to which the bite is open; dental appliance wear parameters of a photograph of the consumer/patient's bite, e.g., whether the photograph shows a dental appliance (such as a cheek retractor, an aligner, etc.) in the consumer/patient's mouth; camera parameters, e.g., brightness, contrast, and/or exposure parameters of a photograph; and tooth identifier parameters, e.g., parameters that identify a given tooth in a photograph, such as tooth identifiers derived from a treatment plan. At operation 160b, the virtual dental care system 106 may send the one or more photo parameters to the dental consumer/patient system 102. This operation may occur as a file and/or data transfer on the computer-readable medium 104.
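The disclosure does not prescribe a concrete representation for these photo parameters. Purely as an illustration, they might be grouped into a structure such as the following Python sketch, in which every field name, default value, and unit is a hypothetical assumption rather than part of the disclosed system.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PhotoParameters:
    """Hypothetical container for photo parameters (all fields are illustrative assumptions)."""
    # Acceptable camera-to-dentition distance range, in millimeters.
    min_distance_mm: float = 150.0
    max_distance_mm: float = 400.0
    # Clinically acceptable views/orientations required for a complete set of photographs.
    required_views: List[str] = field(
        default_factory=lambda: ["anterior", "left_buccal", "right_buccal"])
    # Whether the bite should be open in the photograph, and how far (0 = closed, 1 = fully open).
    bite_open: bool = False
    bite_openness: float = 0.0
    # Whether a cheek retractor should be in place and whether aligners should be removed.
    cheek_retractor_required: bool = True
    aligner_removed: bool = True
    # Acceptable ranges for camera settings (normalized 0..1).
    min_brightness: float = 0.3
    max_brightness: float = 0.9
    min_contrast: float = 0.2
    # Teeth that must appear in the photographs, e.g., derived from the treatment plan.
    required_tooth_ids: Optional[List[int]] = None
```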
At operation 160c, the dental consumer/patient system 102 may use the one or more photo parameters to intelligently guide the consumer/patient in capturing clinically relevant photographs of their dentition. The dental consumer/patient system 102 may collect image capture rules, based on the photo parameters, that guide the capture of clinically relevant photographs. The dental consumer/patient system 102 may provide software (e.g., one or more web pages, stand-alone applications, mobile applications, etc.) to the consumer/patient that uses the one or more photo parameters to assist the consumer/patient in capturing clinically relevant photographs of their teeth. As an example, the distance parameters may be used to instruct the consumer/patient to position and/or orient the dental consumer/patient system 102 a specific distance away from their teeth so as to capture a photograph with appropriate detail; the distance parameters may indicate whether the camera is too close, too far, or at an appropriate distance. The orientation parameters may be used to guide the photograph toward a clinically relevant orientation; as an example, orientation parameters may be used to instruct the consumer/patient to take anterior-view, left-buccal-view, right-buccal-view, and other photographs. As additional examples, the openness parameters may be used to instruct the consumer/patient to take photographs of various bite states, e.g., an open bite, a closed bite, and/or a bite that is partially open to the extent that is clinically relevant. The dental appliance wear parameters may be used to detect a cheek retractor and/or instruct the consumer/patient to properly position the cheek retractor, and/or to position/orient the photograph so that it is clinically relevant; the dental appliance wear parameters may also be used to detect various dental appliances (aligners, retainers, etc.) and guide the consumer to remove or move the dental appliance in order to obtain clinically relevant photographs. Further, the tooth identifier parameters (e.g., collected from a treatment plan) may be used to instruct the consumer/patient to take a photograph of a sufficient number of teeth such that the photograph is clinically relevant. Camera parameters, such as contrast, brightness, exposure, etc., may be used to instruct the consumer/patient to take a photograph with attributes that make the photograph clinically relevant. In some implementations, the dental consumer/patient system 102 uses the camera parameters to modify one or more photo settings (enable/disable the flash, adjust zoom, brightness, contrast, shadows, highlights, etc.) so that clinically relevant photographs are captured under various conditions.
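A minimal, assumption-laden sketch of how such image capture rules might evaluate a live camera frame against the photo parameters (reusing the hypothetical PhotoParameters structure sketched above) is shown below; the measurement fields, thresholds, and messages are illustrative only and are not taken from the disclosure.

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass
class FrameMeasurements:
    """Quantities a guidance app might estimate for the current camera preview frame."""
    distance_mm: float        # estimated camera-to-dentition distance
    view: str                 # classified view, e.g., "anterior"
    brightness: float         # normalized 0..1
    visible_tooth_ids: Set[int]
    retractor_detected: bool

def capture_feedback(frame: FrameMeasurements, params) -> List[str]:
    """Return user-facing guidance messages; an empty list means the frame is acceptable.
    `params` is assumed to be the hypothetical PhotoParameters sketch from above."""
    messages = []
    if frame.distance_mm < params.min_distance_mm:
        messages.append("Move the camera farther away.")
    elif frame.distance_mm > params.max_distance_mm:
        messages.append("Move the camera closer.")
    if frame.view not in params.required_views:
        messages.append("Aim for one of these views: " + ", ".join(params.required_views) + ".")
    if not (params.min_brightness <= frame.brightness <= params.max_brightness):
        messages.append("Adjust the lighting or enable the flash.")
    if params.cheek_retractor_required and not frame.retractor_detected:
        messages.append("Insert the cheek retractor before taking the photo.")
    if params.required_tooth_ids and not set(params.required_tooth_ids) <= frame.visible_tooth_ids:
        messages.append("Make sure all required teeth are visible in the frame.")
    return messages
```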
At operation 160d, the dental consumer/patient system 102 may operate to capture clinically relevant photographs using intelligent guidance. In some embodiments, the consumer/patient may follow instructions to capture a photograph of their dentition using intelligent guidance provided on the dental consumer/patient system 102. In various embodiments, at least a portion of operation 160d is performed by an automated agent that configures the camera to take the photograph without human intervention. At operation 160e, the dental consumer/patient system 102 may send the captured clinically relevant image to the virtual dental care system 106. This operation may occur as a file and/or data transfer on the computer-readable medium 104.
At operation 160f, the virtual dental care system 106 may store the captured clinically relevant photographs. In various embodiments, the virtual dental care system 106 may store the captured clinically relevant photos in a treatment database associated with the consumer/patient, a clinical data file associated with the consumer/patient, and/or any relevant data store. At operation 160g, the virtual dental care system 106 may send the captured clinically relevant photographs to the dental consumer/patient system 102 and/or the dental professional system 150. This operation may occur through file and/or data transfer on the computer-readable medium 104.
At operation 160h, the dental consumer/patient system 102, the virtual dental care system 106, and/or the dental professional system 150 may use the clinically relevant photographs for virtual dental care. As an example, dental professional system 150 can display instructions to the consumer/patient in the form of an overlay on an image of the consumer/patient's dentition. As another example, dental professional system 150 can display textual and/or interactive instructions to the consumer/patient regarding how to modify and/or improve the capture of clinically relevant photographs. In some embodiments, the dental consumer/patient system 102, the virtual dental care system 106, and/or the dental professional system 150 may use the clinically relevant photographs for, e.g., image-based assessment, intelligent patient guidance, and/or photo-based refinement.
In some embodiments, elements of the system 100 (e.g., the virtual dental care module 108 and/or the virtual dental care data store 120) may be operable to provide one or more image-based assessment tools to a user of the dental professional system 150. As used herein, an "image-based assessment tool" may include a digital tool that operates to provide an image-based assessment of a dental condition. In some embodiments, the image-based evaluation may include a visualization that allows a user of the dental professional system 150 to make decisions regarding the clinical condition. For example, elements of system 100 may provide a visualization that assists a user of dental professional system 150 in making one or more diagnoses of dental conditions. As referred to herein, a visualization may include, for example, a visualization of an assessment of a current stage of a treatment plan; visualization of the assessment may be, but need not be, based on images and knowledge of the ongoing treatment plan. As another example, elements of system 100 may provide a visualization to a user of dental professional system 150 that provides a view of patient assessment over time. An example of how elements of system 100 may operate to provide an image-based assessment tool is shown in FIG. 1C.
At operation 170a, the dental consumer/patient system 102 may capture one or more images of the consumer/patient. The one or more images may include photographs taken by a camera of the dental consumer/patient system 102. The one or more photographs may be captured using the smart photo guidance techniques further described herein. The one or more images may include various perspectives and/or views of the dentition of the consumer/patient. The one or more photographs captured at operation 170a need not include scan data, height map information, and/or data used by a clinical scanner to stitch together a mesh representation of the consumer/patient's dentition. The dental consumer/patient system 102 may store the captured images locally, in a networked folder, or the like. At operation 170b, the dental consumer/patient system 102 may send the captured photographs of the consumer/patient to the virtual dental care system 106. The operations may include file and/or other data transfer on the computer-readable medium 104.
At operation 170c, the virtual dental care system 106 may compare the captured photographs to one or more treatment benchmarks. As used herein, a "treatment benchmark" may include one or more standards or reference points of at least a portion of a treatment plan. A treatment benchmark may include the expected positions of the teeth, jaw, palatal area, etc. of the dentition at a particular stage of the treatment plan. In some embodiments, a treatment benchmark is represented as the expected positions at a particular stage of the treatment plan on a 3D model of the patient's dentition. In various embodiments, a treatment benchmark corresponds to a representation of the patient's dentition against which a dental condition is assessed. As an example, treatment benchmarks may represent various malocclusions of the consumer/patient to be evaluated. At operation 170d, the virtual dental care system 106 may evaluate the dental condition and/or the progress of the treatment plan by comparing the captured photographs to the treatment benchmarks. As indicated herein, the evaluation need not include a diagnosis of the dental condition and/or of progress through the treatment plan.
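The disclosure does not specify how the comparison to a treatment benchmark is computed. The sketch below illustrates one simple possibility, in which tooth positions derived from the photographs are reduced to 3D centroids and compared per tooth against the expected positions of a treatment-plan stage; the data layout and the deviation threshold are assumptions.

```python
import numpy as np

def compare_to_benchmark(observed: dict, expected: dict, threshold_mm: float = 0.5) -> dict:
    """Compare observed tooth centroids (derived from photographs) to the expected
    positions of a treatment-plan stage. Both inputs map tooth id -> (x, y, z) in mm.
    Returns, per tooth, the deviation and a flag for teeth exceeding the threshold."""
    report = {}
    for tooth_id, expected_pos in expected.items():
        if tooth_id not in observed:
            report[tooth_id] = {"deviation_mm": None, "off_track": None, "note": "not visible"}
            continue
        deviation = float(np.linalg.norm(np.asarray(observed[tooth_id], dtype=float)
                                         - np.asarray(expected_pos, dtype=float)))
        report[tooth_id] = {"deviation_mm": deviation, "off_track": deviation > threshold_mm}
    return report

# Example: tooth 8 is close to plan, tooth 9 has drifted about 0.7 mm.
observed = {8: (0.1, 0.0, 0.0), 9: (5.5, 0.5, 0.0)}
expected = {8: (0.0, 0.0, 0.0), 9: (5.0, 0.0, 0.0)}
print(compare_to_benchmark(observed, expected))
```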
At operation 170e, the virtual dental care system 106 may provide the assessed dental condition and/or progress assessment to the dental consumer/patient system 102 and/or the dental professional system 150. This operation may occur as a file and/or data transfer on the computer-readable medium 104. The dental consumer/patient system 102 and/or the dental professional system 150 may perform additional operations using the assessed dental condition and/or progress assessment. As one example, the dental consumer/patient system 102 may display the dental condition and/or progress assessment at operation 170f. For example, the dental consumer/patient system 102 may display user interface elements (annotated 3D models, annotated images, informational and/or interactive user interface elements, etc.) showing the assessment to the consumer/patient, for example, in an application and/or web page.
As another example, in operation 170g, dental professional system 150 can use the dental condition and/or progress assessment to process the diagnosis and/or prescription of the consumer/patient. In operation 170g, the diagnosis may also be based on one or more clinical images of the consumer/patient dentition (intraoral scan, x-ray, CBCT scan, etc.). In some embodiments, the doctor may use software on the dental professional system 150 to perform a diagnosis of the progression of the dental condition and/or treatment plan. As an example, a doctor may use treatment planning software on dental professional system 150 to diagnose malocclusions and/or other dental conditions reflected in photographs from a consumer/patient. Instructions corresponding to the diagnosis may be processed by dental professional system 150. In various embodiments, a dental professional may provide a prescription to treat one or more dental conditions. As an example, a dental professional may prescribe one or more dental appliances (transparent aligners, orthodontic appliances, restorative appliances, etc.) through dental professional system 150 to treat a dental condition associated with a dental condition and/or a progression assessment. For the initial evaluation, the prescription may include an initial prescription of the dental appliance. For the progression assessment, the prescription may include a corrective dental appliance configured to correct deviations from the treatment plan.
At operation 170h, the dental professional system 150 may provide the virtual dental care system 106 with a diagnosis and/or prescription for treatment planning and/or virtual dental care. At operation 170i, the virtual dental care system 106 may use the diagnosis/prescription for treatment planning and/or virtual dental care. At operation 170j, the dental professional system 150 can provide the diagnosis and/or prescription to the dental consumer/patient system 102. At operation 170k, the dental consumer/patient system 102 may display the diagnosis to the consumer/patient.
In some embodiments, elements of the system 100 (e.g., the virtual dental care module 108 and/or the virtual dental care data store(s) 120) may be operable to provide smart patient guidance to a consumer/patient using the dental consumer/patient system 102. As used herein, "intelligent patient guidance" may include instructions that direct a consumer/patient to take one or more actions. In some embodiments, the elements of system 100 use consumer/patient photographs, treatment parameters provided by a physician, and/or other information to generate intelligent patient guidance.
In some embodiments, the intelligent patient guidance is supplied by an automated agent without intervention (or with minimal intervention, e.g., the doctor providing treatment parameters and/or interacting with a guidance template). The intelligent patient guidance may include, for example: instructions to replace (and/or when to replace) a specified dental appliance (e.g., an aligner, a retainer, etc.); instructions to continue using (and/or when to continue using) a dental appliance; instructions to use (and/or where to use) supplemental dental appliances (e.g., chewies, mints, etc.) in relation to a subsequent dental appliance; instructions to direct attention to areas of the consumer/patient's dentition (anterior portions, posterior portions, portions that are expected to move during a particular stage, portions where the movement of individual teeth is fixed, etc.); instructions to notify the doctor at a specified time or in response to a specific event (e.g., movement of a tooth at a specified time, movement of a tooth according to a specific movement pattern, etc.); instructions to capture one or more images of the consumer/patient's dentition at a specified time/treatment stage for the purpose of tracking progress; instructions for the consumer/patient to visit the doctor, set up an appointment, or take other actions in relation to the doctor; and so on. As noted herein, intelligent patient guidance may include any combination and/or variation of the foregoing examples.
The intelligent patient guidance may be adapted to resolve conflicts; for example, the guidance may be determined by prioritizing some forms of action and/or removing conflicting forms of action from the guidance. Guidance rules might otherwise produce a set of conflicting or competing instructions for the patient. For example, one rule may call for using a chewie, another for setting an appointment, and a third for having the system alert the doctor; in such a case, only the alert-the-doctor rule may be activated, because the doctor may override the other rules. Another example may be one rule indicating the use of a chewie on a first premolar and another rule indicating the use of a chewie on a second premolar on the same side; clearly, only one chewie is needed. Eliminating conflicts can ensure that only relevant guidance is provided to the patient.
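A minimal sketch of this kind of prioritization is shown below; the priority values and conflict handling are assumptions chosen to mirror the two examples in the preceding paragraph, not the actual rules used by the disclosed system.

```python
# Hypothetical priorities: an alert to the doctor overrides appointment and chewie guidance.
PRIORITY = {"notify_doctor": 3, "schedule_appointment": 2, "use_chewie": 1}

def resolve_guidance(actions: list) -> list:
    """Each action is a dict such as {"type": "use_chewie", "side": "upper right"}.
    Keep only actions of the highest triggered priority, and at most one chewie per side."""
    if not actions:
        return []
    top = max(PRIORITY.get(a["type"], 0) for a in actions)
    kept, chewie_sides = [], set()
    for action in actions:
        if PRIORITY.get(action["type"], 0) < top:
            continue  # suppressed by a higher-priority action (e.g., the doctor alert)
        if action["type"] == "use_chewie":
            side = action.get("side")
            if side in chewie_sides:
                continue  # only one chewie is needed per side
            chewie_sides.add(side)
        kept.append(action)
    return kept

# Example: the doctor alert suppresses the appointment and chewie guidance.
print(resolve_guidance([
    {"type": "use_chewie", "side": "upper right"},
    {"type": "schedule_appointment"},
    {"type": "notify_doctor"},
]))
```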
The intelligent patient guidance supplied by the elements of system 100 may be based on a dental condition and/or progress assessment (e.g., an assessment reflected in the images captured by the consumer/patient), treatment parameters, and the like. As used herein, a "treatment parameter" may include a parameter used to specify an attribute of a treatment plan applied to a consumer/patient. The treatment parameters may include physician preference parameters, e.g., parameters indicating the treatment protocols that a physician (and/or other physicians, e.g., physicians whose treatment protocols are adopted by a particular physician) would apply to various patients and/or clinical conditions. The treatment parameters may include per-patient parameters, e.g., parameters used to specify a treatment protocol for a particular consumer/patient. Per-patient parameters need not be based only on attributes specific to that consumer/patient, and may include, for example, demographic information (information related to the consumer/patient's race, gender, age, etc.), information about historical treatment cases (e.g., cases with dental conditions similar in form to the consumer/patient's), information about idealized dental arches (e.g., dental arches with an idealized/near-idealized bite as defined by treatment professionals), and/or other information.
In some embodiments, elements of system 100 may utilize a doctor guidance template, which, as used herein, may include a formatted data structure indicating a set of rules that a doctor may use to track a treatment plan. Example rules may specify that a deviation of the central incisors of more than 0.75 millimeters (mm) from the treatment plan should result in a new appointment; that a deviation of the central incisors of 0.5 mm to 0.75 mm should be watched; that an increasing deviation of the central incisors over a period of two (2) months should result in a new appointment; that a deviation of the central incisors of 0.25 mm to 0.5 mm should result in the current set of aligners being worn for another round; and that a deviation of the central incisors of less than 0.25 mm may be considered "on track." Other rules may indicate that teeth marked as "do not move" should not deviate from their planned positions, and that any deviation of such teeth greater than 0.25 mm should result in an appointment. Rules in the doctor guidance template may allow for conditions based on the treatment plan and/or other factors. In some embodiments, rules in the doctor guidance template may be written with reference to a time frame and/or based on patient history data (e.g., historical information and/or historical measurement information regarding patient guidance provided to the consumer/patient in the past). An example of how elements of system 100 may operate to provide intelligent patient guidance is shown in fig. 1D.
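Purely as an illustration, the example central-incisor rules above can be encoded as a small threshold function; the sketch below follows the quoted example values but is not the template format used by the disclosed system.

```python
def central_incisor_guidance(deviation_mm: float, increasing_over_two_months: bool = False) -> str:
    """Apply the example central-incisor rules quoted above (values from the example, format assumed)."""
    if deviation_mm > 0.75 or increasing_over_two_months:
        return "schedule_new_appointment"
    if deviation_mm > 0.5:
        return "watch"
    if deviation_mm > 0.25:
        return "wear_current_aligner_another_round"
    return "on_track"

def no_movement_tooth_guidance(deviation_mm: float) -> str:
    """Teeth marked 'do not move' should not deviate; any deviation over 0.25 mm triggers an appointment."""
    return "schedule_new_appointment" if deviation_mm > 0.25 else "on_track"

# Example: a 0.6 mm central-incisor deviation that is not increasing is watched.
print(central_incisor_guidance(0.6))          # watch
print(central_incisor_guidance(0.3, True))    # schedule_new_appointment
```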
At operation 180a, the dental consumer/patient system 102 may capture one or more images of the consumer/patient. The one or more images may include photographs taken by a camera of the dental consumer/patient system 102. The one or more photographs may be captured using the smart photo guidance techniques further described herein. The one or more images may include various perspectives and/or views of the dentition of the consumer/patient. The one or more photographs captured at operation 180a need not include scan data, height map information, and/or data used by a clinical scanner to stitch together a mesh representation of the consumer/patient's dentition. The one or more photographs may reflect the status of a treatment plan intended for and/or being performed on the consumer/patient. As an example, the one or more photographs may capture an initial assessment of the consumer/patient's dentition and/or reflect the patient's progress at a specified stage of the treatment plan. The dental consumer/patient system 102 may store the captured images locally, in a networked folder, or the like. At operation 180b, the dental consumer/patient system 102 may send the captured photographs of the consumer/patient to the virtual dental care system 106. The operations may include file and/or other data transfer on the computer-readable medium 104.
At operation 180c, the dental professional system 150 can collect the treatment parameters for the consumer/patient. As indicated herein, the treatment parameters may include physician preference parameters, per-patient parameters, and the like. At operation 180d, the dental professional system 150 may send the treatment parameters to the virtual dental care system 106. The operations may include file and/or data transfer on the computer-readable medium 104.
At operation 180e, the virtual dental care system 106 may create and/or update a doctor guidance template using the treatment parameters. As noted herein, a doctor guidance template may provide one or more rules that the doctor may use to track the delivery of the consumer/patient's treatment plan. The doctor guidance template may adapt one or more rules to perform guidance conflict resolution and/or to prioritize various forms of action given physician preferences, patient attributes, and the like. The virtual dental care system 106 may store the doctor guidance template in any relevant format, including but not limited to any transitory and/or non-transitory medium. At operation 180f, the virtual dental care system 106 may send the doctor guidance template to the dental professional system 150.
At operation 180g, dental professional system 150 can process instructions to review, edit, and/or approve the doctor guidance template. In some embodiments, dental professional system 150 can provide a user interface and/or other software to the doctor that allows the doctor to review the doctor guidance template, make any changes to it, and/or approve/finalize it so that it can be applied to a particular patient, such as the consumer/patient using dental consumer/patient system 102. As an example, in some embodiments, a doctor may, based on one or more factors, provide instructions to override a particular portion of the doctor guidance template, such as factors related to particular attributes of a particular consumer/patient. At operation 180h, the dental professional system 150 may send the reviewed/edited/approved doctor guidance template to the virtual dental care system 106. This operation may occur as a file and/or data transfer on the computer-readable medium 104.
At operation 180i, the virtual dental care system 106 can generate intelligent patient guidance rules (e.g., rules that guide the application of the treatment parameters to the consumer/patient) using the captured photographs and, optionally, the guidance template. In some embodiments, the virtual dental care system 106 may use the photographs captured at the dental consumer/patient system 102 and the doctor guidance template reviewed, edited, and/or approved by the dental professional system 150 to generate intelligent patient guidance rules for the consumer/patient. At operation 180j, the virtual dental care system 106 may generate patient guidance instructions using the intelligent patient guidance rules. The patient guidance instructions may take the form of instructions for the consumer/patient to take specific actions (add/replace a dental appliance, wear a dental appliance longer or shorter than originally prescribed, etc.), instructions that modify appointments and/or tasks, and/or instructions to interact with the doctor in new and/or modified ways (e.g., drawing attention to an area of the dentition that merits increased attention).
At operation 180k, the virtual dental care system 106 may provide the patient guidance instructions to the dental consumer/patient system 102 and/or the dental professional system 150. This operation may occur as a file and/or data transfer on the computer-readable medium 104.
At operation 180k, the dental consumer/patient system 102 may use the patient guidance instructions to guide the consumer/patient. In various embodiments, the dental consumer/patient system 102 may present the consumer/patient with automated and/or interactive software elements that instruct the consumer/patient to take specified actions with respect to their treatment plan. As noted herein, example actions include instructions to replace a dental appliance, to keep wearing a dental appliance longer than initially specified, to use a supplemental dental appliance at a specified time/location, to set an appointment for a particular condition and/or at a specified time/location, and the like. At operation 180l, the dental professional system 150 may guide the doctor using the patient guidance instructions. In various embodiments, dental professional system 150 can present the doctor with automated and/or interactive software elements, such as elements that set appointments for the patient, that inform the doctor of one or more conditions and/or areas of the consumer/patient's dentition requiring attention, and the like.
In some embodiments, elements of the system 100 (e.g., the virtual dental care modules 108 and/or the virtual dental care data store(s) 120) may be operable to provide photo-based refinements to a user of the dental professional system 150. As used herein, "photo-based refinements" may include tools that allow a doctor, in the course of virtual dental care, to prescribe corrective treatment for a consumer/patient whose treatment has deviated from its intended course. The tools may use photographs and may avoid the need to rescan the consumer/patient (e.g., to perform a second and/or subsequent clinical scan after an initial clinical scan) and/or, for example, the need to assess the consumer/patient in person at a doctor's office. In some implementations, photo-based refinements can provide tools for a doctor to remotely create a secondary (e.g., refined) treatment plan without needing to physically see and/or evaluate the consumer/patient. A photo-based refinement may optimize one or more camera parameters to align the consumer/patient's treatment plan with the photographs captured by/for the consumer/patient. A photo-based refinement may also optimize one or more pose parameters (e.g., position parameters, orientation parameters, etc.) of the consumer/patient's teeth to ensure that the teeth are represented in the proper space. As referred to herein, photo-based refinements may be displayed to the doctor as user interface elements (e.g., overlays) representing the consumer/patient's dentition in relation to the treatment plan. Photo-based refinements may be used to plan one or more refined treatment plans using the 3D tooth shapes from the original treatment plan and/or the tooth positions found using the techniques described herein; as noted herein, this information may be used to plan one or more new/refined treatment plans. An example of how elements of system 100 may operate to provide photo-based refinements is shown in fig. 1E.
At operation 190a, the dental consumer/patient system 102 may capture one or more images of the consumer/patient at a particular time (e.g., at one or more times during the course of virtual dental care). The one or more images may include photographs taken by a camera of the dental consumer/patient system 102. The one or more photographs may be captured using the smart photo guidance techniques further described herein. The one or more images may include various perspectives and/or views of the dentition of the consumer/patient. As an example, the one or more images may include a plurality of images representing more than one view of the consumer/patient's dentition. For example, images may be taken from an anterior view, a left buccal view, a right buccal view, and/or other perspectives. As indicated herein, the one or more images may be captured while the consumer/patient is intelligently guided in taking photographs of their dentition. The one or more photographs captured at operation 190a need not include scan data, height map information, and/or data used by a clinical scanner to stitch together a mesh representation of the consumer/patient's dentition. The one or more photographs may reflect the status of a treatment plan intended for and/or being performed on the consumer/patient. As an example, the one or more photographs may capture an initial assessment of the consumer/patient's dentition and/or reflect the patient's progress at a specified stage of the treatment plan. The dental consumer/patient system 102 may store the captured images locally, in a networked folder, or the like. At operation 190b, the dental consumer/patient system 102 may send the captured photographs of the consumer/patient to the virtual dental care system 106. The operations may include file and/or other data transfer on the computer-readable medium 104.
At operation 190c, the dental professional system 150 can request a first treatment plan for the consumer/patient. In some embodiments, the doctor may request a first treatment plan for the consumer/patient through instructions provided to dental professional system 150. The first treatment plan may include any set of instructions for addressing a dental condition of the consumer/patient. As an example, the first treatment plan may include instructions to move the consumer/patient's teeth from a first arrangement toward a target arrangement. The first treatment plan may prescribe the use of successive dental appliances (e.g., a plurality of successive aligners shaped to receive and resiliently reposition the consumer/patient's teeth from an initial arrangement toward a target arrangement). The first treatment plan may include the use of crowns, bridges, implants, and/or other restorative dental appliances to restore properties of the consumer/patient's dentition. In various embodiments, the first treatment plan is based on a clinical scan, such as a clinical scan that occurred prior to operation 190a.
At operation 190d, the dental professional system 150 can send a request for a first treatment plan to the virtual dental care system 106. This operation may occur as a file and/or data transfer on the computer-readable medium 104.
At operation 190e, the virtual dental care system 106 may retrieve the first treatment plan in response to the request for the first treatment plan. Retrieving the first treatment plan may involve providing instructions to a treatment data store to retrieve a clinical data file associated with the consumer/patient. The clinical data file may represent an initial position of the consumer/patient's dentition, an intended target position of the consumer/patient's dentition, and/or a plurality of intermediate positions for moving the consumer/patient's dentition from the initial position toward the intended target position. In some implementations, the clinical data file may include specific clinical preferences (e.g., the stage at which interproximal reduction (IPR) is performed, the locations and/or times at which attachments are applied during the first treatment plan, etc.). The clinical data file may also include the clinical preferences of the doctor managing the prescription of the first treatment plan and the specific attributes of the dental appliances used to implement the first treatment plan.
At operation 190f, the virtual dental care system 106 may identify the intended arrangement of the first treatment plan at the particular time when the photographs of the consumer/patient were taken at the dental consumer/patient system 102. The virtual dental care system 106 may identify the stage of the first treatment plan at which the photographs were captured at the dental consumer/patient system 102, for example, using the period of time since the initial implementation of the first treatment plan, the spatial relationships between the teeth in the photographs captured at the dental consumer/patient system 102, and/or other information. The virtual dental care system 106 may further evaluate a file representing the intended arrangement at the identified stage of the first treatment plan to identify a 3D structure, e.g., a mesh corresponding to the identified stage of the first treatment plan.
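As a rough illustration of the time-based part of that identification, the stage in effect when the photographs were taken could be estimated from the elapsed wear time, as in the sketch below; the per-stage wear duration and stage count are placeholder assumptions, not values from the disclosure.

```python
from datetime import date

def estimate_stage(treatment_start: date, photo_date: date,
                   days_per_stage: int = 14, total_stages: int = 30) -> int:
    """Estimate which treatment-plan stage should be in effect on photo_date,
    assuming each aligner stage is worn for days_per_stage days (an assumed schedule)."""
    elapsed_days = (photo_date - treatment_start).days
    stage = elapsed_days // days_per_stage
    return max(0, min(stage, total_stages - 1))

# Example: photographs taken 40 days into treatment fall in stage 2 (0-indexed).
print(estimate_stage(date(2021, 1, 4), date(2021, 2, 13)))
```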
At operation 190g, the virtual dental care system 106 may estimate photo parameters of the photographs captured at the dental consumer/patient system 102 to generate alignment data, e.g., data representing the intended arrangement of the first treatment plan aligned with the photographs. In some embodiments, the virtual dental care system 106 optimizes 3D parameters from the images captured at the dental consumer/patient system 102. Examples of 3D parameters that may be optimized include camera parameters, position parameters, orientation parameters, and the like. The 3D parameter optimization may be performed using various techniques, such as differentiable rendering, expectation maximization, etc. The applicant hereby incorporates by reference the following applications as if fully set forth herein: U.S. Patent Application Ser. No. 62/952,850; U.S. Patent Application Ser. No. 16/417,354; U.S. Patent Application Ser. No. 16/400,980; U.S. Patent Application Ser. No. 16/455,441; and U.S. Patent Application Ser. No. 14/831,548 (now U.S. Patent No. 10,248,883). Once the photo parameters are estimated/optimized, the virtual dental care system 106 can use those photo parameters to determine where the consumer/patient's teeth do not track the first treatment plan. For example, the virtual dental care system 106 may estimate where the consumer/patient's teeth are in the expected positions/orientations and where the teeth deviate from the expected positions/orientations.
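The cited applications describe the actual optimization techniques (differentiable rendering, expectation maximization, etc.). The toy sketch below only illustrates the general idea of estimating camera pose parameters by minimizing the reprojection error between projected 3D tooth points and their detected 2D locations in a photograph; it is not the disclosed method, and the pinhole model and starting values are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def project(points_3d, rvec, tvec, focal):
    """Pinhole projection using a Rodrigues rotation vector (no lens distortion modeled)."""
    theta = np.linalg.norm(rvec)
    k = rvec / theta if theta > 1e-9 else np.zeros(3)
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    cam = points_3d @ R.T + tvec          # transform model points into camera coordinates
    return focal * cam[:, :2] / cam[:, 2:3]

def fit_camera(points_3d, points_2d, focal=1000.0):
    """Estimate the rotation (rvec) and translation (tvec) that best align the 3D model with the photo."""
    def residual(x):
        return (project(points_3d, x[:3], x[3:6], focal) - points_2d).ravel()
    x0 = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 300.0])  # start with the camera in front of the dentition
    result = least_squares(residual, x0)
    return result.x[:3], result.x[3:6]
```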
At operation 190h, the virtual dental care system 106 may generate an alignment mesh (e.g., an updated segmented mesh) using the alignment data. The alignment mesh may include a 3D representation of the consumer/patient dentition that reflects the photographs taken at the consumer/patient system 102. At operation 190i, the virtual dental care system 106 may evaluate the first treatment plan for modification using the alignment mesh. The virtual dental care system 106 can identify positions of the consumer/patient's teeth that deviate from the planned trajectory and/or from the intended arrangement specified by the first treatment plan. The virtual dental care system 106 may store any modifications in the clinical data file associated with the consumer/patient. At operation 190j, the virtual dental care system 106 may send the proposed modifications to the doctor. This operation may occur as a file and/or data transfer on the computer-readable medium 104.
At operation 190k, the dental professional system 150 can present the proposed modifications to the doctor and/or facilitate the doctor's review of them. In various embodiments, the dental professional system 150 shows the doctor the proposed modifications to the 3D model and/or image representing the consumer/patient dentition. The dental professional system 150 may further allow the doctor to accept, reject, and/or further modify the 3D model and/or image. As an example, the dental professional system 150 can allow the practitioner to further move the position of attachments, modify the aligner and/or force system, modify the stage at which IPR is performed, and the like. At operation 190l, the dental professional system 150 may send the reviewed modifications to the virtual dental care system 106, for example, as a file and/or data transfer on the computer-readable medium 104. At operation 190m, the virtual dental care system 106 may update the first treatment plan with the reviewed modifications. In various embodiments, the virtual dental care system 106 updates the clinical data file associated with the consumer/patient with the reviewed modifications.
For example, and as will be described in greater detail below, one or more of the virtual dental care modules 108 may cause the dental consumer/patient system 102, the dental professional system 150, and/or the virtual dental care system 106 to perform one or more of the steps illustrated in fig. 3, 7, 14, 15, and/or 17.
Intelligent photo guidance
To perform virtual orthodontic care, virtual dental care, and/or other telemedicine, a practitioner may wish to visually inspect a patient. For example, a practitioner may wish to examine patient progress during a treatment plan, diagnose possible problems, and modify the treatment plan as needed. The availability of high-resolution cameras integrated into smartphones, for example, allows patients to take photographs of sufficiently high resolution for a practitioner to examine the patient. However, the patient may not know how to properly frame the clinically relevant body part for examination by a practitioner. For example, an orthodontic practitioner may require a specific view of a given tooth of a patient. The patient may not know which teeth to capture, from which angles to take the picture, whether to wear an oral appliance, and so on.
As will be described further below, the systems and methods provided in the present disclosure may utilize artificial intelligence to provide guidance to a patient regarding taking clinically relevant orthodontic photographs. The systems and methods provided in the present disclosure may improve the functionality of a computing device by more efficiently acquiring image data, which may further reduce storage requirements and network bandwidth. Furthermore, the systems and methods provided herein may improve the field of virtual medicine by improving the functional capabilities of the remote device. Furthermore, the systems and methods provided herein may improve the field of medical imaging by providing near real-time classification of images for various classifiers.
FIG. 2 is a block diagram of an example system 200 for Artificial Intelligence (AI) aided photo guidance. As shown in this figure, the example system 200 may include one or more virtual dental care modules 108 for performing one or more tasks. As will be explained in more detail below, the virtual dental care module 108 may include a camera module 204, an AI module 206, a direction module 208, and a requirements module 210. Although shown as separate elements, one or more of the virtual dental care modules 108 in fig. 2 may represent a single module or part of an application.
In some embodiments, one or more of the virtual dental care modules 108 in fig. 2 may represent one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks. For example, and as will be described in greater detail below, one or more of the virtual dental care modules 108 may represent a module stored on and configured to run on one or more computing devices, such as the devices shown in fig. 1A (e.g., computing device 102 and/or server 106). One or more of the virtual dental care modules 108 in fig. 2 may also represent all or part of one or more special purpose computers configured to perform one or more tasks.
As shown in fig. 2, the example system 200 may also include one or more virtual dental care data stores 120, such as an image data stream data store 222, a binary classification data store 224, a category classification data store 226, an instructional cue data store 228, image data 232, and requirement data 234. The virtual dental care data store 120 may include one or more data stores configured to store any type or form of data or information.
Fig. 3 is a flow chart of an exemplary computer-implemented method 300 for AI-assisted photo guidance. The steps illustrated in fig. 3 may be performed by any suitable computer executable code and/or computing system, including the systems illustrated in fig. 1 and 2. In one example, each of the steps shown in fig. 3 may represent an algorithm whose structure includes and/or is represented by multiple sub-steps, examples of which are provided in more detail below.
As shown in fig. 3, at step 302, one or more of the systems described herein may receive an image data stream from a camera. For example, the camera module 204 may receive the image data stream 222 from the camera 132 of the system 200 or another camera in communication with the system 200.
In some embodiments, the term "image data stream" may refer to optically captured data that may be temporarily stored in a buffer (e.g., a camera buffer) or otherwise saved in a device memory. Examples of image data streams include, but are not limited to, one or more photographs, videos, and the like. The image data stream may include additional sensor data, such as depth data.
The system described herein may perform step 302 in various ways. In one example, the camera module 204 may receive the image data stream 222 from a buffer of the camera 132. The image data stream 222 may be temporarily stored image data, such as image data corresponding to a viewfinder of the camera 132. In other examples, image data stream 222 may include captured and stored images.
Fig. 4 illustrates a data flow of an apparatus 400, which may correspond to the system 200 and/or the computing device 102. At 404, the camera image/video buffer may temporarily store image data (e.g., image data stream 222). The image data stream 222 may be raw image and/or video data or may be processed. For example, the image data stream 222 may be corrected for any visual artifacts, compressed and/or decompressed, reformatted, and/or resized for further processing, etc.
Returning to fig. 3, at step 304, one or more of the systems described herein may determine one or more binary classifications and one or more class classifications from the image data stream using an artificial intelligence scheme. For example, the AI module 206 may determine a binary classification 224 and a category classification 226 from the image data stream 222.
In some embodiments, the term "binary classification" may refer to a characteristic that may be defined as having one of two states (e.g., yes or no). With respect to an image data stream, examples of binary classifications may include, but are not limited to, whether a particular tooth is visible, whether a particular group of teeth (e.g., posterior teeth, etc.) is visible, whether the upper jaw is visible, whether the lower jaw is visible, whether an appliance (e.g., aligner, cheek retractor, etc.) is visible, whether a focus threshold is met, whether the entire body part is visible, whether the upper and lower teeth are in contact, whether an illumination threshold is met, whether localized calculus (e.g., plaque accumulation) is present, and whether gingival recession is present.
In some embodiments, the term "category classification" may refer to a characteristic that may be classified into one or more categories. In some implementations, the characteristics may be categorized into one or more mutually exclusive categories. With respect to image data streams, examples of category classification may include, but are not limited to, front view, left cheek view, and right cheek view.
In some embodiments, certain characteristics may be either binary classifications or category classifications. For example, the head pose of the patient (e.g., the angle of the patient's head as observed in the image data stream) may be a binary classification (e.g., upright or tilted) or a category classification (e.g., classified into various pose classes based on slight tilt, large tilt, angle toward or away, etc.). In another example, the blurriness of the image data stream may be a binary classification (e.g., too blurry or acceptably sharp) or a category classification (e.g., degree of blur, blurred regions within the image data stream).
In one example, the AI module 206 can analyze the image data stream 222 and save the analysis results as a binary classification 224 and a category classification 226. In fig. 4, at 406, the image data stream 222 may be classified by a neural network classifier (e.g., AI module 206).
Fig. 5 illustrates an environment 500 for classification. The image 522, which may correspond to the image data stream 222, may be an input to the neural network 506, which may correspond to the AI module 206. The neural network 506 may include one or more AI schemes, such as convolutional neural networks, deep learning, and the like. The neural network 506 may be trained via training data to discern the various classifications described above. The neural network 506 may determine a category classification 526 corresponding to the category classification 226.
In addition, the neural network 506 may include a binary classifier. The binary classifier may be trained using binary cross entropy, a loss function that penalizes the probability predicted for each of the two possible values of a binary classification. The neural network 506 may determine a binary classification 524 that may correspond to the binary classification 224.
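As a hedged illustration of how such a classifier might be organized, the PyTorch sketch below attaches a set of binary heads (trained with binary cross entropy) and a view-category head (trained with categorical cross entropy) to a shared backbone. The backbone choice, head sizes, and label meanings are assumptions for illustration only, not the implementation of the disclosed system.

```python
# Illustrative multi-head classifier: binary heads + a view-category head.
import torch
import torch.nn as nn
import torchvision.models as models


class PhotoGuidanceClassifier(nn.Module):
    def __init__(self, num_binary=8, num_views=3):
        super().__init__()
        backbone = models.mobilenet_v2(weights=None)     # lightweight backbone suited to on-device use
        self.features = backbone.features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.binary_head = nn.Linear(1280, num_binary)   # e.g., tooth visible, retractor visible
        self.category_head = nn.Linear(1280, num_views)  # e.g., anterior / left buccal / right buccal

    def forward(self, frame):
        x = self.pool(self.features(frame)).flatten(1)
        return self.binary_head(x), self.category_head(x)


model = PhotoGuidanceClassifier()
frame = torch.randn(1, 3, 224, 224)                      # one frame from the image data stream
binary_logits, view_logits = model(frame)

# Binary cross entropy over each yes/no output; cross entropy over the view categories.
bce = nn.BCEWithLogitsLoss()(binary_logits, torch.zeros_like(binary_logits))
ce = nn.CrossEntropyLoss()(view_logits, torch.tensor([0]))
loss = bce + ce
```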
Returning to fig. 3, at step 306, one or more of the systems described herein may compare one or more binary classifications and one or more category classifications to a set of requirements. For example, the guidance module 208 may compare the binary classification 224 and the category classification 226 to the requirements 234. The requirements may indicate what clinically relevant information may be needed, in particular about the photograph.
The system described herein may perform step 306 in various ways. In one example, the requirements module 210 may determine the requirements 234 that may be tailored to a particular patient in a particular state of treatment. For example, the requirements module 210 may analyze the patient data 136 and/or the therapy data 138 to determine the requirements 234. Patient data 136 may indicate patient-specific conditions that may affect requirements 234. For example, the patient data 136 may indicate that the patient lacks certain teeth such that the requirement 234 may not require visibility of teeth that are known to be missing and therefore not visible.
In some examples, the requirements module 210 may reside in the server 106 such that the requirements 234 may be sent to the computing device 102. In other examples, the server 106 may send the patient data 136 and/or the therapy data 138 to the computing device 102 so that the computing device 102 may determine the requirements 234 locally. Fig. 4 shows at 410 that the requirements and expectations (e.g., requirements 234) may be inputs for the guideline generation and capture initiation at 408.
The requirements 234 may include, for example, visibility of a particular body part (e.g., posterior teeth, etc.), visibility of a particular appliance (e.g., cheek retractor), type of view captured, head pose (e.g., satisfactory head pose relative to a camera), etc. The particular body part may correspond to a tooth of interest identified from a current state of the treatment plan. For example, patient data 136 and/or treatment data 138 may indicate significant movement of a particular tooth. The particular body part may also correspond to one or more teeth in the vicinity of the tooth of interest. For example, if a tooth is expected to move significantly, adjacent teeth may be of interest.
In some examples, diagnosis may require the patient to wear an appliance. For example, the patient may be required to wear a cheek retractor to properly expose the patient's teeth for viewing. In another example, the patient may be required to wear an orthodontic aligner so that a practitioner can check the aligner's interaction with the patient's teeth.
The guidance module 208 may determine whether the requirements 234 are met from the binary classification 224 and the category classification 226. For example, the guidance module 208 may determine from the category classification 226 whether the desired view of the patient's teeth has been captured. The guidance module 208 may further determine and indicate from the binary classification 224 whether the desired teeth are visible in the desired view.
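A minimal sketch of this comparison step, assuming hypothetical requirement keys and classification names, might look like the following; it simply reports which requirements the current frame fails to satisfy.

```python
# Illustrative comparison of classifier outputs against patient-specific requirements.
def unmet_requirements(binary, category, requirements):
    """Return human-readable descriptions of requirements the frame does not meet."""
    failures = []
    if category.get("view") != requirements["view"]:
        failures.append(f"need {requirements['view']} view")
    for name in requirements["must_be_visible"]:
        if not binary.get(name, False):
            failures.append(f"{name} not visible")
    if requirements.get("cheek_retractor") and not binary.get("cheek_retractor", False):
        failures.append("cheek retractor not detected")
    return failures


# Requirements tailored to a particular patient at a particular stage of treatment.
requirements = {"view": "left_buccal",
                "must_be_visible": ["upper_jaw", "lower_jaw", "posterior_teeth"],
                "cheek_retractor": True}
binary = {"upper_jaw": True, "lower_jaw": True, "posterior_teeth": False, "cheek_retractor": True}
print(unmet_requirements(binary, {"view": "anterior"}, requirements))
```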
Returning to fig. 3, at step 308, one or more of the systems described herein may provide feedback based on the comparison. For example, the guidance module 208 may provide guidance prompts 228.
In some embodiments, the term "instructional cue" may refer to an audio, visual, and/or tactile cue that may provide instructions to a user. Examples of instructional cues may include, but are not limited to, overlays on the device screen, text notifications, verbal instructions, tones or other sounds, vibrations, and the like.
The system described herein may perform step 308 in various ways. In one example, the guidance module 208 may determine the guidance prompt 228 based on the comparison. The instructional prompt 228 may include instructions for the user to manipulate the system 200 into a configuration that can take images that meet the requirements 234. For example, the instructions may include instructions to adjust a camera view of the camera to include a particular body part in the camera view, such as to move the camera closer or farther, pan/tilt/zoom the camera, change angles, track or otherwise move the camera, and so forth. The instructions may include instructions to insert or remove a particular orthotic. The instructions may also include instructions to move a particular body part, such as to open or close a patient's bite, to open a patient's jaw wider, and so on. The instructions may include instructions to adjust one or more camera settings, such as zoom, focus, turn on/off a flash, etc.
The instructional prompt 228 may indicate whether the requirements 234 are met. For example, the instructional prompt 228 may instruct the patient to take a photograph to save as image data 232.
In fig. 4, at 428, a guide (e.g., guidance prompt 228) may be displayed or an image may be captured. The instructional cues 228 may include visual cues that may be visually displayed, such as overlays showing guide lines, arrows, graphical instructions, text in overlays or windows, light patterns, grayed-out images, ghosting, and the like. The instructional cues 228 may include audible cues that may be presented as audio, such as verbal instructions, sounds, warning tones, beeps that increase or decrease in rate (e.g., as the view moves closer to or farther from satisfying the requirements 234), and the like. The instructional cues 228 may include haptic cues that may be presented as vibrations, such as warning vibrations or other haptic responses (e.g., vibrations that decrease in intensity as the view approaches the requirements 234, or a vibration when the requirements 234 are met).
The feedback may include instructions to the system 200 for performing an automatic action when the requirement 234 is not met. The instructional prompt 228 may instruct the camera module 204 to automatically adjust one or more camera settings. For example, instead of instructing the patient to adjust the camera settings, the camera module 204 may automatically make the adjustments. In another example, if the requirements 234 are met, the instructional prompt 228 may instruct the camera module 204 to automatically capture the image data 232. Alternatively, automatically capturing the image data 232 may include saving portions of the image data stream 222 that meet the requirements 234. In some examples, the instructional prompt 228 may include a confirmation such that the patient may confirm or cancel the automatic action.
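A possible shape for this feedback step is sketched below: unmet requirements (for example, the output of the comparison sketch above) are mapped to guidance prompts, and the frame is captured automatically once nothing is missing. The prompt wording and the save_image callback are assumptions for illustration only.

```python
# Hypothetical mapping from unmet requirements to guidance prompts / auto-capture.
def guidance_prompts(unmet, frame, save_image):
    if not unmet:
        save_image(frame)                       # requirements met: capture automatically
        return ["Image captured"]
    prompts = []
    for failure in unmet:
        if failure.startswith("need"):
            prompts.append("Turn the camera to capture the requested view")
        elif failure.endswith("not visible"):
            prompts.append("Move the camera so the " + failure.replace(" not visible", "") + " is in frame")
        else:
            prompts.append("Please insert the cheek retractor")
    return prompts


# Example: two unmet requirements produce two on-screen prompts.
print(guidance_prompts(["need left_buccal view", "posterior_teeth not visible"],
                       frame=None, save_image=lambda f: None))
```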
In some examples, the instructional prompt 228 may prevent certain actions, such as preventing capturing image data 232 of the body part when at least one of the requirements 234 is not met. In some examples, the requirements 234 may include hardware requirements (e.g., camera resolution, zoom, etc.) such that if the hardware requirements are not met, the instructional prompt 228 may prevent the image data 232 from being captured. In some examples, the instructional prompt 228 may include sending a notification. The system 200 may send a notification to the server 106 or other computing device to notify the practitioner of certain results. For example, the notification may indicate whether the attachment has been detached from the tooth, plaque accumulation detected, or other abnormal condition that may be highlighted for the practitioner.
Although the method 300 is presented as a series of steps, in some examples, the steps of the method 300 may be repeated as necessary to provide continuous feedback until a desired image is captured. Accordingly, certain steps may be repeated and the requirements 234 and/or instructional cues 228 may be continuously updated until the image data 232 is sufficiently captured.
As described above, the patient may have a device capable of taking a photograph, such as a smart phone. A previously trained neural network may be provided to the smart phone that may assist the patient in taking clinically relevant photographs. Guidance may be provided to the patient to ensure that the photograph meets clinical requirements. The requirements may be customized to the patient at a particular stage of patient treatment. Thus, a patient's physician may be able to view the patient remotely to track patient progress, update treatment, or diagnose any problems.
Although the examples herein are described with respect to orthodontic care, in other embodiments, remote care may include any other medical care that may be performed via external photography.
Image-based evaluation
The image-based systems and methods described herein may allow for remote assessment and follow-up of patients during orthodontic treatment. The systems and methods allow a physician to quickly and accurately assess a patient's progress, or lack of progress, based on photographs or images taken by the patient. The photographs or images may be taken outside the doctor's office or other clinical setting, for example with a handheld device such as a smartphone or digital camera. The evaluation may include tracking the actual movement and position of the patient's teeth during orthodontic treatment as compared to the expected movement and position of the patient's teeth during the orthodontic treatment.
In some embodiments, a patient captures a two-dimensional photographic image of his teeth, which is then compared to a three-dimensional model of the expected position of the patient's teeth during a given treatment session. The comparison may include determining a positional deviation or error between the actual position of the patient's teeth and the expected position of the patient's teeth based on the three-dimensional model of the patient's teeth during the particular treatment phase. Other methods of assessing patient progress may include monitoring the fit of an orthodontic aligner over a patient's teeth. However, the fit of the orthodontic aligner or the gap between the orthodontic aligner and the patient's teeth does not necessarily reflect the deviation of the path of the patient's teeth.
Fig. 6 is a block diagram of an example system 600 for determining an error between an expected tooth position and an actual tooth position. As shown in this figure, the example system 600 may include one or more virtual dental care modules 602 for performing one or more tasks. As will be explained in more detail below, the virtual dental care module 602 may include a registration module 604, a projection module 606, an image generation module 608, an error module 610, and a treatment and assessment module 612. Although shown as separate elements, one or more of the virtual dental care modules 602 in fig. 6 may represent a single module or part of an application.
In some embodiments, one or more of the virtual dental care modules 602 in fig. 6 may represent a software application or program that, when executed by a computing device, may cause the computing device to perform one or more tasks. For example, and as will be described in greater detail below, one or more of the virtual dental care modules 602 may represent a module stored on and configured to run on one or more computing devices, such as the devices shown in fig. 1A (e.g., computing device 102 and/or server 106). One or more of the modules 602 in fig. 6 may also represent all or part of one or more special purpose computers configured to perform one or more tasks.
As shown in fig. 6, the example system 600 may also include one or more virtual dental care data stores 120, such as segmented tooth models 622, two-dimensional images 624, error images 626, error data 628, and treatment plan data 630. The virtual dental care data store 120 may include one or more data stores configured to store any type or form of data or information.
The virtual dental care data store 120 may include a segmented tooth model 622, which may include data representing a three-dimensional model of each individual tooth of the patient. A three-dimensional model may be generated based on an initial three-dimensional (or 3D) intraoral scan of a patient's teeth. During intraoral scanning, the handheld scanning device generates a three-dimensional model of the patient's upper and lower arches. After capturing the three-dimensional models of the upper and lower arches, each tooth within the three-dimensional model is separated from the model to form a single tooth model. These individual tooth models are then used during the treatment planning process to generate each of the treatment phases to move the teeth from the initial position toward the target final position, and then to generate an orthodontic aligner to be worn over the patient's teeth to move the teeth from the initial position toward the final position.
The virtual dental care data store 120 may include two-dimensional (or 2D) images 624, which may include data representing two-dimensional images of a patient's mouth and teeth. In some embodiments, the two-dimensional images 624 are captured using the systems and methods described herein, for example, by using the AI-based photo capture system discussed above. The two-dimensional images 624 may include one or more of three cheek photographs and two bite photographs of the patient's teeth. For example, the three cheek photographs may include a front image of the patient's teeth with the bite closed, a left cheek image of the patient's teeth with the bite closed, and a right cheek image of the patient's teeth with the bite closed. In some embodiments, the cheek photographs may also include an image of the teeth in a neutral bite or malocclusion position. The two-dimensional images 624 may also include bite photographs of the patient's teeth. For example, the two-dimensional images 624 may include an image of the occlusal surfaces of the teeth of the upper dental arch of the patient and an image of the occlusal surfaces of the teeth of the lower dental arch of the patient.
The virtual dental care data store 120 may include treatment plan data 630. The treatment plan data 630 may include the position and orientation of each of the patient's teeth at each stage of the treatment plan. In some embodiments, the positions and orientations of the teeth may be stored as three-dimensional positions and angular orientations of each tooth of the patient's upper and lower arches. In some embodiments, the positions of the patient's teeth may be stored as a set of three-dimensional segmented models of the patient's upper and lower arches for each treatment stage. In some embodiments, the three-dimensional position may be expressed locally. For example, the 3D position may be given tooth-to-tooth or tooth-to-space, rather than relative to the gum line. In some embodiments, the treatment plan data may include other information, such as the location of attachments on the patient's teeth, as well as other orthodontic devices, such as wires and brackets, elastic members, temporary fixtures, and other orthodontic devices.
The virtual dental care data store 120 may include error images 626. The systems and methods disclosed herein may generate an error image 626 based on differences in tooth positions between a two-dimensional image of the patient's teeth and a three-dimensional dentition model generated based on the expected tooth positions for the current stage of the patient's treatment plan. As discussed below with respect to fig. 7, an error image 626 may be generated by a process of registering a 3D model of the patient's intended tooth positions with one or more two-dimensional photographs of the patient's dentition at the current stage. After registration, the dentition is projected into the same image plane as the two-dimensional image, and the difference in the positions and orientations of the teeth between the two-dimensional image and the three-dimensional projection is used to form an error image, for example, as shown in fig. 8. In fig. 9, a plurality of error images are shown, wherein the error between the tooth positions in the three-dimensional dentition model and the tooth positions in the two-dimensional image is shown via an overlay on the two-dimensional image. In fig. 10, a plurality of error images are shown, wherein the error between the tooth positions in the three-dimensional dentition model and the tooth positions in the two-dimensional image is shown via the outline of the tooth positions of the digital model overlaid on the two-dimensional image. In fig. 11, the error image may include a 3D generated model of the current stage of the treatment plan next to the 2D image of the patient's teeth. In some embodiments, the error between the 3D model and the 2D image may be indicated by a color or other visual indicator to show how far the tooth positions are on or off track.
The virtual dental care data store 120 may include error data 628. The systems and methods disclosed herein may generate error data in a number of ways. For example, error data may be generated based on the differences in tooth position and orientation between the two-dimensional image and the three-dimensional projection. Error data 628 may include the positional difference and rotation angle of each tooth in the patient's dentition in three dimensions. Fig. 12 shows an example of a chart generated using error data 628 that includes the difference between the expected position of each of the patient's teeth at each treatment stage and the actual position of each of the patient's teeth. In some embodiments, the error may be calculated as a value (such as a distance in millimeters) that indicates the degree to which the tooth position is off track.
The virtual dental care module 602 may include a registration module 604. The registration module 604 registers the patient's three-dimensional dentition, which includes a three-dimensional segmented model of the teeth in the arrangement expected at the current stage of the treatment plan, with a two-dimensional image of the patient's teeth taken at the current stage of the treatment plan. The three-dimensional segmented model and the two-dimensional image may be registered in a number of ways. For example, edge detection techniques may be used to determine the edges and shapes of the teeth in the two-dimensional image in order to determine which teeth are visible in the two-dimensional image, where they are within the two-dimensional image, and which teeth in the two-dimensional image correspond to particular teeth in the three-dimensional model.
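One plausible way to score such edge-based registration is sketched below, assuming OpenCV is available: tooth edges are detected in the photograph with a Canny detector, a distance transform gives the distance to the nearest detected edge, and candidate projections of the 3D model are scored by how closely their silhouette edges fall on photographed edges. The thresholds and function names are illustrative assumptions, not the disclosed implementation.

```python
# Sketch of an edge-based registration cost between a photo and a projected 3D model.
import cv2
import numpy as np


def edge_distance_map(photo_gray):
    """Distance from every pixel to the nearest Canny edge in an 8-bit grayscale photo."""
    edges = cv2.Canny(photo_gray, 50, 150)
    # Edge pixels become zeros so the transform measures distance to the nearest edge.
    return cv2.distanceTransform((edges == 0).astype(np.uint8), cv2.DIST_L2, 3)


def registration_cost(projected_edge_points, distance_map):
    """Mean distance from projected 3D tooth-edge points (x, y) to the nearest photo edge."""
    xs = np.clip(projected_edge_points[:, 0].astype(int), 0, distance_map.shape[1] - 1)
    ys = np.clip(projected_edge_points[:, 1].astype(int), 0, distance_map.shape[0] - 1)
    return float(distance_map[ys, xs].mean())
```

A lower cost indicates that the projected model edges lie closer to the photographed tooth edges, which is one simple criterion an optimizer could minimize during registration.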
The virtual dental care module 602 may include a projection module 606. The projection module 606 projects the three-dimensional dentition of the current treatment phase onto a two-dimensional image of the patient. The projection may be based on knowledge of the camera's properties when capturing the image. Attributes may include camera focal length and aperture, camera focus distance, camera angle and orientation, and distance between the camera and the patient's dentition, among other attributes. Using the attribute information, the three-dimensional dentition is projected as a two-dimensional image in the same coordinate space as the tooth within the two-dimensional image.
The virtual dental care module 602 may include an image generation module 608. The image generation module 608 generates error images, such as those depicted in fig. 8, 9, 10, and 11. The projections from the projection module may be used to generate an error image in the image generation module 608. For example, in fig. 8, a two-dimensional error image is generated based on the difference between the position of the tooth in the two-dimensional image and the position of the tooth in the three-dimensional projection. The contour shown in fig. 8 represents the error between the 3D projection of the position of the tooth and the 2D image. In fig. 9, a plurality of error images are shown, wherein the error between the tooth position in the three-dimensional dentition model and the tooth position in the two-dimensional image is shown via an overlay on the two-dimensional image. In fig. 10, a plurality of error images are shown, wherein the error between the tooth position in the three-dimensional dentition model and the tooth position in the two-dimensional image is shown via the contour of the tooth position in the digital model overlaid on the two-dimensional image. In fig. 11, the error image may include a three-dimensional generated model of the tooth in the current stage of the treatment plan alongside the two-dimensional image of the patient's tooth. In fig. 12, a three-dimensional error image is shown, including a first image depicting a three-dimensional view of the expected tooth position of a patient based on a treatment plan, juxtaposed to the actual position of the patient's teeth based on the two-dimensional image.
The virtual dental care module 602 may include an error generation module 610. The error generation module 610 quantifies the error between the position of the tooth in the two-dimensional image and the three-dimensional projection. The error may be determined in a number of ways, for example, the error image in FIG. 8 may be analyzed to find the pixel differences for each tooth of the patient's dentition. The pixel differences may be, for example, differences between the positions of edges of teeth in the two-dimensional image and the positions of corresponding edges of corresponding teeth in the two-dimensional projection of the three-dimensional model. The number of pixels between the corresponding edges may be determined, and then the true distance between the corresponding edges may be determined based on the size of the pixels within the image. For example, if each pixel within the image corresponds to 100 μm and there are 10 pixels between the corresponding edges of the corresponding tooth, the error between the expected position of the tooth of the current stage and the actual position of the tooth of the current stage from the viewpoint of this particular projection is 1000 μm. Such analysis may be performed from multiple projected viewpoints, for example, in fig. 8, 9 and 10, the left cheek projection, the right cheek projection and the front projection of the viewpoint are shown. In some embodiments, the maximum error for each tooth is determined from the error for each projection, and this error can be used to generate a chart for each stage, for example, as shown in fig. 12.
The virtual dental care module 602 may include a treatment and assessment module 612. As discussed herein, the treatment and assessment module 612 may perform or facilitate an evaluation of the patient's teeth and of the progress of the patient's treatment, based on the error determined between the actual positions of the patient's teeth and the expected positions of the patient's teeth, to help determine guidance and potential interventions and treatments for the patient. The evaluation and treatment may take many forms; for example, the evaluation may determine whether, and how far, the patient's treatment is off track when compared to the expected positions of the patient's teeth at each treatment stage. Based on this information, additional interventions or treatments may be suggested to the patient or physician, as discussed herein.
Fig. 7 illustrates a method 700 of assessing tooth movement of a patient. Method 700 may begin at step 702, where orthodontic treatment of a patient's teeth is initiated. Initiating orthodontic treatment of a patient's teeth may include a number of processes. In some embodiments, the patient dentition is scanned by a three-dimensional intraoral scanner to generate a three-dimensional model of the patient dentition. The three-dimensional model of the patient's dentition may include individual segmented teeth representing each of the teeth of the patient's upper and lower arches. The desired final position of the patient's teeth may be estimated based on the initial position of the patient's teeth obtained from the intraoral scan. A series of intermediate tooth positions of the patient's teeth may be generated to incrementally move the teeth through a series of stages from an initial position toward a final position. Dental appliances may be made based on intermediate positions of a patient's teeth in order to move the teeth from an initial position towards a final position. The patient then wears each dental appliance for a period of time, e.g., 10 days to two weeks, during which the dental appliance applies a force to the patient's teeth to move the teeth from a first position at the beginning of the treatment phase toward a second position at the end of the treatment phase. To move the patient's teeth, each appliance is worn in succession.
However, treatment may not progress as expected. Sometimes, patient compliance may not be as expected. The patient may not wear the dental appliance throughout the day, e.g. they may remove it before eating, but forget to put it on after completing the meal. This lack of compliance can cause the tooth position to lag behind its intended position. Sometimes, teeth provide more or less resistance to movement than desired. This may result in the teeth moving slower or faster than desired. The differences in compliance and tooth resistance may cause the treatment to deviate from the trajectory such that the actual position of the patient's teeth during a given treatment session may deviate from the intended position to the extent that the appliance may not fit the patient's teeth or may otherwise not provide the desired movement for that session.
The physician may monitor the progress of the patient's treatment to anticipate or determine that the patient's treatment has gone off-track or may be progressing toward off-track so that they may provide intervention to return the treatment to the track or generate a new treatment plan to treat the patient's teeth.
At some point during treatment, a physician may decide to assess patient progress, for example, at each stage of patient treatment, their physician may request that the patient take one or more photographs of the patient's teeth, as guided by the artificial intelligence systems and methods discussed herein.
At step 704, the process may register the three-dimensional model of the patient's dentition at the current treatment stage with one or more two-dimensional images of the patient's teeth. The registration and other processes occurring during step 704 may be performed by one or more modules of the systems described herein, e.g., by the registration module 604. The registration module 604 may register the patient's three-dimensional dentition, including the three-dimensional segmented model, with the two-dimensional image in a number of ways. For example, edge detection techniques may be used to determine the edges and shapes of the teeth in the two-dimensional image in order to determine which teeth are visible in the two-dimensional image, where they are within the two-dimensional image, and which teeth in the two-dimensional image correspond to particular teeth in the three-dimensional model.
At step 706, the three-dimensional model of the patient's teeth is projected into the two-dimensional image plane of the one or more two-dimensional images. The projection module 606 may perform the process of step 706 by projecting the three-dimensional dentition of the current stage of treatment onto a two-dimensional image of the patient. The projection may be based on knowledge of the properties of the camera at the time the image was taken. These attributes may include camera focal length and aperture, camera focus distance, camera angle and orientation, and distance between the camera and the patient's dentition, among others. Using the attribute information, the three-dimensional dentition is projected as a two-dimensional projection of the teeth in the same coordinate space as the teeth within the two-dimensional image.
At step 708, an error image is generated. The image generation module 608 may generate an error image at step 708. Examples of error images are depicted in fig. 8, 9, 10, and 11. At step 708, the projection generated by projection module 606 at step 706 may be used to generate an error image. For example, in generating the error image depicted in fig. 8, a two-dimensional error image is generated based on the difference between the position of the tooth in the two-dimensional image and the position of the tooth in the three-dimensional projection. The contour shown in fig. 8 represents the error between the 3D projection of the position of the tooth and the 2D image. In generating the error image depicted in fig. 9, the error image depicted in fig. 8 may be used to generate an overlay on a two-dimensional image. The overlay may be a mask, wherein the color properties of the image are adjusted to highlight position errors of the teeth, e.g., the mask may adjust brightness or color values of the two-dimensional image based on the error mask.
In generating the error image depicted in fig. 10, an overlay of the contour lines of the teeth of the three-dimensional model including the projection of the current treatment phase is overlaid on the two-dimensional image of the patient's teeth. The outline may be opaque or translucent and may appear in one or more colors, for example, the outline may be a white outline, a black outline, or another color.
In some embodiments, the overlay shown in fig. 9 and the overlay shown in fig. 10 may vary based on one or more factors. For example, the color, brightness, thickness, and other properties of the overlay may vary based on the degree of error between the expected position of the tooth and the actual position of the tooth. In some embodiments, the overlay may be a two-dimensional rendering of a three-dimensional model of the patient's teeth, which may be rendered on top of the two-dimensional image. The two-dimensional rendering of the three-dimensional model may be shown as partially translucent, pseudo-colored, or may include other indications showing the differences between the expected positions of the patient's teeth and the actual positions of the patient's teeth. In some embodiments, the overlay may be toggled on and off to help the observer distinguish the different positions of the patient's teeth.
In generating the error image depicted in fig. 11, a three-dimensional model of the current stage of the treatment plan and a two-dimensional image of the patient's teeth may be generated. The three-dimensional model and the two-dimensional image may be generated side by side to allow both to be viewed simultaneously. In some embodiments, a three-dimensional model of the patient's teeth may be generated based on the positions of the teeth in the two-dimensional image of the patient. The two-dimensional image may be used as a texture to provide the three-dimensional model with appropriate colors and shading. The three-dimensional model of the patient's teeth in their actual positions may be displayed side by side with, or simultaneously with, the three-dimensional model of the patient's teeth in their intended positions.
In generating the error image depicted in fig. 12, a first image depicting a three-dimensional view of the expected tooth position of the patient based on the treatment plan is displayed in juxtaposition with the actual position of the patient's teeth based on the two-dimensional image.
In some embodiments, error images may be generated for multiple stages of the patient's treatment. For example, error images may be generated at each stage of the patient's treatment to allow a physician to assess the patient's progress over time. In some embodiments, the error images may be presented through a user interface that includes a user-adjustable navigation tool, such as a time selector or slider, whereby the physician may quickly move between error images at various stages of treatment. In some embodiments, the user interface may include navigation and zoom tools that allow a physician or other user to zoom in and out on the patient error images to more closely view the error, and may allow the physician to pan and rotate the error images in order to further facilitate evaluation of the patient's dentition. In some embodiments, the various views of the error images may be synchronized with each other such that zooming, panning, or rotating one model or image causes a corresponding zooming, panning, or rotating of another model or image.
At step 710, a position error for each tooth may be determined. The process of step 710 may be performed by the error generation module 610. At step 710, an error between the two-dimensional image and the position of the tooth in the three-dimensional projection is quantified. The error can be quantified in many ways, for example, the error image in fig. 8 can be analyzed to find the pixel differences for each tooth of the patient's dentition. The pixel difference may be, for example, a difference between a position of an edge of a tooth in the two-dimensional image and a position of a corresponding edge of a corresponding tooth in the two-dimensional projection. The number of pixels between the corresponding edges may be determined, and then the true distance between the corresponding edges may be determined based on the size of the pixels within the image. For example, if each pixel within the image corresponds to 100 μm and there are 10 pixels between the corresponding edges of the corresponding tooth, the error between the expected position of the tooth at the current stage and the actual position of the tooth at the current stage from the viewpoint of this particular projection is 1000 μm. Such analysis may be performed from multiple projected viewpoints, for example, in fig. 8, 9 and 10, the left cheek projection, the right cheek projection and the front projection of the viewpoint are shown. In some embodiments, the maximum error for each tooth is determined from the error for each projection. This error can be used to generate a chart for each stage, for example, as shown in fig. 12. In some embodiments, a chart may be generated in a subsequent error image generation step 708, such as shown in FIG. 12.
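A small sketch of this quantification, assuming a hypothetical physical pixel size, is shown below; it mirrors the 100-micron-per-pixel example in the text and keeps the largest error observed across projection viewpoints.

```python
# Illustrative pixel-to-distance conversion and per-tooth maximum error across views.
import numpy as np


def edge_error_microns(photo_edge_px, projected_edge_px, microns_per_pixel=100.0):
    """Distance between a tooth edge in the photo and the corresponding projected edge."""
    pixel_gap = np.linalg.norm(np.asarray(photo_edge_px) - np.asarray(projected_edge_px))
    return pixel_gap * microns_per_pixel


def max_error_across_views(per_view_errors):
    """Keep the largest error seen from any projection viewpoint for each tooth."""
    return {tooth: max(errors) for tooth, errors in per_view_errors.items()}


# Example from the text: edges 10 pixels apart at 100 microns/pixel -> 1000 microns.
assert edge_error_microns((120, 80), (120, 90)) == 1000.0
```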
At step 712, interventions for patient treatment and/or corrections to treatment plans may be generated as discussed herein (e.g., with respect to fig. 13-18).
Fig. 8 shows differential error images of the teeth during a patient treatment stage. The top image is the right cheek-view error image 870, the middle image is the left cheek-view error image 880, and the bottom image is the front-view error image 890. Error images 870, 880, 890 are two-dimensional error images generated based on differences between the positions of the teeth in the two-dimensional image and the positions of the teeth in the three-dimensional projection. The error image 870 represents the error between the three-dimensional projection of the tooth positions, as viewed from the right cheek position, and the two-dimensional image. The error image shows the difference between the actual positions of the patient's teeth and the expected positions of the patient's teeth. In the right cheek view, right molar 841, right bicuspid 843, right cuspid 845, incisor 847, and left cuspid 842 are visible. Referring to left cuspid 842, a first edge 844 of error image 840 corresponds to an edge of left cuspid 842 in the two-dimensional image. The second edge 846 corresponds to the same edge of left cuspid 842 in the three-dimensional model of the patient's teeth, projected into the same plane as the two-dimensional image, from the perspective of the camera that captured the two-dimensional image of the patient's dentition from the right cheek side of the patient. The difference between the positions of the first edge 844 and the second edge 846 quantifies the displacement of the tooth 842 in the plane of the two-dimensional image 840.
Error image 850 represents the error between the three-dimensional projection of the tooth positions, as viewed from the left cheek position, and the two-dimensional image. The error image shows the difference between the actual positions of the patient's teeth and the expected positions of the patient's teeth. In the left cheek view, left molar 851, left bicuspid 853, left cuspid 842, incisor 847, and right cuspid 845 are visible. Referring to left cuspid 842, a first edge 854 of error image 850 corresponds to an edge of left cuspid 842 in the two-dimensional image taken from the left cheek view. The second edge 856 corresponds to the same edge of left cuspid 842 in the three-dimensional model of the patient's teeth, projected into the same plane as the two-dimensional image, from the perspective of the camera that captured the two-dimensional image of the patient's dentition from the left cheek side of the patient. The difference between the positions of the first edge 854 and the second edge 856 quantifies the displacement of the tooth 842 in the plane of the two-dimensional image 850.
The error image 860 represents the error between the three-dimensional projection of the tooth positions, as viewed from the front position, and the two-dimensional image. The error image shows the difference between the actual positions of the patient's teeth and the expected positions of the patient's teeth. Left cuspid 842, incisor 847, and right cuspid 845 are visible in the front view. Referring to left cuspid 842, a first edge 864 of error image 860 corresponds to an edge of left cuspid 842 in the two-dimensional image taken from the front perspective. The second edge 866 corresponds to the same edge of left cuspid 842 in the three-dimensional model of the patient's teeth, projected into the same plane as the two-dimensional image, from the perspective of the camera that captured the two-dimensional image of the patient's dentition from the front of the patient. The difference between the positions of the first edge 864 and the second edge 866 quantifies the displacement of the tooth 842 in the plane of the two-dimensional image 860.
In some embodiments, the differences between the positions of the edges of the teeth in the three different image planes of the error images 840, 850, 860 may be used to directly determine the displacement of the teeth relative to their expected positions. In some embodiments, the positions of the teeth in the error images 840, 850, 860 may be used to determine the positions of the teeth in three-dimensional space based on the known angles and orientations of the camera and the image planes in which the two-dimensional images were taken. As discussed below, the error images may be used to highlight or otherwise indicate the differences between the expected positions of the patient's teeth and the actual positions of the patient's teeth.
For example, in fig. 9, the error image is used to create a mask over a two-dimensional image of the patient's teeth. In generating the error images depicted in fig. 9, the error images depicted in fig. 8 are used to create masks that generate overlays over the two-dimensional images. The color properties of the masked area of the two-dimensional image may be changed to highlight where the tooth positions in the two-dimensional image differ from the three-dimensional projection. In some embodiments, the color properties of the masked portion of the image are adjusted to highlight positional deviations of the teeth; for example, the mask may adjust the brightness, luminance, or color values of the two-dimensional image based on the error mask.
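A minimal sketch of such a mask overlay, assuming OpenCV for image handling, is shown below; the tint color and blend factor are arbitrary illustrative choices rather than values from the disclosure.

```python
# Illustrative overlay: tint the pixels where the photographed tooth and the
# projected expected tooth disagree, so the deviation stands out.
import cv2
import numpy as np


def highlight_error(photo_bgr, error_mask, color=(0, 0, 255), alpha=0.45):
    """Blend a colored, semi-transparent mask over the 2D photo (uint8 BGR image)."""
    overlay = photo_bgr.copy()
    overlay[error_mask > 0] = color          # paint the error region with the chosen tint
    return cv2.addWeighted(overlay, alpha, photo_bgr, 1.0 - alpha, 0.0)
```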
Error images 970, 980, 990 are two-dimensional error images generated based on differences between the positions of the teeth in the two-dimensional image and the positions of the teeth in the three-dimensional projection. In error image 970, mask 978 represents the error between the three-dimensional projection of the tooth positions, as viewed from the right cheek position, and the two-dimensional image. In the right cheek view, right molar 841, right bicuspid 843, right cuspid 845, incisor 847, and left cuspid 842 are visible. Referring to left cuspid 842, a first edge 974 of mask 978 corresponds to the edge of left cuspid 842 in the two-dimensional image. The second edge 976 corresponds to the same edge of left cuspid 842 in the three-dimensional model of the patient's teeth, projected into the same plane as the two-dimensional image, from the perspective of the camera that captured the two-dimensional image of the patient's dentition from the right cheek side of the patient. Thus, the overlay created by mask 978 highlights the difference between the positions of the first edge 974 and the second edge 976 of tooth 842 in the plane of the two-dimensional image 970.
In error image 980, mask 982 represents the error between the three-dimensional projection of the tooth positions, as viewed from the left cheek position, and the two-dimensional image. In the left cheek view, left molar 851, left bicuspid 853, left cuspid 842, incisor 847, and right cuspid 845 are visible. Referring to left cuspid 842, a first edge 984 of mask 982 corresponds to an edge of left cuspid 842 in the two-dimensional image taken from the left cheek view. The second edge 986 corresponds to the same edge of left cuspid 842 in the three-dimensional model of the patient's teeth, projected into the same plane as the two-dimensional image, from the perspective of the camera that captured the two-dimensional image of the patient's dentition from the left cheek side of the patient. Thus, the overlay created by mask 982 highlights the difference between the positions of the first edge 984 and the second edge 986 of tooth 842 in the plane of the two-dimensional image 980.
In error image 990, mask 992 represents the error between the three-dimensional projection of the tooth positions, as viewed from the front position, and the two-dimensional image. Left cuspid 842, incisor 847, and right cuspid 845 are visible in the front view. Referring to left cuspid 842, a first edge 994 of mask 992 corresponds to the edge of left cuspid 842 in the two-dimensional image taken from the front view. The second edge 996 corresponds to the same edge of left cuspid 842 in the three-dimensional model of the patient's teeth, projected into the same plane as the two-dimensional image, from the perspective of the camera that captured the two-dimensional image of the patient's dentition from the front of the patient. Thus, the overlay created by mask 992 highlights the difference between the positions of the first edge 994 and the second edge 996 of tooth 842 in the plane of the two-dimensional image 990.
In some embodiments, the differences between the positions of the edges of the teeth in the three different image planes of the error images 840, 850, 860 may be used to directly determine the displacement of the teeth relative to their expected positions. In some embodiments, the positions of the teeth in the error images 840, 850, 860 may be used to determine the positions of the teeth in three-dimensional space based on the known angles and orientations of the camera and the image planes in which the two-dimensional images were taken. As discussed below, the error images may be used to highlight or otherwise indicate the differences between the expected positions of the patient's teeth and the actual positions of the patient's teeth.
Fig. 10 depicts contour error images 1000, 1010, 1020 of the teeth during a patient treatment stage. Error image 1000 depicts a two-dimensional image of the patient's dentition from the right cheek view. The patient's teeth 1004 in their current positions are depicted in the two-dimensional image, while contour 1002 depicts the expected positions of the patient's teeth according to the current stage of the treatment plan. The contour 1002 is generated based on the projection, onto the two-dimensional image plane, of the three-dimensional model of the patient's teeth at the expected positions according to the current stage of the treatment plan. Each visible tooth of the dentition is represented by a contour 1002. The contours represent the edges of the teeth in the projected three-dimensional model. Each tooth contour may represent the silhouette of a patient's tooth from the perspective of the two-dimensional image. In some embodiments, the tooth contour may be defined by the occlusal or incisal edges, the interproximal edges of the teeth, and the gingival edges.
Error image 1010 depicts a two-dimensional image of the patient's dentition from the right cheek view. Patient teeth 1014 in their current position are depicted in a two-dimensional image, while contours 1012 depict the expected positions of the patient's teeth according to the current stage of the treatment plan. Contour 1012 is generated based on projections on a two-dimensional image plane of a three-dimensional model of the patient's teeth at an expected location according to the current stage of the treatment plan. Each visible tooth of the dentition is represented by a contour 1012.
Error image 1020 depicts a two-dimensional image of the patient's dentition from the right cheek view. The patient's teeth 1024 in their current positions are depicted in the two-dimensional image, while contours 1022 depict the expected positions of the patient's teeth according to the current stage of the treatment plan. The contour 1022 is generated based on the projection, onto the two-dimensional image plane, of the three-dimensional model of the patient's teeth at the positions expected at the current stage of the treatment plan. Each visible tooth of the dentition is represented by a contour 1022.
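For illustration, the contour overlay described above could be produced roughly as sketched below. This is a simplified example that approximates each tooth silhouette with a convex hull and assumes a basic pinhole camera model; the disclosure does not specify how the projection is implemented, and the function name and parameters are assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull

def project_tooth_contour(vertices, camera_rotation, camera_translation, focal, center):
    """Project a segmented tooth mesh into the image plane and return its outline.

    vertices: (N, 3) tooth mesh vertices at the treatment-plan stage position.
    camera_rotation, camera_translation: pose aligning the model with the photo.
    focal, center: simple pinhole intrinsics (fx = fy = focal, principal point center).
    """
    cam = vertices @ camera_rotation.T + camera_translation   # model -> camera frame
    uv = focal * cam[:, :2] / cam[:, 2:3] + center             # pinhole projection
    # Convex hull as a simplified silhouette; a production system would trace
    # the true occluding contour of the rendered mesh instead.
    hull = ConvexHull(uv)
    return uv[hull.vertices]   # ordered 2D outline points for the overlay
```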
Fig. 11 shows a juxtaposition of a three-dimensional rendered tooth image 1130 of the patient's teeth in their intended positions based on the treatment plan and a two-dimensional image 1140 of the actual positions of the patient's teeth. In generating the error image depicted in fig. 11, a three-dimensional model 1130 of the current stage of the treatment plan and a two-dimensional image 1140 of the patient's teeth may be generated. The three-dimensional model 1130 and the two-dimensional image 1140 may be displayed side-by-side so that both can be viewed simultaneously. In some embodiments, the image 1140 may represent a three-dimensional model of the patient's teeth in their actual current positions, generated based on the positions of the patient's teeth in the two-dimensional image. The two-dimensional image may be used as a texture to provide this three-dimensional model with the appropriate colors and shading. The three-dimensional model of the patient's teeth in their actual positions may be displayed alongside, or simultaneously with, the three-dimensional model of the patient's teeth in their intended positions.
Fig. 12 shows charts 1250, 1260 of the differential error, sometimes referred to as the degree to which a tooth is on or off track, for each of the patient's teeth at each stage of treatment. Each column 1252 in charts 1250, 1260 represents a tooth of the patient's dental arch by its corresponding tooth number. Each row 1254 in charts 1250, 1260 represents a stage of the treatment plan. The shading of each tooth at each stage depicts the magnitude of the difference between the expected position of that tooth at that stage and its actual position at that stage, e.g., as determined above. Legend 1256 shows that the darker the shade, the more the tooth is off track. Chart 1250 illustrates tracking of the patient's maxillary teeth through stage 12 of the treatment plan. As shown by blocks 1257, 1258, and 1259, teeth 2, 4, and 7 deviate more from their intended positions than the other teeth. However, as shown in chart 1260, by stage 20 tooth 4 has merely remained at its previous off-track level, while blocks 1268 and 1269 show that teeth 2 and 7 have continued to drift further off track over the course of treatment. The physician can use such charts to determine whether and how to provide guidance or therapeutic intervention to the patient.
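A minimal sketch of how the stage-by-tooth tracking chart could be assembled from per-stage deviation measurements is shown below; the threshold values and the three-level coding are illustrative assumptions, not values taken from the disclosure.

```python
import numpy as np

# Hypothetical thresholds (mm) separating on-track, borderline, and off-track.
ON_TRACK_MM, OFF_TRACK_MM = 0.25, 0.5

def tracking_matrix(deviations_by_stage):
    """Build the stage-by-tooth grid behind charts like 1250 and 1260.

    deviations_by_stage: dict mapping stage index -> dict of tooth number ->
        measured deviation (mm) between expected and actual position.
    Returns (stages, teeth, grid) where grid entries are 0 = on track,
    1 = borderline, 2 = off track.
    """
    stages = sorted(deviations_by_stage)
    teeth = sorted({t for s in stages for t in deviations_by_stage[s]})
    grid = np.zeros((len(stages), len(teeth)), dtype=int)
    for i, stage in enumerate(stages):
        for j, tooth in enumerate(teeth):
            dev = deviations_by_stage[stage].get(tooth, 0.0)
            grid[i, j] = 0 if dev < ON_TRACK_MM else (1 if dev < OFF_TRACK_MM else 2)
    return stages, teeth, grid
```

The resulting grid can be rendered with darker shading for larger values, reproducing the kind of chart described above.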
Instruction generation
Monitoring and assessing a patient's treatment progress to determine appropriate treatment guidance for the patient, and then providing that guidance to the patient, can be a difficult, expensive, and time consuming task. As discussed above, stage-by-stage or other periodic tracking of tooth deviation allows the physician's task of determining what type of guidance to give the patient, and of providing that guidance, to be at least partially simplified. For example, the patient may take photographs of their dentition using the artificial intelligence guidance discussed above, and the deviation of each of the patient's teeth from its expected position may then be determined as also discussed above. In addition, other image-based analysis of the captured images may be performed to help assess the patient's treatment progress. Based on this information and on guidance information provided by the doctor, the patient's teeth may be evaluated and appropriate guidance for continuing or modifying the patient's treatment may be provided to the doctor or the patient.
Fig. 13 illustrates a block diagram of an example system 1300 for providing guidance. As shown in this figure, the example system 1300 may include one or more virtual dental care modules 108 for performing one or more tasks. As will be explained in more detail below, the virtual dental care module 108 may include a guideline generation module 1304, a guideline conflict resolution module 1306, and a guideline and intervention transmission module 1308. Although shown as separate elements, one or more of the virtual dental care modules 108 in fig. 13 may represent a single module or part of an application.
In some embodiments, one or more of the virtual dental care modules 108 in fig. 13 may represent one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks. For example, and as will be described in greater detail below, one or more of the virtual dental care modules 108 may represent a module stored on and configured to run on one or more computing devices, such as the devices shown in fig. 1A (e.g., computing device 102 and/or server 106). One or more of the virtual dental care modules 108 in fig. 13 may also represent all or part of one or more special purpose computers configured to perform one or more tasks.
As shown in fig. 13, the example system 1300 may also include virtual dental care data store(s) 120, such as measurement data 1322, treatment plan data 1324, instructional information 1326, and historical data 1323. The virtual dental care data store 120 may include any type or form of data or information.
The virtual dental care data store 120 may include measurement data 1322. The measurement data may include data such as the error data discussed above with respect to fig. 6. For example, the measurement data may include error data generated based on differences between a two-dimensional image of the patient's teeth at a given treatment stage and a projection of the three-dimensional model of the patient's teeth for that treatment stage. The measurement data 1322 may include the differential position and rotation angle of each tooth of the patient's dentition in three-dimensional space. In some embodiments, the measurement data may also include the presence or absence of attachments and other orthodontic devices on the patient's dentition. In some embodiments, the measurement data 1322 may include information regarding the fit of the aligner over the patient's teeth. For example, the measurement data 1322 may identify a tooth-receiving cavity of a dental aligner that does not fit properly over the patient's tooth. In some embodiments, the measurement data 1322 may include the magnitude of the improper fit, such as the distance between all or a portion of the occlusal or incisal edge surface of a tooth-receiving cavity and the corresponding occlusal or incisal edge surface of the respective tooth.
In some embodiments, the measurement data 1322 may include the above-described data for each stage of the treatment plan, and may also include rates of change and other information determined based on the differences between the patient's teeth and the orthodontic appliances used for treatment over multiple stages of the treatment plan. In some embodiments, the measurement data 1322 includes the deviation of each tooth from the current treatment plan in the front view, the left cheek view, and the right cheek view, as discussed above. In some embodiments, the measurement data 1322 may include the measured distance and angle by which each tooth's actual position and orientation deviate from its expected position and orientation. In some embodiments, the measurement data 1322 may include the distance and direction of a tooth's deviation. For example, the error information may include data indicating that a tooth is off track in intrusion and is 0.25mm from its expected location in the treatment plan.
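For illustration, the measurement data described above might be organized per tooth and per stage roughly as follows; the field names and types are assumptions for the sketch, not the actual data schema of the system.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class ToothMeasurement:
    """Per-tooth entry of the measurement data described above (illustrative)."""
    tooth_number: int
    deviation_mm: float                      # distance between expected and actual position
    deviation_direction: str                 # e.g. "intrusion", "extrusion", "rotation"
    rotation_deg: float = 0.0                # angular deviation about the tooth axes
    attachment_present: Optional[bool] = None
    aligner_gap_mm: Optional[float] = None   # gap indicating an ill-fitting cavity

@dataclass
class StageMeasurements:
    """Measurement data for one stage of the treatment plan (illustrative)."""
    stage: int
    per_tooth: Dict[int, ToothMeasurement] = field(default_factory=dict)
```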
The virtual dental care data store 120 may include treatment plan data 1324. Treatment plan data 1324 may include the position and orientation of each of the patient's teeth at each stage of treatment plan 1514. In some embodiments, the position and orientation of the teeth may be stored as the three-dimensional position and angular orientation of each tooth and of the patient's upper and lower arches. In some embodiments, the positions and orientations of the patient's teeth may be stored as a set of three-dimensional segmented models of the patient's upper and lower arches for each treatment stage. In some embodiments, the treatment plan data may include other information, such as the locations of attachments on the patient's teeth, as well as other orthodontic devices, such as wires and brackets, elastic members, temporary fixtures, and other orthodontic devices.
The virtual dental care data store 120 may include instructional information 1326. The instructional information 1326 may include physician guidance template data 1512. The physician guidance template data 1512 may include the thresholds that the physician uses to track treatment plans and to determine, based on those thresholds, potential changes to the treatment and the guidance provided to the patient. For example, the thresholds may specify that if a central incisor deviates from the treatment plan by more than 0.75mm, guidance should be sent to the patient to schedule a new appointment; that if a central incisor deviates from the treatment plan by between 0.5mm and 0.75mm, further deviation should be watched for and a new appointment generated if the deviation increases over a period of 2 months; that if a central incisor deviation is between 0.25mm and 0.5mm, the patient should be instructed to wear the current set of aligners for another week; and that a central incisor deviation of less than 0.25mm may be considered "on track". Other thresholds may indicate that a tooth marked "do not move" in the treatment plan should not deviate from its treatment position, and that any deviation greater than 0.25mm should result in an appointment.
The instructional information 1326 can also include individual guidance 1516 based on one or more of a particular treatment plan and a particular patient. For example, a treatment plan involving teeth that fit particularly poorly may have case-specific thresholds that are higher or lower than the thresholds in the physician guidance template. As another example, a patient may be missing one or more teeth; the individual guidance data may therefore omit any thresholds associated with the missing teeth, or may include patient-specific thresholds associated with closing the gap left by the missing teeth.
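The threshold logic described in the two preceding paragraphs could be sketched as follows, with case-specific overrides taking precedence over the physician guidance template; the action names and default threshold values are illustrative assumptions only.

```python
def select_guidance(deviation_mm, template_thresholds, case_overrides=None):
    """Map a central-incisor deviation to a guidance action using the thresholds
    described above. Threshold keys, values, and action names are illustrative.

    template_thresholds: dict like {"appointment": 0.75, "watch": 0.5, "extend_wear": 0.25}
    case_overrides: optional per-patient dict whose values replace template values.
    """
    thresholds = dict(template_thresholds)
    if case_overrides:
        thresholds.update(case_overrides)          # individual guidance wins
    if deviation_mm > thresholds["appointment"]:
        return "schedule_appointment"
    if deviation_mm > thresholds["watch"]:
        return "monitor_for_further_deviation"
    if deviation_mm > thresholds["extend_wear"]:
        return "wear_current_aligner_one_more_week"
    return "on_track"
```

A historical-data variant of the same idea would compare deviations across stages or weeks, for example returning "schedule_appointment" only when the deviation has grown over a set period, as described for the historical data 1323 below.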
The virtual dental care data store 120 may include historical data 1323. The historical data 1323 may include information relating to guidance previously provided to the patient and historical measurement information 1524. The use of historical data 1323 allows guidance thresholds to be written with a temporal frame of reference. For example, a guidance threshold may specify that if a condition has worsened within a particular number of weeks, then a particular type of guidance should be provided.
The virtual dental care modules 108 may include a guidance generation module 1304. The guidance generation module receives the measurement data 1322, the treatment plan data 1324, the instructional information 1326, and the historical data 1323 and uses this information to apply the guidance to the patient's current dentition state, such as the positions of the patient's teeth relative to their expected positions in the treatment plan. For example, the guidance may include a threshold, such as sending guidance to the patient to schedule an appointment if an incisor position determined from the measurement data 1322 is more than 0.5mm from its expected position, or, incorporating historical data, sending guidance to the patient to schedule an appointment if the deviation of the incisor position increases by more than 0.1mm over two consecutive stages. In some embodiments, for example where the treatment plan includes the use of attachments, the guidance generation module may generate guidance for the patient or physician based on a missing or detached attachment.
The guidance may also include guidance related to the timing of switching aligners, for example, instructions to wear the current aligner for an additional period of time before changing to the aligner for the next treatment stage, or to change to the next stage at an earlier time. The guidance may also include instructions to wear a retainer or to switch from the aligner to a retainer. The guidance may also include instructions regarding the proper seating of the aligner, such as how the aligner should look or feel when properly seated, including, for example, the distance between the aligner and the gums. Other interventions or guidance may include instructions on how and when to use a chewie, when to schedule orthodontic follow-up appointments, and other instructions. The guidance may also include instructions to the physician, for example to contact the patient for a follow-up appointment, or guidance on the next steps of treatment and suggested interventions for the physician to consider. In other examples, the patient may not be contacted directly.
In some embodiments, conflicting or duplicate guidance may be generated for the patient based on differences between one or more of the guidance templates and the individual guidance. For example, a patient may have multiple problems, where more than one problem may result in guidance to schedule an appointment, while other guidance may suggest that the patient see the doctor immediately. In this case, the guidance conflict resolution module 1306 may determine that the patient should receive only the guidance to see the doctor immediately, rather than both the guidance to see the doctor immediately and the guidance to schedule an appointment. In some embodiments, one threshold may indicate that the patient should be instructed to use a chewie on a first premolar, and another threshold may indicate that the patient should be instructed to use a chewie on a second premolar on the same side. Here, only one chewie is needed, and the conflicting guidance may therefore be consolidated into a single instruction to use one chewie on both the first and second premolars. In this way, the guidance conflict resolution module 1306 may prevent the system from providing conflicting or confusing guidance to the patient.
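A simplified sketch of the conflict-resolution behavior described above follows; the priority table and action labels are assumptions made for illustration rather than the actual rules of the module.

```python
# Hypothetical priority ordering: a higher value supersedes lower-priority guidance.
PRIORITY = {
    "see_doctor_immediately": 3,
    "schedule_appointment": 2,
    "use_chewie": 1,
    "wear_current_aligner_one_more_week": 1,
}

def resolve_conflicts(guidance_items):
    """Collapse redundant or conflicting guidance before it is sent to the patient.

    guidance_items: list of (action, target) tuples, e.g. ("use_chewie", "first premolar").
    Appointment-type actions keep only the highest-priority one; repeated actions of
    the same type (such as two chewie instructions on the same side) are merged.
    """
    appointment_actions = [g for g in guidance_items
                           if g[0] in ("see_doctor_immediately", "schedule_appointment")]
    other_actions = [g for g in guidance_items if g not in appointment_actions]
    resolved = []
    if appointment_actions:
        resolved.append(max(appointment_actions, key=lambda g: PRIORITY[g[0]]))
    merged = {}
    for action, target in other_actions:
        merged.setdefault(action, []).append(target)   # one instruction per action type
    resolved.extend((action, " and ".join(targets)) for action, targets in merged.items())
    return resolved
```

For example, two chewie instructions on adjacent premolars would be merged into a single instruction covering both teeth, matching the behavior described above.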
The virtual dental care modules 108 may include a guidance and intervention transmission module 1308. The guidance and intervention transmission module 1308 may send guidance or intervention information to one or more of the patient and the physician. The guidance or intervention information can be transmitted in a number of ways. For example, the guidance may be sent via text message, email, smartphone or browser-based application notification, automated phone call, calendar invitation, or other forms of messaging and communication. In some embodiments, the guidance may include both text and audiovisual information, such as videos or images that illustrate the correct use of a chewie.
FIG. 14 is a flow chart of an exemplary computer-implemented method 1400 for determining and providing guidance. Fig. 15 is a process and information flow diagram 1350 for an exemplary computer-implemented method for determining and providing guidance. The steps and information flows illustrated in fig. 14 and 15 may be performed by any suitable computer-executable code and/or computing system, including the system(s) illustrated in fig. 1 and 13. In one example, each of the steps illustrated in fig. 14 and 15 may represent an algorithm whose structure includes and/or is represented by multiple sub-steps, examples of which are provided in more detail below.
Referring to fig. 14 and 15, at step 1405, patient assessment, treatment planning, and initiation of treatment may occur, for example, as discussed above with reference to step 2002 of fig. 20. At step 1410, the guidance generation module 1304 receives measurement data 1322. The measurement data 1322 may include a two-dimensional image received from the patient over a communication network. In some embodiments, the measurement data 1322 may include error data 628 received from, for example, the error module 610.
At step 1415, the guidance generation module 1304 receives treatment plan data 1514. In some embodiments, the guideline generation module 1304 receives treatment plan data 1514 from a treatment planning system or module.
At step 1420, the guidance generation module 1304 receives guidance information, which may include both physician guidance template information 1512 and case-specific guidance information 1516.
At step 1425, the guidance generation module 1304 uses the information received at step 1420 and applies the received guidance to the patient's current dentition state based on the measurement data 1322 and the treatment plan data 1514. As discussed above, the guidance may include guidance regarding the timing of switching aligners, e.g., based on a threshold as discussed above, wearing the current aligner for an additional period of time before changing to the aligner for the next treatment stage, or changing to the aligner for the next stage at an earlier time. The guidance may also include instructions to switch from wearing the aligner to wearing a retainer. The guidance may also include instructions related to properly seating the aligner, such as how the aligner should look or feel when properly seated, including, for example, the distance between the aligner and the gums. Other interventions or guidance may include instructions on how and when to use a chewie, when to schedule orthodontic follow-up appointments, and other instructions. In some embodiments, for example where the treatment plan includes the use of attachments, the guidance generation module may generate guidance for the patient or physician based on a missing or detached attachment.
At step 1430, the guidance conflict resolution module 1306 eliminates conflicts in the guidance provided by the guidance generation module 1304. For example, the conflict resolution module 1306 may determine that the patient should receive only the guidance to see the doctor immediately, rather than both the guidance to see the doctor immediately and the guidance to schedule an appointment. In some embodiments, one threshold may indicate that the patient should be instructed to use a chewie on a first premolar, and another threshold may indicate that the patient should be instructed to use a chewie on a second premolar on the same side. Here, only one chewie is needed, and the conflict resolution module 1306 may therefore indicate that the patient should use a single chewie on both the first and second premolars. In this way, the system may be prevented from providing conflicting or confusing guidance to the patient.
At step 1435, the guidance and intervention transmission module 1308 may send guidance or intervention information to one or more of the patient and the physician. At step 1435, the guidance or intervention information can be transmitted in a number of ways. For example, the guidance may be sent via text message, email, smartphone or browser-based application notification, automated phone call, calendar invitation, or other forms of messaging and communication. In some embodiments, the guidance may include both text and audiovisual information, such as videos or images that illustrate the correct use of a chewie. In other embodiments, the patient may not be contacted directly. For example, the physician may maintain a list of guidance information to review with the patient at the patient's next scheduled appointment.
In some embodiments, the guidance generation module 1304 may indicate that a therapeutic intervention may be desired, for example when the positions of the teeth have deviated to a point where a new treatment planning process should begin, generating new stages of treatment to move the teeth from their current positions toward desired final positions.
Photo-based treatment refinement
Medical practice is rapidly evolving towards telemedicine, the remote treatment of patients. By using the systems and methods described above, a physician can remotely assess a patient's treatment progress. In the rare cases when the patient's progress becomes so far off track that a revised treatment plan is needed, the images captured using the artificial intelligence guidance discussed above and the segmented dental scan generated when treatment was initiated can be used to prescribe a second course of treatment for the off-track patient, using only their primary treatment plan data and a set of orthodontic photographs taken by the patient with a phone camera, without rescanning the patient or calling the patient back into the office.
Fig. 16 illustrates a block diagram of an example system for off-track treatment planning. As shown in this figure, the example system 1600 may include one or more virtual dental care modules 108 for performing one or more tasks. As will be explained in more detail below, the virtual dental care module 108 may include a three-dimensional parameterization module 1604 and a treatment planning module 1606. Although shown as separate elements, one or more of the virtual dental care modules 108 in fig. 16 may represent a single module or part of an application.
In some embodiments, one or more of the virtual dental care modules 108 in fig. 16 may represent one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks. For example, and as will be described in greater detail below, one or more of the virtual dental care modules 108 may represent a module stored on and configured to run on one or more computing devices, such as the devices shown in fig. 1A (e.g., computing device 102 and/or server 106). One or more of the virtual dental care modules 108 in fig. 16 may also represent all or part of one or more special purpose computers configured to perform one or more tasks.
As shown in fig. 16, the example system 1600 may also include one or more virtual dental care data stores 120, such as treatment plan data 1622, a segmented grid model 1624, and image data 1626. The virtual dental care data store 120 may include one or more data stores configured to store any type or form of data or information.
The virtual dental care data store 120 may include treatment plan data 1622. Treatment plan data 1622 may include the position and orientation of each of the patient's teeth for each stage of the treatment plan. In some embodiments, the position and orientation of the teeth may be stored as the three-dimensional position and angular orientation of each tooth and of the patient's upper and lower arches. In some embodiments, the positions and orientations of the patient's teeth may be stored as a set of three-dimensional segmented models of the patient's upper and lower arches for each treatment stage. In some embodiments, the treatment plan data may include other information, such as the locations of attachments on the patient's teeth, as well as other orthodontic devices, such as wires and brackets, elastic members, temporary fixtures, and other orthodontic devices.
The virtual dental care data store 120 may include a segmented mesh model 1624. In some embodiments, the segmented mesh model of the patient's teeth may be stored separately from the treatment plan data. The segmented mesh model may include a three-dimensional mesh model of each patient's teeth.
The virtual dental care data store 120 may include image data 1626. Image data 1626 may include two-dimensional image data, such as two-dimensional image data captured using artificial intelligence guidelines, as discussed above.
The three-dimensional parameterization module 1604 receives the treatment plan data 1622 and the image data 1626 and uses this data to generate a three-dimensional model of the patient's dentition in its current configuration by determining the appropriate positions of the patient's teeth and placing the segmented tooth models from the treatment plan data in those positions. The three-dimensional parameterization module 1604 may use information such as the error data discussed above in order to determine the three-dimensional positions of the patient's teeth. In some embodiments, in addition to the three cheek images discussed above, upper and lower arch bite photographs may be used to determine the three-dimensional positions of the patient's teeth. Various methods may be used to align the three-dimensional model of the patient's teeth with the two-dimensional images of the patient's teeth. For example, in some embodiments, a differentiable rendering algorithm may be used to align the teeth, or an expectation maximization algorithm may be used to match the position and orientation of the three-dimensional model of each tooth with the corresponding position and orientation of that tooth in the two-dimensional images. The three-dimensional parameterization module 1604 may output a new segmented dental mesh model of the patient's teeth in their current positions.
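One way the per-tooth alignment between the segmented mesh and a photograph could be set up is sketched below as a rigid-pose least-squares fit. This is only an illustration of the general idea; it assumes 2D-3D correspondences and camera parameters are already available, and it is not the differentiable-rendering or expectation-maximization procedure referenced above.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def fit_tooth_pose(mesh_points, image_points, project):
    """Estimate a rigid update (rotation + translation) for one segmented tooth so
    that its projected outline matches the outline detected in the photograph.

    mesh_points: (N, 3) sampled silhouette points of the tooth mesh at the planned position.
    image_points: (N, 2) corresponding 2D contour points detected in the photo
        (correspondences are assumed to come from a separate matching step).
    project: callable mapping (M, 3) world points to (M, 2) pixel coordinates
        using already-estimated camera parameters.
    """
    def residuals(params):
        rot = Rotation.from_rotvec(params[:3])        # axis-angle rotation update
        moved = rot.apply(mesh_points) + params[3:]    # rigid transform of the tooth
        return (project(moved) - image_points).ravel()

    result = least_squares(residuals, x0=np.zeros(6))
    return Rotation.from_rotvec(result.x[:3]), result.x[3:]
```

Repeating such a fit per tooth, across the cheek and bite photographs, yields the updated positions from which the new segmented dental mesh model can be assembled.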
The treatment planning module 1606 may use the new segmented dental mesh model output by the three-dimensional parameterization module 1604 along with treatment planning information in order to generate a modified treatment plan that moves the patient's teeth from the new current position to the desired final position. In some embodiments, the modified treatment plan may move the teeth to a different new desired final position.
Fig. 17 is a flow diagram of an exemplary computer-implemented method 1700 for photo-based treatment refinement. The steps illustrated in fig. 17 may be performed by any suitable computer-executable code and/or computing system, including the systems illustrated in fig. 1 and 16. In one example, each step shown in fig. 17 may represent an algorithm whose structure includes and/or is represented by multiple sub-steps, examples of which are provided in more detail below.
At step 1702, the patient takes two-dimensional photographs 1626 of their teeth. The photographs may include left cheek, right cheek, anterior, upper bite, and lower bite views of the patient's teeth. The capture of the two-dimensional photographs 1626 may be directed by the artificial intelligence guidance system discussed above.
At step 1706, the treatment plan is collected. As discussed above, the treatment plan may be collected or generated. Treatment plan 1622 may include the tooth meshes and the planned tooth movements for the initial treatment of the patient's dentition.
At step 1710, the three-dimensional parameterization module 1604 receives the treatment plan data 1622 and the image data 1626 and uses this data to generate a three-dimensional model of the patient's dentition in its current configuration by determining the appropriate positions of the patient's teeth and placing the segmented tooth models from the treatment plan data in those positions. The three-dimensional parameterization module 1604 may use information such as the error data discussed above in order to determine the three-dimensional positions of the patient's teeth. In some embodiments, in addition to the three cheek images discussed above, upper and lower arch bite photographs may be used to determine the three-dimensional positions of the patient's teeth. Various methods may be used to align the three-dimensional model of the patient's teeth with the two-dimensional images of the patient's teeth. For example, in some embodiments, a differentiable rendering algorithm may be used to align the teeth, or an expectation maximization algorithm may be used to match the position and orientation of the three-dimensional model of each tooth with the corresponding position and orientation of that tooth in the two-dimensional images.
At step 1712, the three-dimensional parameterization module 1604 may output the new segmented dental mesh model of the patient's teeth in their current positions. Fig. 18 illustrates a segmented mesh dental arch generated from an existing scan of a patient's teeth and a 2D image of the patient's teeth using the algorithms discussed above according to embodiments herein. Alignment 1810 illustrates aligning a grid depicting an expected position of the patient's teeth 1804 according to a treatment plan with a grid of actual current positions of the patient's teeth 1802 generated using a segmented three-dimensional model of the patient's teeth and a two-dimensional image captured by the patient.
The alignment 1820 shows the alignment of a mesh depicting the three-dimensional mesh model 1806 of the patient's teeth for the future treatment plan with the mesh of the actual current positions of the patient's teeth 1802 generated using the segmented three-dimensional model of the patient's teeth and the two-dimensional images captured by the patient. The close agreement between the two models shows that the algorithm discussed above produces a mesh of sufficient accuracy for use in treatment planning without the need to rescan the patient's teeth.
At step 1714, the treatment plan module 1606 may use the new segmented dental mesh model output by the three-dimensional parameterization module 1604 along with treatment plan information in order to generate a modified treatment plan that moves the patient's teeth from the new current position to the desired final position. In some embodiments, the modified treatment plan may move the teeth to a different new desired final position.
The updated treatment plan may be used to make a new dental appliance to move the patient's teeth from the new current position to the desired final position.
Computing system
Fig. 19 is a block diagram of an example computing system 1910 capable of implementing one or more of the embodiments described and/or illustrated herein. For example, all or a portion of computing system 1910 may perform and/or be a means for performing one or more of the steps described herein (such as one or more of the steps shown in fig. 3, 7, 14, 15, and 17), alone or in combination with other elements. All or a portion of computing system 1910 may also perform and/or be a means for performing any other steps, methods, or processes described and/or illustrated herein.
Computing system 1910 broadly represents any single-processor or multi-processor computing device or system capable of executing computer-readable instructions. Examples of computing system 1910 include, but are not limited to, workstations, laptops, client-side terminals, servers, distributed computing systems, handheld devices, or any other computing system or device. In its most basic configuration, computing system 1910 may include at least one processor 1914 and system memory 1916.
The processor 1914 generally represents any type or form of physical processing unit (e.g., a hardware-implemented central processing unit) capable of processing data or interpreting and executing instructions. In certain embodiments, the processor 1914 may receive instructions from a software application or module. The instructions may cause the processor 1914 to perform the functions of one or more of the example embodiments described and/or illustrated herein.
The system memory 1916 generally represents any type or form of volatile or non-volatile storage device or medium capable of storing data and/or other computer-readable instructions. Examples of system memory 1916 include, but are not limited to, random Access Memory (RAM), read Only Memory (ROM), flash memory, or any other suitable memory device. Although not required, in certain embodiments, the computing system 1910 can include both volatile memory units (such as, for example, system memory 1916) and nonvolatile storage (such as, for example, main storage 1932 as described in detail below). In one example, one or more of the virtual dental care modules 108 from fig. 1A may be loaded in the system memory 1916.
In some examples, the system memory 1916 may store and/or load an operating system 1940 for execution by the processor 1914. In one example, operating system 1940 can include and/or represent software that manages computer hardware and software resources and/or provides common services to computer programs and/or applications on computing system 1910. Examples of operating systems 1940 include, but are not limited to, LINUX, JUNOS, MICROSOFT WINDOWS, WINDOWS MOBILE, MAC OS, APPLE'S IOS, UNIX, GOOGLE CHROME OS, GOOGLE'S ANDROID, SOLARIS, variations of one or more of the above, and/or any other suitable operating system.
In some embodiments, the example computing system 1910 may include one or more components or elements in addition to the processor 1914 and the system memory 1916. For example, as shown in fig. 19, computing system 1910 may include a memory controller 1918, an input/output (I/O) controller 1920, and a communication interface 1922, each of which may be interconnected via a communication infrastructure 1912. Communication infrastructure 1912 generally represents any type or form of infrastructure capable of facilitating communication between one or more components of a computing device. Examples of communication infrastructure 1912 include, but are not limited to, communication buses such as Industry Standard Architecture (ISA), peripheral Component Interconnect (PCI), PCI Express (PCIe), or similar buses, and networks.
Memory controller 1918 generally represents any type or form of device capable of processing memory or data or controlling communication between one or more components of computing system 1910. For example, in some embodiments, memory controller 1918 may control communications between processor 1914, system memory 1916, and I/O controller 1920 via communications infrastructure 1912.
I/O controller 1920 generally represents any type or form of module capable of coordinating and/or controlling the input and output functions of a computing device. For example, in certain embodiments, the I/O controller 1920 may control or facilitate the transfer of data between one or more elements of the computing system 1910 (such as the processor 1914, the system memory 1916, the communication interface 1922, the display adapter 1926, the input interface 1930, and the storage interface 1934).
As shown in fig. 19, computing system 1910 may also include at least one display device 1924 coupled to I/O controller 1920 via a display adapter 1926. Display device 1924 generally represents any type or form of device capable of visually displaying information forwarded by display adapter 1926. Similarly, the display adapter 1926 generally represents any type or form of device configured to forward graphics, text, and other data from the communication infrastructure 1912 (or from a frame buffer as known in the art) for display on the display device 1924.
As shown in fig. 19, the example computing system 1910 may also include at least one input device 1928 coupled to the I/O controller 1920 via an input interface 1930. Input device 1928 generally represents any type or form of input device capable of providing computer-or human-generated input to example computing system 1910. Examples of input devices 1928 include, but are not limited to, a keyboard, a pointing device, a voice recognition apparatus, variations or combinations of one or more of the above, and/or any other input device.
Additionally or alternatively, the example computing system 1910 may include additional I/O devices. For example, example computing system 1910 may include I/O devices 1936. In this example, I/O devices 1936 can include and/or represent user interfaces that facilitate human interaction with computing system 1910. Examples of I/O devices 1936 include, but are not limited to, a computer mouse, keyboard, monitor, printer, modem, camera, scanner, microphone, touch screen device, variations or combinations of one or more of these devices, and/or any other I/O device.
Communication interface 1922 broadly represents any type or form of communication device or adapter capable of facilitating communication between example computing system 1910 and one or more additional devices. For example, in some embodiments, communication interface 1922 may facilitate communication between computing system 1910 and a private or public network including additional computing systems. Examples of communication interface 1922 include, but are not limited to, a wired network interface (such as a network interface card), a wireless network interface (such as a wireless network interface card), a modem, and any other suitable interface. In at least one embodiment, the communication interface 1922 may provide a direct connection to a remote server via a direct link to a network such as the internet. Communication interface 1922 may also provide such a connection indirectly through, for example, a local area network (such as an ethernet network), a personal area network, a telephone or cable network, a cellular telephone connection, a satellite data connection, or any other suitable connection.
In some embodiments, communication interface 1922 may also represent a host adapter configured to facilitate communication between computing system 1910 and one or more additional networks or storage devices via an external bus or communication channel. Examples of host adapters include, but are not limited to, small Computer System Interface (SCSI) host adapters, universal Serial Bus (USB) host adapters, institute of Electrical and Electronics Engineers (IEEE) 1394 host adapters, advanced Technology Attachment (ATA), parallel ATA (PATA), serial ATA (SATA) and external SATA (eSATA) host adapters, fibre channel interface adapters, or Ethernet adapters, among others. The communication interface 1922 may also allow the computing system 1910 to participate in distributed or remote computing. For example, the communication interface 1922 may receive instructions from a remote device or send instructions to a remote device for execution.
In some examples, the system memory 1916 may store and/or load network communication programs 1938 for execution by the processor 1914. In one example, network communication program 1938 can include and/or represent software that enables computing system 1910 to establish a network connection 1942 with another computing system (not shown in FIG. 19) and/or to communicate with another computing system via communication interface 1922. In this example, network communication program 1938 can direct the flow of outbound traffic sent to the other computing system via network connection 1942. Additionally or alternatively, network communication program 1938 can, in connection with processor 1914, direct the processing of inbound traffic received from the other computing system via network connection 1942.
Although not shown in fig. 19 in this manner, network communication programs 1938 may alternatively be stored and/or loaded in communication interface 1922. For example, network communication programs 1938 may include and/or represent at least a portion of software and/or firmware executed by a processor and/or Application Specific Integrated Circuit (ASIC) incorporated in communication interface 1922.
As shown in fig. 19, the example computing system 1910 may also include a primary storage 1932 and a backup storage 1933 coupled to the communication infrastructure 1912 via a storage interface 1934. Storage devices 1932 and 1933 generally represent any type or form of storage device or medium capable of storing data and/or other computer-readable instructions. For example, storage devices 1932 and 1933 may be magnetic disk drives (e.g., so-called hard disk drives), solid state drives, floppy disk drives, magnetic tape drives, optical disk drives, or flash drives, among others. Storage interface 1934 generally represents any type or form of interface or device for transferring data between storage devices 1932 and 1933 and other components of computing system 1910. In one example, virtual dental care data store(s) 120 from fig. 1A may be stored and/or loaded in main storage 1932.
In certain embodiments, storage devices 1932 and 1933 may be configured to read from and/or write to removable storage units configured to store computer software, data, or other computer-readable information. Examples of suitable removable storage units include, but are not limited to, floppy disks, magnetic tape, optical disks, or flash memory devices, among others. Storage devices 1932 and 1933 may also include other similar structures or devices for allowing computer software, data, or other computer-readable instructions to be loaded into computing system 1910. For example, storage devices 1932 and 1933 may be configured to read and write software, data, or other computer-readable information. Storage devices 1932 and 1933 may also be part of computing system 1910 or may be separate devices accessed through other interface systems.
Many other devices or subsystems may be connected to computing system 1910. Conversely, all of the components and devices illustrated in fig. 19 need not be present to practice the embodiments described and/or illustrated herein. The above-mentioned devices and subsystems may also be interconnected in different ways from that shown in fig. 19. The computing system 1910 may also employ any number of software, firmware, and/or hardware configurations. For example, one or more of the example embodiments disclosed herein may be encoded as a computer program (also referred to as computer software, software applications, computer-readable instructions, or computer control logic) on a computer-readable medium. The term "computer-readable medium" as used herein generally refers to any form of device, carrier, or medium capable of storing or carrying computer-readable instructions. Examples of computer readable media include, but are not limited to, transmission media such as carrier waves and non-transitory media such as magnetic storage media (e.g., hard disk drives, tape drives, and floppy disks), optical storage media (e.g., compact Discs (CDs), digital Video Discs (DVDs), and blu-ray discs), electronic storage media (e.g., solid state drives and flash memory media), and other distribution systems.
A computer-readable medium containing a computer program may be loaded in computing system 1910. All or a portion of the computer program stored on the computer readable medium can then be stored in the system memory 1916 and/or various portions of the storage devices 1932 and 1933. The computer programs loaded into the computing system 1910 when executed by the processor 1914 may cause the processor 1914 to perform and/or be a means for performing the functions of one or more of the example embodiments described and/or illustrated herein. Additionally or alternatively, one or more of the example embodiments described and/or illustrated herein may be implemented in firmware and/or hardware. For example, computing system 1910 may be configured as an Application Specific Integrated Circuit (ASIC) adapted to implement one or more of the example embodiments disclosed herein.
Fig. 20 is a block diagram of an example network architecture 2000 in which client systems 2010, 2020, and 2030 and servers 2040 and 2045 may be coupled to a network 2050. As detailed above, all or a portion of network architecture 2000 may perform and/or be a means for performing one or more of the steps disclosed herein (such as one or more of the steps shown in fig. 3, 7, 14, 15, and 17), alone or in combination with other elements. All or a portion of network architecture 2000 may also be used to perform and/or be a means for performing other steps and features set forth in the instant disclosure.
Client systems 2010, 2020, and 2030 generally represent any type or form of computing device or system, such as example computing system 1910 in fig. 19. Similarly, servers 2040 and 2045 generally represent computing devices or systems, such as application servers or database servers, configured to provide various database services and/or to run certain software applications. Network 2050 generally represents any telecommunications or computer network including, for example, an intranet, WAN, LAN, PAN, or the internet. In one example, client systems 2010, 2020 and/or 2030 and/or servers 2040 and/or 2045 may include all or a portion of system 100 from fig. 1A.
As shown in fig. 20, one or more storage devices 2060 (1) to (N) may be directly attached to the server 2040. Similarly, one or more storage devices 2070 (1) through (N) may be directly attached to the server 2045. Storage devices 2060 (1) through 2060 (N) and storage devices 2070 (1) through 2070 (N) generally represent any type or form of storage device or medium capable of storing data and/or other computer-readable instructions. In certain embodiments, storage devices 2060 (1) to 2060 (N) and storage devices 2070 (1) to 2070 (N) may represent Network Attached Storage (NAS) devices that are configured to communicate with servers 2040 and 2045 using various protocols, such as Network File System (NFS), server Message Block (SMB), or Common Internet File System (CIFS).
Servers 2040 and 2045 may also be connected to a Storage Area Network (SAN) fabric 2080. SAN fabric 2080 generally represents any type or form of computer network or architecture capable of facilitating communication between a plurality of storage devices. SAN fabric 2080 may facilitate communication between servers 2040 and 2045 and a plurality of storage devices 2090 (1) to 2090 (N) and/or intelligent storage array 2095. SAN fabric 2080 may also facilitate communication between client systems 2010, 2020, and 2030 and storage devices 2090 (1) through 2090 (N) and/or smart storage array 2095 via network 2050 and servers 2040 and 2045 such that devices 2090 (1) through 2090 (N) and array 2095 are represented as locally attached devices of client systems 2010, 2020, and 2030. As with storage devices 2060 (1) to 2060 (N) and storage devices 2070 (1) to 2070 (N), storage devices 2090 (1) to 2090 (N) and smart storage array 2095 generally represent any type or form of storage device or medium capable of storing data and/or other computer-readable instructions.
In some embodiments, and with reference to the example computing system 1910 of fig. 19, a communication interface (such as the communication interface 1922 of fig. 19) may be used to provide connectivity between each of the client systems 2010, 2020, and 2030 and the network 2050. Client systems 2010, 2020, and 2030 may be capable of accessing information on servers 2040 or 2045 using, for example, a web browser or other client software. Such software may allow client systems 2010, 2020, and 2030 to access data stored by server 2040, server 2045, storage 2060 (1) to 2060 (N), storage 2070 (1) to 2070 (N), storage 2090 (1) to 2090 (N), or smart storage array 2095. Although fig. 20 depicts the use of a network (such as the internet) to exchange data, the embodiments described and/or illustrated herein are not limited to the internet or any particular network-based environment.
In at least one embodiment, all or a portion of one or more of the example embodiments disclosed herein may be encoded as a computer program and loaded onto and executed by server 2040, server 2045, storage 2060 (1) to 2060 (N), storage 2070 (1) to 2070 (N), storage 2090 (1) to 2090 (N), smart storage array 2095, or any combination thereof. All or a portion of one or more of the example embodiments disclosed herein may also be encoded as a computer program, stored in server 2040, run by server 2045, and distributed to client systems 2010, 2020, and 2030 over network 2050.
As detailed above, one or more components of computing system 1910 and/or network architecture 2000 may perform and/or be a means for performing, either alone or in combination with other elements, one or more steps of an example method for virtual care.
Photo guidance based on treatment
As discussed herein, to perform virtual orthodontic care, virtual dental care, and/or other telemedicine, a practitioner may wish to visually inspect a patient's dentition. For example, a practitioner may wish to examine patient progress during a treatment plan, diagnose possible problems, and modify the treatment plan as needed. The dental practitioner, or the treatment data 138 including the treatment plan, may be used to determine clinically relevant views from which to image the patient's teeth.
Using the determined views, as described herein, the systems and methods provided in the present disclosure may utilize artificial intelligence or other guidance means to provide guidance to a patient regarding taking clinically relevant orthodontic photographs. The systems and methods provided in the present disclosure may improve the functionality of a computing device by more efficiently acquiring image data, which may further reduce storage requirements and network bandwidth. Furthermore, the systems and methods provided herein may improve the virtual medical field by improving the functional capabilities of the remote device.
Fig. 21 depicts a method 2100 for acquiring and using clinically relevant images of a patient's teeth. The method may include determining clinically relevant photo views for taking clinically relevant images of the patient's teeth, taking the clinically relevant images of the patient's teeth, which may include providing guidance for capturing the images, and providing orthodontic treatment guidance or intervention based on the clinically relevant images.
At block 2110, a clinically relevant photo or image view is determined. The clinically relevant photo or image view may include one or more of the teeth to be included in the image and a position, an orientation, and a field of view of the image relative to the patient's dentition. In some embodiments, a dental professional (such as the treating orthodontist or dentist) can request images of one or more of the patient's teeth from one or more directions. For example, a dental professional may indicate that an upper central incisor should be included in the image and that the image should be obtained from both the occlusal and buccal directions. In some embodiments, a treatment plan may be used to determine the views from which the clinically relevant images are captured.
During orthodontic treatment, clinically relevant images are taken in order to capture the movement of the patient's teeth. To capture movement of a patient's teeth, images should be acquired from one or more views that are orthogonal to the plane of tooth translation or parallel to the axis about which the patient's teeth rotate. Fig. 22A-22D depict example axes about which a tooth may move. Fig. 22A depicts an isometric view of an upper central incisor. A mesial-distal axis (2220) may extend through the width of the tooth in the mesial-distal direction and may be referred to as the y-axis, a buccal-lingual axis (2210) may extend through the thickness of the tooth in the buccal-lingual direction and may be referred to as the x-axis, and an occlusal-gingival axis (2230) may extend through the length of the tooth in the occlusal-gingival direction and may be referred to as the z-axis. In some embodiments, the x-axis, y-axis, and z-axis are orthogonal to each other.
When tooth movement during the treatment stage includes translation in the XZ plane shown in fig. 22B, or along the x-axis or z-axis, or includes rotation about the y-axis, then clinically relevant images of the patient's teeth may include images taken along the y-axis, normal to the XZ plane, or from a view offset somewhat from that direction. When tooth movement during the treatment stage includes translation in the XY plane shown in fig. 22D, or along the x-axis or y-axis, or includes rotation about the z-axis, then clinically relevant images of the patient's teeth may include images taken along the z-axis, normal to the XY plane, or from a view offset somewhat from that direction. When tooth movement during the treatment stage includes translation in the YZ plane shown in fig. 22C, or along the y-axis or z-axis, or includes rotation about the x-axis, then clinically relevant images of the patient's teeth may include images taken along the x-axis, normal to the YZ plane, or from a view offset somewhat from that direction.
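The mapping from planned movement to viewing direction described above can be sketched as follows; the axis convention follows fig. 22A, while the function, thresholds, and returned view labels are illustrative assumptions.

```python
import numpy as np

# Canonical viewing directions in the per-tooth frame described above:
# x = buccal-lingual, y = mesial-distal, z = occlusal-gingival.
VIEW_FOR_AXIS = {0: "view along the buccal-lingual axis (x)",
                 1: "view along the mesial-distal axis (y)",
                 2: "occlusal view along the occlusal-gingival axis (z)"}

def views_for_movement(translation, rotation_axis=None):
    """Pick viewing directions that keep the planned movement in the image plane.

    translation: planned (dx, dy, dz) translation of the tooth for the stage.
    rotation_axis: optional unit vector of the planned rotation axis.
    A translation is best seen from a view direction perpendicular to it, i.e.
    along the axis with the smallest translation component; a rotation is best
    seen looking along its rotation axis.
    """
    views = set()
    t = np.abs(np.asarray(translation, dtype=float))
    if t.max() > 0:
        views.add(VIEW_FOR_AXIS[int(np.argmin(t))])
    if rotation_axis is not None:
        views.add(VIEW_FOR_AXIS[int(np.argmax(np.abs(rotation_axis)))])
    return views
```

For example, a translation lying mostly in the XZ plane has its smallest component along y, so the selected view is along the y-axis, consistent with the rules above.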
As discussed above, a dental professional may select a view for capturing movement of a patient's teeth. In some embodiments, a treatment plan may be used in order to determine views from which to take clinically relevant images. As discussed elsewhere herein, the treatment plan may include a series of stages for moving the patient's teeth from an initial position toward a final position. Each treatment stage may include an initial position for that stage and a final position for that stage. The final position of a first treatment stage may be or correspond to the initial position of a second treatment stage. At block 2110, the virtual dental care system 106 may use the treatment data 138, including treatment plan data, to determine which teeth are moving, in which directions, and about which axes they translate and/or rotate during a particular treatment stage. Based on the determination, one or more views for capturing the movement of the patient's teeth may be determined. In some embodiments, determining which teeth are moving may include determining which teeth are scheduled to make difficult movements during the treatment stage, such as rotating cuspids, intruding teeth, or extruding teeth.
In some embodiments, one or more views may be selected from one or more predetermined views. For example, the predetermined views may be a cheek view of the patient's teeth from a cheek direction centered on the patient's midline, one or more cheek views of the patient's teeth taken from positions offset to one side or the other of the patient's midline (e.g., offset 15°, 30°, 45°, 60°, 75°, and 90° to the left and right of the patient's midline), and one or more bite views taken from occlusal positions to capture the occlusal or incisal surfaces of the patient's teeth.
In some embodiments, one or more views may be selected from one or more predetermined views of each of the patient's teeth. For example, the treatment plan may indicate that one or more teeth are moving or rotating beyond a threshold amount during a particular phase of the treatment plan. The selected views may include a cheek image and a bite image of each tooth that moves or rotates beyond the threshold amount during that phase of the treatment plan. In some embodiments, a single image may capture more than one tooth, and views of adjacent teeth that fall within the field of view of an imaging system (such as a camera) may therefore be merged. For example, after determining the desired clinically relevant views of each tooth, the virtual dental care system 106 can merge one or more views, such as views of adjacent teeth. For example, in some embodiments, if both the upper left and upper right central incisors move during a particular treatment phase, the virtual dental care system 106 may initially determine that buccal and occlusal images of both the upper right and upper left central incisors should be captured; at the merging step, however, the two buccal images may be merged into a single buccal image and the two occlusal images into a single occlusal image capturing both teeth.
In some embodiments, at block 2110, a patient-specific view is determined. For example, the predicted tooth movement during a treatment phase may be evaluated, and based on the predicted tooth movement, a patient-specific view may be determined. While tooth movement in a treatment plan may be represented as vectors along three axes as discussed above, not all tooth movement is along one of the orthogonal axes. In some embodiments, the movement of each tooth during each stage of treatment may be determined, and the position and orientation of a clinically relevant view of the tooth, taken from a perspective orthogonal to the tooth movement, may then be determined; this perspective may serve as the patient-specific view of the patient's teeth for capturing that stage of treatment.
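A minimal sketch of computing such a patient-specific view for an off-axis movement is shown below; the working distance and the buccal bias are illustrative assumptions, and the movement vector is assumed to be nonzero.

```python
import numpy as np

def patient_specific_view(movement_vector, tooth_center, distance_mm=80.0):
    """Compute a camera placement whose optical axis is orthogonal to the planned
    tooth movement, so the movement occurs within the image plane.

    movement_vector: planned (nonzero) displacement of the tooth, possibly off-axis.
    tooth_center: 3D center of the tooth in the dentition model.
    distance_mm: assumed working distance of the camera; illustrative value.
    """
    move = np.asarray(movement_vector, dtype=float)
    move /= np.linalg.norm(move)
    # Choose a view direction perpendicular to the movement, biased toward the
    # buccal (x) direction so the camera stays outside the mouth.
    reference = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(reference, move)) > 0.9:          # movement is mostly buccal-lingual
        reference = np.array([0.0, 0.0, 1.0])       # fall back to the occlusal direction
    view_dir = reference - np.dot(reference, move) * move   # project out the movement
    view_dir /= np.linalg.norm(view_dir)
    camera_position = tooth_center + distance_mm * view_dir
    return camera_position, -view_dir    # camera position and look-at direction
```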
Figs. 23A-23C depict images determined based on a treatment plan. Fig. 23A depicts a model 2310 of a first treatment stage of a patient's dentition. The model may include a three-dimensional model formed of connected vertices depicting the surfaces of the patient's teeth. Fig. 23B depicts a similar model 2320 of a second treatment stage of the patient's dentition. Fig. 23C depicts a three-dimensional model 2340 of the patient's teeth that is color coded or shaded based on the per-vertex distance between the surfaces of the patient's teeth at the first stage and at the second stage. The darker shaded teeth (e.g., teeth 2344 and 2346) move during this treatment stage, while tooth 2342 does not move or moves very little during this stage of treatment. At block 2110, the virtual dental care system 106 may determine that teeth 2344 and 2346 should be imaged in order to evaluate the dentition of the patient. Based on that determination, the virtual dental care system 106 can determine one or more clinically relevant views.
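For illustration, the shading of fig. 23C can be approximated as a per-vertex displacement map between the two stage meshes. The sketch below assumes the stage models share vertex ordering, which is an assumption of this example and not something the figures state.

```python
import numpy as np

def vertex_displacement(stage1_vertices, stage2_vertices):
    """Return per-vertex displacement magnitudes between two stage models.

    Both inputs are (N, 3) arrays with matching vertex order.
    """
    v1 = np.asarray(stage1_vertices, dtype=float)
    v2 = np.asarray(stage2_vertices, dtype=float)
    return np.linalg.norm(v2 - v1, axis=1)

def shade(displacements, max_mm=1.0):
    """Map displacements to a 0..1 shading value (darker = more movement)."""
    return np.clip(displacements / max_mm, 0.0, 1.0)

# Two toy "meshes" of three vertices each.
before = [[0, 0, 0], [1, 0, 0], [2, 0, 0]]
after = [[0, 0, 0], [1.3, 0, 0], [2, 0.8, 0]]
print(shade(vertex_displacement(before, after)))
```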
Fig. 23D depicts a model 2350 of the patient's dentition and a first clinically relevant view 2352 for capturing movement of the patient's teeth 2344 and 2346. View 2352 is a buccal view of the patient's teeth and may capture movement in the mesial-distal direction and in the occlusal-gingival direction. Fig. 23E depicts a model 2360, which may be the same model as model 2350 of the patient's dentition, and a second clinically relevant view 2362 for capturing movement of the patient's teeth 2344 and 2346. View 2362 is an occlusal view of the patient's teeth and may capture movement in both the buccal-lingual direction and the mesial-distal direction. View 2362 may also capture rotation of the teeth about the occlusal-gingival axis.
After determining the clinically relevant views, the process may proceed to block 2120. At block 2120, guidance may be provided to obtain photographs based on the clinically relevant views. Guidance may be provided as discussed herein in the "smart photo guidance" section with respect to figs. 2-5. For example, one or more of the systems described herein may receive the clinically relevant views from the virtual dental care system 106 and an image data stream from a camera. For example, the camera module 204 may receive the image data stream 222 from the camera 132 of the system 200 or from another camera in communication with the system 200.
The one or more systems may then compare images from the image data stream to the clinically relevant views, e.g., using an artificial intelligence scheme to determine one or more binary classifications and one or more categorical classifications from the image data stream. For example, the AI module 206 may determine a binary classification 224 and a category classification 226 from the image data stream 222. Based on the determination, the system may provide feedback on how to change the view provided in the data stream, or how to otherwise move the camera, in order to capture the clinically relevant view. In some embodiments, the feedback may include instructional prompts, which may refer to audio, visual, and/or haptic cues that provide instructions to a user. Examples of instructional prompts may include, but are not limited to, overlays on the device screen, text notifications, verbal instructions, tones or other sounds, vibrations, and the like.
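A minimal sketch of this comparison step is given below; the attribute names and classifier outputs are hypothetical stand-ins for the binary and categorical classifications produced by the AI module, and the prompt strings are purely illustrative.

```python
def guidance_for_frame(requested_view, frame_labels):
    """Compare classifier outputs for the current frame against the requested
    clinically relevant view and return an instructional prompt, if any.

    requested_view: dict of required attributes, e.g. {"view": "occlusal",
        "aligner_worn": True}.
    frame_labels: dict of binary/categorical classifications for the frame,
        e.g. {"view": "buccal", "aligner_worn": False, "teeth_visible": True}.
    """
    if not frame_labels.get("teeth_visible", False):
        return "Move the camera so the teeth fill the frame."
    if frame_labels.get("view") != requested_view["view"]:
        return f"Rotate the camera to capture the {requested_view['view']} view."
    if requested_view.get("aligner_worn") != frame_labels.get("aligner_worn"):
        return ("Please insert your aligner." if requested_view["aligner_worn"]
                else "Please remove your aligner.")
    return None  # frame matches the requested view; capture can proceed

print(guidance_for_frame({"view": "occlusal", "aligner_worn": True},
                         {"view": "buccal", "aligner_worn": True,
                          "teeth_visible": True}))
```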
The instructional prompt 228 may include instructions for a user to manipulate the system 200 into a configuration that can capture images that meet the requirements 234. For example, the instructions may include instructions to adjust the camera view to include a particular body part, such as to move the camera closer or farther away, pan/tilt/zoom the camera, change angles, track or otherwise move the camera, and so forth. The instructions may include instructions to insert or remove a particular appliance. The instructions may also include instructions to move a particular body part, such as to open or close the patient's bite, to open the patient's jaw wider, and so on. The instructions may include instructions to adjust one or more camera settings, such as zoom, focus, turning the flash on/off, etc.
When the system determines that the data stream includes a clinically relevant view, the system may automatically capture the image or may instruct the user to provide input to capture the image. This process may be repeated for each of the clinically relevant views.
After capturing the clinically relevant views, the process may proceed to block 2130. At block 2130, orthodontic treatment guidance or intervention may be provided. Guidance or intervention may be provided as discussed herein in the "guidance generation" and "photo-based treatment refinement" sections with respect to figs. 13-18. For example, the guidance generation module 1304 may receive guidance rules and apply the received rules to the patient's current dentition based on the measurement data 1322 and the treatment plan data 1514. As discussed herein, the guidance may include guidance regarding the timing of switching aligners, e.g., guidance to wear the current aligner for an additional period of time before changing to the appliance for the next treatment stage, or to change to the next stage at an earlier time based on the thresholds discussed above. The guidance may also include guidance to switch from wearing aligners to wearing a retainer. Other interventions may include instructions on how and when to use a chewable seating aid, when to schedule an orthodontic follow-up appointment, and other instructions. In some embodiments, for example where the treatment plan includes the use of attachments, the guidance generation module may generate guidance for the patient or doctor based on a missing or detached attachment.
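As a rough illustration of rule-based guidance generation, the sketch below maps a measured patient state to guidance messages. The field names, thresholds, and messages are hypothetical and do not reflect the actual guidance templates or rules described herein.

```python
def generate_guidance(state):
    """Apply simple, illustrative guidance rules to a measured patient state.

    state: dict such as {"fit_gap_mm": 0.8, "attachment_missing": False,
        "days_on_current_stage": 14, "planned_days_per_stage": 14}.
    Returns a list of guidance messages for the patient and/or doctor.
    """
    guidance = []
    if state.get("fit_gap_mm", 0.0) >= 0.75:
        guidance.append("Wear the current aligner for an additional period "
                        "and use a chewable aid to help seat it.")
    if state.get("attachment_missing"):
        guidance.append("Notify the doctor: an attachment appears to be "
                        "missing or detached.")
    if (state.get("days_on_current_stage", 0)
            >= state.get("planned_days_per_stage", 14)
            and state.get("fit_gap_mm", 0.0) < 0.5):
        guidance.append("Change to the aligner for the next treatment stage.")
    return guidance

print(generate_guidance({"fit_gap_mm": 0.2, "attachment_missing": True,
                         "days_on_current_stage": 14,
                         "planned_days_per_stage": 14}))
```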
The guidance may be sent to one or both of the patient and the doctor, or may be reviewed and revised by the doctor before being sent to the patient. For example, the guidance may be sent via text message, email, smartphone- or browser-based application notification, automated phone call, calendar invitation, or other forms of messaging and communication. In some embodiments, the guidance may include both textual information and audiovisual information, such as videos or images that illustrate the proper use of a chewable aid.
The intervention may include a modification to the treatment plan. For example, if it is determined that the patient's orthodontic treatment is off track to a sufficient extent that a new treatment plan should be formulated, the clinically relevant images may be used to generate a three-dimensional model of the patient's current dentition. The current dentition may then be used to formulate an updated treatment plan for moving the teeth from their current positions toward final positions, for example as shown and described herein with respect to figs. 16 and 17.
Virtual care - aligner fit
Using a remote orthodontic or virtual care system as described herein, a patient can take photographs of their own dentition and send those photographs to their doctor. The doctor may then evaluate the patient's progress toward the treatment goals. As described herein, a doctor may evaluate the patient's actual dentition via the photographs and the virtual care system. However, patients and doctors may also wish to use remote orthodontics to evaluate orthodontic appliances, such as assessing "aligner fit," i.e., the quality of seating of an aligner on the patient's dentition.
When transparent aligners are used to treat a patient, aspects of aligner fit may be visible in photographs taken of the patient. As further described herein, the present disclosure provides systems and methods for remotely assessing the quality of placement of transparent aligners.
Fig. 24 is a flow chart of an exemplary computer-implemented method 2400 for evaluating the placement quality of a transparent aligner. The steps illustrated in fig. 24 may be performed by any suitable computer executable code and/or computing system, including the systems illustrated in fig. 1A, 1B, 1C, 1D, 1E, 2, 3, 4, 6, 13, 15, 16, 19, and 20. In one example, each of the steps shown in fig. 24 may represent an algorithm whose structure includes and/or is represented by multiple sub-steps, examples of which are provided in greater detail below.
As shown in fig. 24, at step 2410, one or more of the systems described herein may receive image data of a patient's dentition and orthodontic appliances. For example, example system 200 in fig. 2, system 600 in fig. 6, system 1300 in fig. 13, or system 1600 in fig. 16 may receive image data similar to image data stream 222, image data 232, 2D image 624, etc. of a patient's dentition. As described herein, patients may take their own photographs of their own dentition using their own devices (e.g., using dental consumer/patient system 102). The image data may include image data captured with the patient wearing their orthodontic appliance, which may be a transparent aligner. The patient may capture image data during the middle or near the end of a treatment stage, although the patient may capture image data at any time.
The system described herein may perform step 2410 in various ways. In one example, the image data may be uploaded from the patient device to another computing device, such as a server or other computer (e.g., virtual dental care system 106 and/or dental professional system 150) for further processing. In other examples, the image data may be processed on a patient device.
Fig. 25A illustrates image data 2500 of a patient's dentition including an orthodontic appliance.
At step 2420, one or more of the systems described herein may identify an orthodontic appliance from the image data. For example, the example system 200 in fig. 2, the system 600 in fig. 6, the system 1300 in fig. 13, or the system 1600 in fig. 16 may identify an orthodontic appliance, which may be a transparent aligner.
The system described herein may perform step 2420 in various ways. In one example, semantic segmentation may be performed to classify each pixel of the image data into one of a plurality of classes. For example, a probability of belonging to each class may be determined for each pixel of the image data. Each pixel may be classified into the class for which the pixel has the highest probability of matching. The classes may include, for example, a tooth class indicating the patient's teeth (which may be the portion of a tooth not covered by the orthodontic appliance), a gap class indicating a gap between the orthodontic appliance and a corresponding gingival margin, and a spacing class indicating a spacing between an incisal edge of the orthodontic appliance and an incisal edge of a corresponding tooth. In other examples, other classes may be used, such as a gum class corresponding to the patient's gums, an appliance class, and the like. By performing semantic segmentation, pixels corresponding to the orthodontic appliance (e.g., the gap class and the spacing class) may be distinguished from pixels corresponding to the patient's dentition without the appliance (e.g., the tooth class). As will be described further below, the gap class and/or the spacing class may also correspond to misalignments.
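The per-pixel classification described above amounts to taking, for each pixel, the class with the highest predicted probability. A minimal sketch, with assumed class names, is shown below.

```python
import numpy as np

# Hypothetical class indices for the per-pixel segmentation described above.
CLASSES = ["background", "tooth", "gap", "spacing"]

def classify_pixels(class_probs):
    """Assign each pixel to the class with the highest predicted probability.

    class_probs: (H, W, C) array of per-pixel class probabilities, e.g. the
    softmax output of a segmentation network.
    Returns an (H, W) integer mask of class indices.
    """
    return np.argmax(np.asarray(class_probs), axis=-1)

# Toy example: a 1x2 "image" where one pixel is most likely tooth and the
# other is most likely spacing.
probs = [[[0.1, 0.7, 0.1, 0.1],
          [0.1, 0.2, 0.1, 0.6]]]
print(classify_pixels(probs))  # [[1 3]]
```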
Fig. 25B illustrates mask data 2502 in which the semantic segmentation has identified a gap region 2510, a spacing region 2520, and a spacing region 2530. Fig. 25C illustrates image data 2504 in which the mask data 2502 is overlaid on the image data 2500 to better show how semantic segmentation may produce the mask data 2502.
In some examples, semantic segmentation may be performed using machine learning. For example, a neural network 406 or other machine learning scheme may be used to perform the semantic segmentation. In some examples, the neural network 406 may be trained to perform semantic segmentation by inputting an image dataset (such as a training dataset) to the neural network for semantic segmentation. The training dataset may have a corresponding mask dataset of the desired semantic segmentation. Training may also include calculating an error between the output of the neural network (e.g., produced by performing semantic segmentation) and the mask dataset corresponding to the image dataset, and adjusting parameters of the neural network 406 to reduce the error.
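A simplified training loop of this kind might look like the following sketch, written with PyTorch primitives; the tiny network, dataset shapes, and hyperparameters are placeholders and are not the architecture of neural network 406.

```python
import torch
from torch import nn

# A deliberately tiny segmentation network standing in for neural network 406;
# a real system would use a deeper architecture (e.g., an encoder-decoder model).
class TinySegNet(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, num_classes, kernel_size=1),
        )

    def forward(self, x):
        return self.net(x)  # (N, num_classes, H, W) logits

def train_step(model, optimizer, images, masks):
    """One training step: compare predicted segmentation to the desired mask
    dataset and adjust parameters to reduce the error."""
    criterion = nn.CrossEntropyLoss()
    optimizer.zero_grad()
    logits = model(images)           # forward pass (perform segmentation)
    loss = criterion(logits, masks)  # error vs. corresponding mask data
    loss.backward()                  # compute gradients
    optimizer.step()                 # adjust network parameters
    return loss.item()

model = TinySegNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.rand(2, 3, 64, 64)          # toy training images
masks = torch.randint(0, 4, (2, 64, 64))   # toy per-pixel class labels
print(train_step(model, optimizer, images, masks))
```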
In other examples, identifying the orthodontic appliance may include evaluating a color value of each pixel to identify portions of teeth without the orthodontic appliance and portions of teeth with the orthodontic appliance. For example, threshold-based segmentation may be used, in which each pixel may be classified using color thresholds corresponding to teeth, gums, the appliance over teeth, and the appliance not over teeth.
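A toy version of such threshold-based classification is sketched below; the HSV-style thresholds are invented for illustration and would in practice be tuned per tooth and per lighting condition, as discussed herein.

```python
# Hypothetical HSV-style thresholds; real values would be tuned per tooth
# and per lighting condition.
def threshold_segment(pixel_hsv):
    """Classify a single pixel by simple color thresholds (illustrative only)."""
    h, s, v = pixel_hsv
    if s < 0.15 and v > 0.75:
        return "tooth"                 # bright, low-saturation pixels
    if s < 0.10 and 0.45 < v <= 0.75:
        return "appliance_off_tooth"   # translucent aligner with no tooth behind it
    if h < 0.05 or h > 0.9:
        return "gum"                   # reddish hues
    return "other"

print(threshold_segment((0.02, 0.6, 0.5)))  # likely gum
print(threshold_segment((0.1, 0.05, 0.9)))  # likely tooth
```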
In other examples, identifying the orthodontic appliance may include applying one or more filters to the image data to determine tooth edges and orthodontic appliance edges. For example, edge-based segmentation may be used to find edges, and regions within the edges may be assigned to classes based on color features (such as the color thresholds described herein).
In some examples, the various segmentation schemes described herein may be applied on a per-tooth basis, such that different segmentation schemes may be applied to different identified teeth. By identifying tooth-to-tooth boundaries, each tooth may be analyzed to provide tooth-specific information or data. For example, color estimation may be applied to each tooth such that the color values and/or color thresholds are local to each tooth. Differences in illumination and/or actual differences between tooth colors may affect global color values, whereas local, per-tooth color analysis may make it easier to distinguish between classes. In another example, semantic segmentation may be applied to identify the spacing for each tooth. The semantic segmentation scheme may use a semantic segmentation model to find the spacing for a given tooth (such as the upper left central incisor). Alternatively, each tooth may be identified in the image data, and an identified spacing may be associated with the corresponding particular tooth.
At step 2430, one or more of the systems described herein may calculate a misalignment height of the misalignment of the orthodontic appliance relative to the dentition of the patient. For example, the example system 200 of fig. 2, the system 600 of fig. 6, the system 1300 of fig. 13, or the system 1600 of fig. 16 may calculate a misalignment height of a misalignment determined using the identified orthodontic appliance.
The system described herein may perform step 2430 in various ways. In one example, the misalignment height may be calculated from the pixel height of a misalignment, which may be identified from a displacement class such as the gap class and/or the spacing class described herein. For example, in figs. 25B and/or 25C, the pixel heights of the gap region 2510, the spacing region 2520, and the spacing region 2530 may be calculated.
As seen in fig. 25B and 25C, each misalignment may occur in several areas, such as across a horizontal range. In such examples, the misalignment dimension (e.g., height, length, and/or width) may be calculated from aggregating the plurality of identified misalignments. For example, for the spacing region 2530, various heights can be determined across the spacing region 2530. The misalignment height of the spacing region 2530 may be calculated using, for example, an 80 th percentile of the various heights, although in other examples, other percentiles may be used such that outliers may not significantly affect the misalignment height. Alternatively, other aggregate functions may be used, such as average, mode, etc. The misalignment height of the gap region 2510 and the spacing region 2520 may be similarly calculated.
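The percentile-based aggregation can be expressed compactly; the sketch below assumes per-column pixel heights have already been extracted from a misalignment region.

```python
import numpy as np

def misalignment_height(column_heights_px, percentile=80):
    """Aggregate per-column pixel heights of a misalignment region into a
    single height, using a percentile so isolated outliers have little effect."""
    heights = np.asarray(column_heights_px, dtype=float)
    return float(np.percentile(heights, percentile))

# Heights measured across a spacing region, with one outlier column.
print(misalignment_height([4, 5, 5, 6, 5, 14]))
```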
Although pixel heights may be used, in some examples the pixel heights may be converted to standard units of measure. For example, the patient's doctor may prefer to see a misalignment height measured in millimeters or another unit of measurement. To convert the pixel measurements, a reference object may be identified from the image data, which may be a subset of identifiable teeth, such as the incisors. The reference object may be selected based on having known measurements available. For example, incisor measurements may be obtained from the patient's treatment plan. The pixel height of an incisor can be determined from the image data (e.g., by determining the edges of the identified incisor and counting pixels along the desired dimension) and used with the incisor measurement to determine a scaling factor between pixels and the standard measurement unit (e.g., mm). The misalignment height may then be scaled from pixels to the standard measurement unit using the scaling factor.
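A minimal sketch of this pixel-to-millimeter conversion, assuming the reference incisor's real-world height is known from the treatment plan, follows.

```python
def pixel_to_mm_scale(reference_height_px, reference_height_mm):
    """Scale factor from pixels to millimeters using a reference tooth whose
    real-world height is known from the treatment plan (e.g., an incisor)."""
    return reference_height_mm / reference_height_px

def height_in_mm(misalignment_height_px, scale):
    return misalignment_height_px * scale

# Example: the incisor measures 120 px in the photo and 9.5 mm in the plan.
scale = pixel_to_mm_scale(120, 9.5)
print(round(height_in_mm(8, scale), 2))  # an 8 px gap converted to millimeters
```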
In some other examples, a global average of the per-tooth pixel measurements across all identified teeth may be used to determine the scaling factor, optionally excluding outliers. In further examples, the scaling factor may be determined by constructing a pixel-to-mm scale field over the entire image data and interpolating and/or extrapolating the pixel-to-mm scale over the identified dental arch.
In some examples, the misalignment height may be further adjusted. Semantic segmentation may overestimate misaligned regions. In this case, the thickness deviation may be subtracted from the calculated misalignment height to simulate the material thickness of the orthodontic appliance. Thickness deviations may be obtained from a patient's treatment plan.
In some examples, the misalignment height may be tracked over time using image data over time. For example, the patient may capture image data at various points in time during the treatment phase. A misalignment trend may be identified from the tracked misalignment height. The misalignment trend may be defined as a general trend (e.g., increase, decrease, etc.), a height increment (e.g., a change in misalignment height at each point in time), or by an actual misalignment height value.
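Trend classification over tracked heights can be as simple as the following sketch, which labels a series of (time, height) measurements; the labels and rule are illustrative only.

```python
def misalignment_trend(history):
    """Classify the trend of tracked misalignment heights over time.

    history: list of (day, height_mm) tuples ordered by time.
    """
    if len(history) < 2:
        return "insufficient data"
    deltas = [b[1] - a[1] for a, b in zip(history, history[1:])]
    if all(d > 0 for d in deltas):
        return "increasing"
    if all(d < 0 for d in deltas):
        return "decreasing"
    return "stable/mixed"

print(misalignment_trend([(0, 0.3), (7, 0.5), (14, 0.8)]))  # increasing
```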
At step 2440, one or more of the systems described herein may determine whether the misalignment height meets a misalignment threshold. For example, the example system 200 in fig. 2, the system 600 in fig. 6, the system 1300 in fig. 13, or the system 1600 in fig. 16 may determine whether the misalignment height satisfies a misalignment threshold. The misalignment threshold may be predetermined or pre-calculated, such as based on patient history and/or other empirical data, or may be manually selected, such as by the patient's physician.
The system described herein may perform step 2440 in various ways. In one example, the misalignment threshold may include a plurality of misalignment thresholds. For example, a 0.5 mm spacing may be undesirable but may not necessarily require corrective action, and may therefore be set as a low threshold, whereas a 0.75 mm spacing may require corrective action and may therefore be set as a high threshold. In some examples, if the misalignment trend is tracked, the misalignment threshold may include a misalignment trend threshold. For example, corrective action may be required if the misalignment height remains at 0.75 mm across multiple points in time.
At step 2450, one or more of the systems described herein may provide a notification in response to the misalignment threshold being met. For example, if the misalignment threshold is met, the example system 200 of fig. 2, the system 600 of fig. 6, the system 1300 of fig. 13, or the system 1600 of fig. 16 may provide a notification.
The system described herein may perform step 2450 in various ways. In one example, the notification may include a message or other notification to the patient's doctor. In some examples, as in fig. 25C, the notification may include providing a visual overlay of the misalignment. In some examples, the color of the overlay may indicate the type of misalignment.
In some examples, if the misalignment threshold includes a plurality of misalignment thresholds, the notification may indicate an increasing priority based on which thresholds are satisfied. For each range between the plurality of thresholds, a different color may be used when the mask data is depicted. For example, if the misalignment height is below the low threshold, a low-priority color such as blue may be used. If it is between the low and high thresholds, a low-warning color such as yellow may be used. If the high threshold is exceeded, a high-warning color such as orange may be used.
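The threshold-to-color mapping might be sketched as follows, with the 0.5 mm and 0.75 mm values taken from the example above and the colors matching the low/warning/high scheme described.

```python
def overlay_color(height_mm, low_mm=0.5, high_mm=0.75):
    """Map a misalignment height to an overlay color reflecting priority."""
    if height_mm < low_mm:
        return "blue"    # below low threshold: informational only
    if height_mm < high_mm:
        return "yellow"  # between thresholds: low warning
    return "orange"      # high threshold met: corrective action may be needed

for h in (0.3, 0.6, 0.9):
    print(h, overlay_color(h))
```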
In some examples, the misalignment threshold may include a misalignment trend threshold. The notification may be provided in response to the misalignment trend threshold being met.
The virtual care system described herein may allow the patient's doctor to remotely monitor aspects of the patient's treatment progress. Such monitoring may allow early intervention when needed. For example, in response to the notification, the doctor may recommend certain actions or changes to the treatment, such as repeating a particular stage, using a chewable aid (e.g., a "chewie") to help seat the orthodontic appliance in place, restarting treatment, and so forth.
While the foregoing disclosure sets forth various embodiments using specific block diagrams, flowcharts, and examples, each block diagram component, flowchart step, operation and/or component described and/or illustrated herein may be implemented, individually and/or collectively, using a wide variety of hardware, software, or firmware (or any combination thereof) configurations. In addition, any disclosure of components contained within other components should be considered as exemplary in nature, as many other architectures can be implemented to achieve the same functionality.
In some examples, all or a portion of the example system 100 in fig. 1A may represent a portion of a cloud computing or network-based environment. Cloud computing environments may provide various services and applications via the internet. These cloud-based services (e.g., software as a service, platform as a service, infrastructure as a service, etc.) may be accessed through a web browser or other remote interface. The various functions described herein may be provided through a remote desktop environment or any other cloud-based computing environment.
In various embodiments, all or a portion of the exemplary system 100 in fig. 1A may facilitate multi-tenancy within a cloud-based computing environment. In other words, software modules described herein may configure a computing system (e.g., a server) to facilitate multi-tenancy of one or more of the functions described herein. For example, one or more of the software modules described herein may program a server to enable two or more clients (e.g., customers) to share applications running on the server. A server programmed in this manner may share applications, operating systems, processing systems, and/or storage systems among multiple clients (i.e., tenants). One or more of the modules described herein may also divide the data and/or configuration information of the multi-tenant application for each customer such that one customer cannot access the data and/or configuration information of another customer.
According to various embodiments, all or a portion of the example system 100 in FIG. 1A may be implemented within a virtual environment. For example, the modules and/or data described herein may reside and/or execute within a virtual machine. As used herein, the term "virtual machine" generally refers to any operating system environment that is abstracted from computing hardware by a virtual machine manager (e.g., hypervisor). Additionally or alternatively, the modules and/or data described herein may reside and/or execute within a virtualization layer. As used herein, the term "virtualization layer" generally refers to any data layer and/or application layer that overlays and/or abstracts from an operating system environment. The virtualization layer may be managed by a software virtualization solution (e.g., a file system filter) that presents the virtualization layer as if it were part of the underlying base operating system. For example, the software virtualization solution may redirect calls that were originally directed to locations within the base file system and/or registry to locations within the virtualization layer.
In some examples, all or a portion of the example system 100 in fig. 1A may represent a portion of a mobile computing environment. The mobile computing environment may be implemented by a wide variety of mobile computing devices including mobile phones, tablet computers, electronic book readers, personal digital assistants, wearable computing devices (e.g., computing devices with head mounted displays, smart watches, etc.), and the like. In some examples, the mobile computing environment may have one or more different features including, for example, a limited platform that relies on battery power, presents only one foreground application at any given time, remote management features, touch screen features, location and movement data (e.g., provided by a global positioning system, gyroscope, accelerometer, etc.), limits modifications to system level configuration, and/or limits the ability of third party software to check the behavior of other applications, controls that limit the installation of applications (e.g., originating only from approved application stores). The various functions described herein may be provided for and/or may interact with a mobile computing environment.
Additionally, all or a portion of the example system 100 in FIG. 1A may represent portions of, interact with, consume, and/or generate data generated by one or more systems for information management. As used herein, the term "information management" may refer to the protection, organization, and/or storage of data. Examples of systems for information management may include, but are not limited to, storage systems, backup systems, archiving systems, replication systems, high availability systems, data search systems, virtualization systems, and the like.
In some embodiments, all or a portion of the example system 100 in fig. 1A may represent portions of, generate data protected by, and/or communicate with one or more systems for information security. As used herein, the term "information security" may refer to controlling access to protected data. Examples of systems for information security may include, but are not limited to, systems that provide managed security services, data loss prevention systems, identity authentication systems, access control systems, encryption systems, policy compliance systems, intrusion detection and prevention systems, electronic discovery systems, and the like.
The process parameters and sequence of steps described and/or illustrated herein are given by way of example only and may be varied as desired. For example, although the steps illustrated and/or described herein may be shown or discussed in a particular order, the steps need not be performed in the order illustrated or discussed. Various example methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein or include additional steps in addition to those disclosed.
Although various embodiments have been described and/or illustrated herein in the context of a fully functional computing system, one or more of these example embodiments may be distributed as a program product in a variety of forms, regardless of the particular type of computer-readable media used to actually carry out the distribution. Embodiments disclosed herein may also be implemented using software modules that perform certain tasks. These software modules may include script, batch, or other executable files that may be stored on a computer-readable storage medium or in a computing system. In some embodiments, these software modules may configure a computing system to perform one or more of the example embodiments disclosed herein.
As described herein, the computing devices and systems described and/or illustrated herein broadly represent any type or form of computing device or system capable of executing computer-readable instructions, such as those contained within the modules described herein. In its most basic configuration, the computing device(s) may each include at least one memory device and at least one physical processor.
The term "memory" or "memory device" as used herein generally refers to any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. In one example, a memory device may store, load, and/or maintain one or more of the modules described herein. Examples of memory devices include, but are not limited to, random Access Memory (RAM), read Only Memory (ROM), flash memory, hard Disk Drive (HDD), solid State Drive (SSD), optical disk drive, cache memory, variations or combinations of one or more of the above, or any other suitable storage memory.
In addition, as used herein, the term "processor" or "physical processor" generally refers to any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions. In one example, a physical processor may access and/or modify one or more modules stored in the memory device described above. Examples of physical processors include, but are not limited to, microprocessors, microcontrollers, central Processing Units (CPUs), field Programmable Gate Arrays (FPGAs) implementing soft-core processors, application Specific Integrated Circuits (ASICs), portions of one or more of the above, variations or combinations of one or more of the above, or any other suitable physical processor.
Although depicted as separate elements, the method steps described and/or illustrated herein may represent portions of a single application. Additionally, in some embodiments, one or more of these steps may represent or correspond to one or more software applications or programs, which when executed by a computing device, may cause the computing device to perform one or more tasks, such as method steps.
Furthermore, one or more of the devices described herein may convert data, physical devices, and/or representations of physical devices from one form to another. Additionally or alternatively, one or more of the modules detailed herein may transform a processor, volatile memory, non-volatile memory, and/or any other portion of a physical computing device from one form of computing device to another by executing on, storing data on, and/or otherwise interacting with the computing device.
The term "computer-readable medium" as used herein generally refers to any form of device, carrier, or medium capable of storing or carrying computer-readable instructions. Examples of computer readable media include, but are not limited to, transmission media such as carrier waves and non-transitory media such as magnetic storage media (e.g., hard disk drives, tape drives, and floppy disks), optical storage media (e.g., compact Discs (CDs), digital Video Discs (DVDs), and blu-ray discs), electronic storage media (e.g., solid state drives and flash memory media), and other distribution systems.
Those of ordinary skill in the art will recognize that any of the processes or methods disclosed herein may be modified in many ways. The process parameters and sequence of steps described and/or illustrated herein are given by way of example only and may be varied as desired. For example, although the steps illustrated and/or described herein may be shown or discussed in a particular order, the steps need not be performed in the order illustrated or discussed.
Various exemplary methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein or include additional steps in addition to those disclosed. Furthermore, the steps of any method disclosed herein may be combined with any one or more steps of any other method disclosed herein.
The processor as described herein may be configured to perform one or more steps of any of the methods disclosed herein. Alternatively or in combination, the processor may be configured to combine one or more steps of one or more methods as disclosed herein.
The terms "connected" and "coupled" as used in the specification and claims (and derivatives thereof) should be interpreted as allowing a direct and indirect (i.e., via other elements or components) connection unless otherwise indicated. In addition, the terms "a" or "an" as used in the specification and claims should be construed to mean "at least one". Finally, for convenience in use, the terms "comprising" and "having" (and their derivatives) as used in the specification and claims are interchangeable with, and should have the same meaning as, the term "comprising".
The processor as disclosed herein may be configured with instructions to perform any one or more steps of any of the methods disclosed herein.
It will be appreciated that although the terms "first," "second," "third," etc. may be used herein to describe various layers, elements, components, regions or sections, these should not be interpreted as referring to any particular order or sequence of events. These terms are only used to distinguish one layer, element, component, region or section from another layer, element, component, region or section. A first layer, element, component, region or section discussed herein could be termed a second layer, element, component, region or section without departing from the teachings of the present disclosure.
As used herein, the term "or" is used inclusively to refer to items in the substitutions and combinations.
As used herein, characters such as numerals refer to the same elements.
The present disclosure includes the following numbered clauses.
Clause 1. A method for dental treatment comprising: receiving one or more photographic parameters to define clinically acceptable criteria for a plurality of clinically relevant photographs of a person's dentition, the clinically acceptable criteria including at least a plurality of clinically acceptable positions and a plurality of clinically acceptable orientations of the teeth relative to the camera; collecting a plurality of image capture rules to capture the plurality of clinically relevant photographs, the plurality of image capture rules based on the one or more photograph parameters; providing first automated instructions to capture a plurality of clinically relevant photographs of the person's dentition using the plurality of image capture rules; and capturing, by the camera, the plurality of clinically relevant photographs in response to the first automated instructions.
Clause 2. The method of clause 1, wherein capturing the plurality of clinically relevant photos comprises receiving an instruction from a person to capture the plurality of clinically relevant photos in response to the first automated instruction.
Clause 3 the method of clause 1, wherein capturing the plurality of clinically relevant photos comprises processing a second automated instruction for the camera to capture the plurality of clinically relevant photos in response to the first automated instruction.
Clause 4. The method of clause 1, wherein the first automated instructions comprise instructions to modify the distance of the camera relative to the person's teeth.
Clause 5. The method of clause 1, wherein the first automated instructions comprise instructions to modify the orientation of the camera relative to the person's teeth.
Clause 6. The method of clause 1, wherein the first automated instructions comprise instructions to capture a front view, a left buccal view, a right buccal view of the person's teeth, or some combination of the front view, the left buccal view, and the right buccal view of the person's teeth.
Clause 7. The method of clause 1, wherein the first automated instructions comprise instructions to adapt to the bite state of the patient's teeth.
Clause 8 the method of clause 1, wherein the first automated instructions comprise instructions to adapt to a dental appliance on the patient's teeth.
Clause 9. The method of clause 1, wherein the first automated instructions comprise instructions to adapt to a dental appliance on the patient's teeth, wherein the dental appliance comprises a cheek retractor.
Clause 10. The method of clause 1, wherein the first automated instructions comprise instructions to adapt to a dental appliance on the patient's teeth, wherein the dental appliance comprises one or more aligners.
Clause 11 the method of clause 1, wherein the first automated instructions comprise instructions to modify one or more photo settings on the camera.
Clause 12. The method of clause 1, further comprising displaying clinically relevant guidance to the person based on the first automated instructions.
Clause 13. The method of clause 1, further comprising displaying clinically relevant guidance to the person based on the first automated instructions, wherein the clinically relevant guidance includes an overlay over a representation of the person's teeth.
Clause 14. The method of clause 1, further comprising displaying clinically relevant guidance to the person based on the first automated instructions, wherein the clinically relevant guidance includes text instructions to modify the position of the camera, the orientation of the camera, or some combination of the position and orientation of the camera.
Clause 15 the method of clause 1, further comprising deriving the plurality of image capture rules using the one or more photograph parameters.
Clause 16. The method of clause 1, further comprising: training a neural network using the one or more photo parameters on a training dataset of training images to identify the image capture rules; and storing the image capture rules in a data store.
Clause 17 the method of clause 1, further comprising using the plurality of clinically relevant photos to perform virtual dental care on the person.
Clause 18 the method of clause 1, further comprising collecting the one or more photo parameters.
Clause 19 the method of clause 1, wherein the method is performed on a mobile device.
Clause 20 the method of clause 1, wherein the plurality of clinically relevant photos do not include a three-dimensional (3D) grid of the patient's teeth.
Clause 21 the method of clause 1, wherein the plurality of clinically relevant photos do not include height map data.
Clause 22. A system comprising: one or more processors; and a memory storing computer program instructions that, when executed by the one or more processors, cause the system to perform a computer-implemented method comprising: receiving one or more photographic parameters to define clinically acceptable criteria for a plurality of clinically relevant photographs of a person's dentition, the clinically acceptable criteria including at least a plurality of clinically acceptable positions and a plurality of clinically acceptable orientations of the teeth relative to the camera; collecting a plurality of image capture rules to capture the plurality of clinically relevant photographs, the plurality of image capture rules based on the one or more photograph parameters; providing first automated instructions to capture a plurality of clinically relevant photographs of the person's dentition using the plurality of image capture rules; and capturing, by the camera, the plurality of clinically relevant photographs in response to the first automated instructions.
Clause 23 the system of clause 22, wherein the system comprises a mobile device.
The system of clause 22, wherein the memory stores computer program instructions that, when executed by the one or more processors, further cause the system to perform the computer-implemented method comprising any of clauses 1-21.
Clause 25. A non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, enable the computing device to perform the method of any one of clauses 1-21.
Clause 26. A method for dental treatment, comprising: receiving a photograph of a dentition of a person; identifying a stage of a treatment plan to be administered to the dentition of the person; collecting a three-dimensional (3D) model of the person's dentition corresponding to the stage of the treatment plan; projecting properties of the 3D model of the person's dentition on an image plane to obtain a projected representation of the person's dentition at the stage of the treatment plan; comparing the photograph with the projected representation to obtain an error image representative of the comparison; and analyzing the error image to obtain a difference, wherein the difference represents one or more deviations of the dentition of the person from the stage of the treatment plan.
Clause 27 the method of clause 26, wherein the deviation comprises one or more pixel differences for each tooth of the person's dentition.
Clause 28 the method of clause 26, wherein: the deviation includes one or more pixel differences for each tooth of the person's dentition.
Clause 29. The method of clause 27, further comprising: approximating a true distance between one or more of the differences based on the one or more pixel differences for each tooth.
Clause 30 the method of clause 26, further comprising displaying a digital assessment corresponding to one or more of the deviations.
Clause 31. The method of clause 26, wherein the method further comprises: displaying a digital assessment corresponding to one or more of the deviations; receiving one or more instructions that interact with the digital assessment; and modifying a display of the digital assessment using the instructions that interact with the digital assessment.
Clause 32 the method of clause 26, wherein: the method further includes displaying one or more annotated photographs of the person's dentition; and the one or more annotated photographs include a plurality of overlays showing the one or more of the deviations in the photographs relative to the dentition of a person.
Clause 33 the method of clause 26, wherein: the method further includes displaying one or more annotated photographs of the person's dentition; the one or more annotated photographs include a plurality of overlays showing the one or more of the deviations in the photographs relative to the dentition of a person; the method further includes receiving one or more instructions to interact with the one or more annotated photos; and the method further comprises modifying a display of the one or more annotated photos in response to the one or more instructions interacting with the one or more annotated photos.
Clause 34. The method of clause 26, wherein the method further comprises: displaying a 3D model of the dentition of the person; and simultaneously with the 3D model, displaying a digital assessment corresponding to one or more of the deviations on the photograph of the person's dentition.
Clause 35. The method of clause 26, wherein the method further comprises: displaying a 3D model of the dentition of the person; and displaying one or more annotated photographs of the person's dentition concurrently with the 3D model, wherein the one or more annotated photographs comprise a plurality of overlays showing the one or more of the deviations in the photograph relative to the dentition of the person.
Clause 36. The method of clause 26, wherein the method further comprises: displaying a 3D model of the dentition of the person; simultaneously with the 3D model, displaying one or more annotated photographs of the person's dentition, wherein the one or more annotated photographs comprise a plurality of overlays showing the one or more of the deviations in the photograph relative to the dentition of the person; receiving instructions to interact with the 3D model or the one or more annotated photographs of the person's dentition; and modifying the display of the 3D model and the one or more annotated photographs while locking the 3D model and the one or more annotated photographs into a common orientation.
Clause 37. The method of clause 26, wherein the method further comprises: displaying one or more annotated photographs of the person's dentition; and displaying one or more phasing elements configured to represent phases of the person's dentition over the course of the treatment plan.
Clause 38 the method of clause 26, wherein: the method further includes displaying one or more annotated photographs of the dentition of the person; the method further includes displaying one or more phasing elements configured to represent phases of a person's dentition over the course of the treatment plan; and the one or more phasing elements show each jaw of the person's dentition relative to the phase of the treatment plan.
Clause 39. The method of clause 26, wherein the method further comprises: displaying a digital assessment corresponding to one or more of the deviations; providing one or more digital diagnostic tools associated with the digital assessment; and receiving a diagnosis from the doctor using the one or more digital diagnostic tools.
Clause 40. The method of clause 26, further comprising: training a neural network to discern machine-learned differences between a first training set of projected representations of 3D models and a second training set of training images; and comparing the photograph to the projected representation of the 3D model using the machine-learned differences.
Clause 41 the method of clause 26, further comprising: providing intelligent guidance to a person to take a photograph of the person's dentition; the photo is captured using the intelligent guidance.
Clause 42 the method of clause 26, wherein the photograph does not include a three-dimensional (3D) grid of the patient's teeth.
Clause 43 the method of clause 26, wherein the photograph does not include height map data.
Clause 44 the method of clause 26, further comprising using the difference to modify the treatment plan.
Clause 45 the method of clause 26, wherein the photograph comprises a plurality of photographs, each photograph of the plurality of photographs having a different orientation relative to the dentition of the patient.
Clause 46. The method of clause 26, wherein the photograph comprises a plurality of photographs, a first photograph of the plurality of photographs having a front orientation, a second photograph of the plurality of photographs having a right buccal orientation, and a third photograph of the plurality of photographs having a left buccal orientation.
Clause 47. A system for dental treatment, comprising: one or more processors; and a memory storing computer program instructions that, when executed by the one or more processors, cause the system to perform a computer-implemented method comprising: receiving a photograph of a dentition of a person; identifying a stage of a treatment plan to be administered to the dentition of the person; collecting a three-dimensional (3D) model of the person's dentition corresponding to the stage of the treatment plan; projecting properties of the 3D model of the person's dentition on an image plane to obtain a projected representation of the person's dentition at the stage of the treatment plan; comparing the photograph with the projected representation to obtain an error image representative of the comparison; and analyzing the error image to obtain a difference, wherein the difference represents one or more deviations of the dentition of the person from the stage of the treatment plan.
The system of clause 48, wherein the memory stores computer program instructions that, when executed by the one or more processors, further cause the system to perform the computer-implemented method comprising any of clauses 26-46.
Clause 49 a non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, enable the computing device to perform the method of any of clauses 26-46.
Clause 50. A method for dental treatment, comprising: receiving a photograph of a dentition of a patient; collecting treatment parameters for a treatment plan, the treatment parameters representing attributes of the treatment plan selected by a physician for the patient's dentition; generating one or more intelligent guidance rules using the photograph and the treatment parameters to guide the application of at least a portion of the treatment parameters to the patient's dentition; generating instructions to apply the intelligent patient guidance rules to the patient's dentition; and providing the instructions to the patient to apply at least a portion of the treatment plan according to the intelligent patient guidance rules.
Clause 51 the method of clause 50, wherein the treatment parameters comprise doctor preference parameters specifying a treatment regimen previously prescribed by the doctor for the clinical condition.
Clause 52. The method of clause 50, wherein the treatment parameters comprise per-patient parameters specifying a treatment protocol for the patient.
Clause 53. The method of clause 50, wherein: the method further comprises managing a physician guidance template having the treatment parameters; and the one or more intelligent guidance rules are generated using the physician guidance template.
Clause 54. The method of clause 50, wherein: the method further comprises managing a physician guidance template having the treatment parameters; the physician guidance template includes a template configured to track the delivery of the treatment plan; and the one or more intelligent guidance rules are generated using the physician guidance template.
Clause 55. The method of clause 50, further comprising: managing a physician guidance template having the treatment parameters; receiving a modification instruction from a doctor to modify the physician guidance template; and modifying the physician guidance template based on the modification instruction.
Clause 56 the method of clause 50, further comprising configuring a display of a computing system to direct the patient to apply at least a portion of the treatment plan according to the intelligent patient guidance rules.
Clause 57 the method of clause 50, further comprising configuring a display of a computing system to display one or more interactive elements that direct the patient to apply at least a portion of the treatment plan according to the intelligent patient guidance rules.
Clause 58 the method of clause 50, wherein: the method further comprises the steps of: configuring a display of a computing system to instruct the patient to apply at least a portion of the treatment plan according to the intelligent patient guidance rules; and the computing system is a dental consumer/patient system associated with a patient.
Clause 59 the method of clause 50, wherein: the method further includes configuring a display of a computing system to direct the patient to apply at least a portion of the treatment plan according to the intelligent patient guidance rules; and the computing system is a dental professional system associated with a doctor.
Clause 60 the method of clause 50, further comprising instructing the patient to take corrective action, the corrective action comprising a modification to the treatment plan.
Clause 61 the method of clause 50, further comprising instructing the patient to manage a dental appliance for delivering the treatment plan.
Clause 62. The method of clause 50, further comprising notifying a doctor to instruct the patient to take corrective action, including modification of the treatment plan.
Clause 63. The method of clause 50, wherein the instructions to apply at least a portion of the treatment plan comprise one or more of: instructions to replace a dental appliance, instructions to wear a dental appliance for a period of time exceeding that prescribed by the treatment plan, instructions to use a supplemental dental appliance at a particular time or location, instructions to set an appointment for a specified tooth condition, instructions to notify a doctor about one or more tooth conditions, and instructions to notify a doctor about a specified region of the patient's dentition for a particular aspect of the treatment plan.
Clause 64. The method of clause 50, wherein the treatment plan prescribes aligners to move the teeth of the patient from an initial arrangement toward a target arrangement using a plurality of successive tooth repositioning aligners.
Clause 65 the method of clause 50, further comprising: providing intelligent photo guidance instructions to a patient to capture a photograph of the patient's dentition, wherein the photograph includes a clinically relevant photograph of the patient's dentition; and taking a picture of the patient's dentition according to the intelligent picture guidance instructions.
Clause 66. The method of clause 50, further comprising taking a photograph of the patient's dentition at a mobile device associated with the patient.
Clause 67. The method of clause 50, wherein the photograph does not include a three-dimensional (3D) grid of the patient's teeth.
Clause 68 the method of clause 50, wherein the photograph does not include height map data.
Clause 69. The method of clause 50, wherein the photograph comprises a plurality of photographs, each photograph of the plurality of photographs having a different orientation relative to the dentition of the patient.
Clause 70. The method of clause 50, wherein the photograph comprises a plurality of photographs, a first photograph of the plurality of photographs having a front orientation, a second photograph of the plurality of photographs having a right buccal orientation, and a third photograph of the plurality of photographs having a left buccal orientation.
Clause 71 the method of clause 50, further comprising: training a neural network using a set of training photographs and a set of training treatment parameters to discern the one or more intelligent guidance rules; and storing the one or more intelligent guidance rules in a data store.
Clause 72. A system for dental treatment, comprising: one or more processors; a memory storing computer program instructions that, when executed by the one or more processors, cause the system to perform a computer-implemented method comprising: receiving a photograph of a dentition of a patient; collecting treatment parameters for a treatment plan, the treatment parameters representing attributes of the treatment plan selected by a physician for the patient's dentition; generating one or more intelligent guidance rules using the photograph and the treatment parameters to guide the application of at least a portion of the treatment parameters to the patient's dentition; generating instructions to apply the intelligent patient guidance rules to the patient's dentition; and providing the instructions to the patient to apply at least a portion of the treatment plan according to the intelligent patient guidance rules.
Clause 73 the system of clause 72, wherein the memory stores computer program instructions that, when executed by the one or more processors, further cause the system to perform the computer-implemented method comprising any of clauses 50 to 71.
Clause 74 a non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, enable the computing device to perform any of the methods of any of clauses 50-71.
Clause 75. A method for dental treatment, comprising: receiving a photograph capturing a state of a patient's dentition at a specified time; retrieving a first treatment plan to treat the dentition of the patient; identifying an expected arrangement of the first treatment plan at the specified time; estimating photo parameters of the photograph to generate alignment data to align the expected arrangement of the first treatment plan with the photograph; generating an alignment grid using the alignment data, the alignment grid comprising a three-dimensional (3D) grid representation of the patient's dentition at the specified time; performing an estimation of the first treatment plan for modification using the alignment grid; and identifying a proposed modification to the first treatment plan based on the estimation.
Clause 76 the method of clause 75, wherein the photo parameters include one or more of camera parameters, position parameters, and orientation parameters.
Clause 77 the method of clause 75, wherein estimating the photo parameters comprises optimizing the photo parameters using a trained neural network.
Clause 78 the method of clause 75, wherein estimating the photo parameters comprises implementing a differential renderer for the photo parameters.
Clause 79. The method of clause 75, wherein estimating the photo parameters comprises performing expectation maximization on the photo parameters.
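By way of a non-authoritative illustration only (an editor's sketch, not the claimed implementation), the photo-parameter estimation of clauses 76-79 can be pictured as adjusting camera pose and focal length until the projected treatment-plan model lines up with tooth landmarks detected in the photo. The function names, the pinhole projection, and the crude finite-difference optimizer below are assumptions standing in for the trained network, differentiable renderer, or expectation-maximization approaches named above.

```python
# Hypothetical sketch: estimate photo parameters (rotation, translation, focal length)
# so that projected 3D treatment-plan points align with 2D landmarks from the photo.
import numpy as np

def rotation_matrix(rx, ry, rz):
    """Compose rotations about the x, y, and z axes (radians)."""
    cx, sx, cy, sy, cz, sz = np.cos(rx), np.sin(rx), np.cos(ry), np.sin(ry), np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(points_3d, params):
    """Pinhole projection of Nx3 mesh points using the photo parameters."""
    rx, ry, rz, tx, ty, tz, f = params
    cam = points_3d @ rotation_matrix(rx, ry, rz).T + np.array([tx, ty, tz])
    return f * cam[:, :2] / cam[:, 2:3]          # Nx2 pixel coordinates

def alignment_error(params, points_3d, observed_2d):
    """Mean squared distance between projected mesh points and photo landmarks."""
    return np.mean((project(points_3d, params) - observed_2d) ** 2)

def estimate_photo_parameters(points_3d, observed_2d, params, steps=500, lr=1e-4, eps=1e-5):
    """Finite-difference descent on the alignment error; the step sizes are purely
    illustrative and would need per-parameter scaling in practice."""
    params = np.asarray(params, dtype=float)
    for _ in range(steps):
        grad = np.zeros_like(params)
        for i in range(len(params)):
            bumped = params.copy()
            bumped[i] += eps
            grad[i] = (alignment_error(bumped, points_3d, observed_2d)
                       - alignment_error(params, points_3d, observed_2d)) / eps
        params -= lr * grad
    return params
```

The optimized parameters play the role of the alignment data used in clause 75 to place the expected arrangement over the photo.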
Clause 80 the method of clause 75, wherein estimating the first treatment plan comprises determining the location of the portion of the patient's dentition that has deviated from the expected location in the first treatment plan.
Clause 81. The method of clause 75, further comprising displaying one or more annotations representing the proposed modification to a physician.
Clause 82 the method of clause 75, further comprising displaying one or more overlays representing the proposed modification to the doctor.
Clause 83. The method of clause 75, wherein: the method further includes receiving a request from a physician for the first treatment plan; the first treatment plan is retrieved in response to the request for the first treatment plan.
Clause 84 the method of clause 75, further comprising: providing the proposed modification to a physician; and facilitating review of the proposed modifications by the physician.
Clause 85. The method of clause 75, further comprising: receiving a reviewed modification from a physician based on the proposed modification to the first treatment plan; and refining one or more steps of the first treatment plan using the reviewed modification.
Clause 86. The method of clause 75, further comprising: receiving a reviewed modification from a physician based on the proposed modification to the first treatment plan; refining one or more steps of the first treatment plan using the reviewed modification; and sending a refined treatment plan to the patient or the physician, the refined treatment plan including the one or more refined steps.
Clause 87. The method of clause 75, further comprising: receiving a reviewed modification from a physician based on the proposed modification to the first treatment plan; and refining one or more steps of the first treatment plan using the reviewed modification, wherein the one or more steps include placement of attachments, staging of teeth, or timing of an interproximal reduction procedure.
Clause 88 the method of clause 75, further comprising taking a photograph of the patient's dentition at a mobile device associated with the patient.
Clause 89 the method of clause 75, wherein the photograph does not include a three-dimensional (3D) grid of the patient's teeth.
Clause 90 the method of clause 75, wherein the photograph does not include height map data.
Clause 91 the method of clause 75, further comprising: training a neural network using a set of training photographs and a set of training treatment parameters to discern the one or more intelligent guidance rules; and storing the one or more intelligent guidance rules in a data store.
Clause 92. A system comprising: one or more processors; a memory storing computer program instructions that, when executed by the one or more processors, cause the system to perform a computer-implemented method comprising: receiving a photograph of a state of a patient's dentition captured at a specified time; retrieving a first treatment plan to treat the dentition of the patient; identifying an expected arrangement of the first treatment plan at the specified time; estimating photo parameters of the photo to generate alignment data to align an expected arrangement of the first treatment plan with the photo; generating an alignment grid using the alignment data, the alignment grid comprising a three-dimensional (3D) grid representation of the patient's dentition at the specified time; estimating the first treatment plan for modification using the alignment grid; and identifying a proposed modification to the first treatment plan based on the estimation.
Clause 93 the system of clause 92, wherein the memory stores computer program instructions that, when executed by the one or more processors, further cause the system to perform the computer-implemented method comprising any of clauses 75 to 91.
Clause 94. A non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, enable the computing device to perform the method of any of clauses 75-91.
Clause 95. A method for treatment-based photo guidance, the method comprising: receiving a treatment plan; determining movement of a first one or more teeth based on movement of a tooth model in the treatment plan; and determining one or more photographic views for capturing the movement of the first one or more teeth.
Clause 96 the method of clause 95, wherein the treatment plan includes a model of the position of the patient's teeth.
Clause 97. The method of clause 95, wherein determining the movement of the first one or more teeth comprises determining a difference between a position of a first tooth in a first phase of the treatment plan and a position of the first tooth in a second phase of the treatment plan.
Clause 98. The method of clause 97, wherein determining the movement comprises determining that a vertex moves between the first phase and the second phase.
Clause 99 the method of clause 97, further comprising determining a motion vector based on the difference in position.
Clause 100 the method of clause 99, wherein determining the photo view comprises determining a position and orientation orthogonal to the motion vector and comprising a field of view of the first tooth.
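For illustration only (not the patented code), clauses 97-100 can be read as the following sketch: the motion vector is the difference of a tooth's centroid between two treatment stages, and a photo view is placed orthogonal to that vector so the movement is seen edge-on with the tooth in the field of view. The helper names, stand-off distance, and up-vector convention are assumptions.

```python
# Hypothetical sketch: derive a per-tooth motion vector and an orthogonal camera view.
import numpy as np

def motion_vector(stage1_vertices, stage2_vertices):
    """Difference of tooth centroids between two treatment-plan stages (clause 99)."""
    return stage2_vertices.mean(axis=0) - stage1_vertices.mean(axis=0)

def orthogonal_view(stage2_vertices, motion, distance=60.0, up=np.array([0.0, 0.0, 1.0])):
    """Camera position and look-at direction orthogonal to the motion vector while
    keeping the tooth centroid in the field of view (clause 100)."""
    target = stage2_vertices.mean(axis=0)              # look at the tooth centroid
    direction = np.cross(motion, up)                   # orthogonal to the movement
    if np.linalg.norm(direction) < 1e-9:               # motion parallel to `up`
        direction = np.cross(motion, np.array([0.0, 1.0, 0.0]))
    direction = direction / np.linalg.norm(direction)
    position = target + distance * direction           # stand off to one side
    look_at = (target - position) / np.linalg.norm(target - position)
    return position, look_at
```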
Clause 101. The method of clause 95, wherein determining the photo view comprises selecting the photo view from one or more predetermined photo views.
Clause 102. The method of clause 101, wherein the predetermined photographic view includes a position and orientation for capturing the buccal and occlusal surfaces of each tooth.
Clause 103 the method of clause 102, further comprising merging the one or more photo views.
Clause 104. The method of clause 103, wherein merging the one or more photo views comprises determining that at least two of the one or more photo views belong to adjacent teeth, and removing one of the at least two photo views from the determined photo view.
Clause 105 the method of clause 95, further comprising receiving an image stream from the camera; and comparing images in the image stream to the one or more photo views.
Clause 106 the method of clause 105, further comprising determining that the image in the image stream does not match the one or more photo views; and providing guidance to move the camera so that the image more closely matches the one or more photo views.
Clause 107. The method of clause 105, further comprising: determining that the image in the image stream matches the one or more photo views; and providing a guide to capture the image.
Clause 108 the method of clause 107, wherein determining that the image in the image stream matches the one or more photo views comprises determining that a field of view of the image comprises at least one set of teeth in a field of view of the one or more photo views.
Clause 109. The method of clause 107, wherein determining that the image in the image stream matches the one or more photo views comprises determining that the image is positioned and oriented within 10° of the orientation of the one or more photo views.
Clause 110 the method of clause 105, further comprising determining that the image in the image stream matches the one or more photo views; and capturing the image based on determining that the image in the image stream substantially matches the one or more photo views.
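As a hedged sketch of the stream-matching check in clauses 105-110 (all names, the tooth identifiers, and the 10° tolerance as a default are illustrative assumptions): the live camera orientation is compared with the target photo view, and capture is triggered only when the orientation is within tolerance and the required teeth are in the field of view.

```python
# Hypothetical sketch: decide whether a live frame matches a target photo view.
import numpy as np

def angle_between(a, b):
    """Angle in degrees between two look-at (orientation) vectors."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def matches_photo_view(frame_orientation, view_orientation, required_teeth, visible_teeth,
                       tolerance_deg=10.0):
    """True when the frame is oriented within tolerance of the target view and the
    teeth required by that view are visible (clauses 108-109)."""
    return (angle_between(frame_orientation, view_orientation) <= tolerance_deg
            and required_teeth.issubset(visible_teeth))

# Example flow: guide the user until the view matches, then capture (clauses 106-107).
frame = np.array([0.0, 0.97, 0.26])
target = np.array([0.0, 1.0, 0.0])
if matches_photo_view(frame, target, {"UR1", "UL1"}, {"UR1", "UL1", "UR2"}):
    print("capture image")
else:
    print("guidance: rotate the camera toward the target view")
```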
Clause 111 the method of clause 95, wherein determining the movement of the first one or more teeth comprises determining that the first one or more teeth are moving during the treatment phase.
Clause 112 the method of clause 111, wherein the photo view comprises one or more cheek views and one or more bite views, the one or more cheek views and the one or more bite views comprising the first one or more teeth that are moving during the treatment phase.
Clause 113 the method of clause 112, wherein the one or more cheek views comprise a cheek view centered along the midline of the patient and one or more cheek views offset from the midline of the patient by 15 °, 30 °, 45 °, 60 °, 75 °, or 90 °.
Clause 114. A system comprising: a camera; at least one physical processor; and a physical memory comprising computer-executable instructions that, when executed by the physical processor, enable the physical processor to perform the method according to any one of clauses 95-113.
Clause 115. A non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, enable the computing device to perform the method of any of clauses 95-113.
Clause 116 a method for photo-based orthodontic treatment, the method comprising: receiving image data of a patient's dentition and orthodontic appliances; identifying the orthodontic appliance from the image data; calculating a misalignment height of the orthodontic appliance relative to a dentition of the patient; determining whether the misalignment height meets a misalignment threshold; and providing a notification in response to the misalignment threshold being met.
Clause 117 the method of clause 116, wherein identifying the orthodontic appliance further comprises: semantic segmentation is performed to classify each pixel of the image data into one of a plurality of categories.
The method of clause 117, further comprising training a neural network to perform the semantic segmentation by: inputting an image dataset for semantic segmentation by the neural network; calculating an error between an output of the neural network and a mask dataset corresponding to the image dataset; and adjusting parameters of the neural network to reduce the error.
Clause 119 the method of clause 117, wherein performing the semantic segmentation further comprises, for each pixel: determining a probability that the pixel matches each of the plurality of classes; and classifying the pixel into one of the plurality of categories based on the corresponding highest probability value.
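A minimal, non-authoritative sketch of clauses 117-119 follows (the toy network, class list, and tensor shapes are assumptions, not the model actually used): a small segmentation network is trained against mask labels, and each pixel is then assigned to the class with the highest predicted probability.

```python
# Hypothetical sketch: per-pixel semantic segmentation trained against mask labels.
import torch
import torch.nn as nn

NUM_CLASSES = 4  # e.g. background, tooth, gap, spacing (see clause 120)

model = nn.Sequential(                      # toy fully convolutional segmenter
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, NUM_CLASSES, 1),          # per-pixel class logits
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()             # error between output and the mask dataset

def train_step(images, masks):
    """images: (B,3,H,W) photos; masks: (B,H,W) integer class labels (clause 118)."""
    logits = model(images)
    loss = loss_fn(logits, masks)           # calculate error vs. the mask dataset
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                        # adjust parameters to reduce the error
    return loss.item()

def segment(image):
    """Classify each pixel by its highest-probability class (clause 119)."""
    with torch.no_grad():
        probs = torch.softmax(model(image.unsqueeze(0)), dim=1)
    return probs.argmax(dim=1).squeeze(0)   # (H,W) class index per pixel

# Illustrative call with random tensors, shown only to make the shapes concrete.
loss = train_step(torch.rand(2, 3, 64, 64), torch.randint(0, NUM_CLASSES, (2, 64, 64)))
labels = segment(torch.rand(3, 64, 64))
```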
Clause 120. The method of clause 117, wherein the plurality of categories include a tooth category indicative of a tooth of the patient, a gap category indicative of a gap between the orthodontic appliance and a corresponding gingival margin, and a spacing category indicative of a spacing between an incisal edge of the orthodontic appliance and an incisal edge of a corresponding tooth, and wherein the gap category and the spacing category correspond to the misalignment.
Clause 121 the method of clause 116, wherein the misalignment height is calculated from the misaligned pixel heights.
Clause 122. The method of clause 121, wherein calculating the misalignment height further comprises: determining a pixel height of an incisor identified in the image data; obtaining an incisor measurement in a standard measurement unit; determining a conversion factor between pixels and the standard measurement unit using the pixel height of the incisor and the incisor measurement; and converting the misalignment height from pixels to the standard measurement unit using the conversion factor.
Clause 123 the method of clause 122, wherein the incisor measurements are obtained from a treatment plan of the patient.
Clause 124 the method of clause 122, wherein the standard measurement units are millimeters.
Clause 125. The method of clause 116, wherein the misalignment height is calculated by aggregating a plurality of identified misalignments.
Clause 126 the method of clause 125, wherein calculating the misalignment height further comprises determining an 80 th percentile of the plurality of identified misalignments.
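For illustration only, the arithmetic described in clauses 121-127 can be sketched as follows; the helper name, example numbers, and default thickness offset are assumptions rather than values from the disclosure.

```python
# Hypothetical sketch: aggregate misalignment pixel heights (80th percentile) and
# convert them to millimeters using a known incisor height as the pixel-to-mm scale.
import numpy as np

def misalignment_height_mm(misalignment_px, incisor_px, incisor_mm, thickness_offset_mm=0.0):
    """misalignment_px: pixel heights of the identified misalignments in the photo;
    incisor_px / incisor_mm: the same incisor measured in pixels and in millimeters,
    e.g. the millimeter value taken from the treatment plan (clause 123)."""
    aggregate_px = np.percentile(misalignment_px, 80)   # aggregation, clause 126
    mm_per_pixel = incisor_mm / incisor_px              # conversion factor, clause 122
    height_mm = aggregate_px * mm_per_pixel
    return max(height_mm - thickness_offset_mm, 0.0)    # subtract aligner thickness, clause 127

# Example: five measured gaps and a 60-pixel incisor known to be 9 mm tall.
print(misalignment_height_mm([4, 6, 7, 9, 12], incisor_px=60, incisor_mm=9.0,
                             thickness_offset_mm=0.5))
```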
Clause 127 the method of clause 116, wherein calculating the misalignment height further comprises subtracting the thickness deviation to simulate a material thickness of the orthodontic appliance.
Clause 128. The method of clause 127, wherein the thickness deviation is obtained from a treatment plan for the patient.
Clause 129 the method of clause 116, further comprising tracking the misalignment height over time using time-varying image data.
Clause 130 the method of clause 129, further comprising: identifying a misalignment trend from the tracked misalignment height; determining whether the misalignment trend meets a misalignment trend threshold; and providing a notification in response to the misalignment trend threshold being met.
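A hedged sketch of the trend tracking in clauses 129-130 (the linear fit, field names, and threshold value are assumptions): fit a line to misalignment heights measured over time and notify when the slope exceeds a trend threshold.

```python
# Hypothetical sketch: detect a worsening misalignment trend from time-series data.
import numpy as np

def misalignment_trend(days, heights_mm):
    """Slope of a least-squares line through (day, misalignment height) samples,
    in millimeters per day."""
    slope, _intercept = np.polyfit(days, heights_mm, deg=1)
    return slope

days = [0, 7, 14, 21]
heights = [0.2, 0.4, 0.7, 1.1]                 # misalignment growing over time
if misalignment_trend(days, heights) > 0.03:   # assumed trend threshold (mm/day)
    print("notify doctor: aligner fit is trending worse")
```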
Clause 131. The method of clause 116, further comprising providing a visual overlay of the misalignment.
Clause 132 the method of clause 116, wherein the misalignment threshold comprises a plurality of misalignment thresholds.
Clause 133. The method of clause 132, further comprising providing a visual overlay of the misalignment, wherein each range between the plurality of misalignment thresholds corresponds to a unique color.
Clause 134. The method of clause 116, wherein identifying the orthodontic appliance further comprises: applying one or more filters to the image data to determine tooth edges and orthodontic appliance edges.
Clause 135. The method of clause 116, wherein identifying the orthodontic appliance further comprises: estimating the color value of each pixel to identify a portion of the tooth that does not have the orthodontic appliance and a portion of the tooth that does have the orthodontic appliance.
Clause 136, a system comprising: a camera; at least one physical processor; and a physical memory comprising computer-executable instructions that, when executed by the physical processor, enable the physical processor to perform the method according to any one of clauses 116-135.
Clause 137 a non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, enable the computing device to perform the method of any of clauses 116-135.
Clause 138 a method for capturing image data of a body part according to clinical requirements, the method comprising: receiving an image data stream from a camera; determining one or more binary classifications and one or more class classifications from the image data stream using an artificial intelligence scheme; comparing the one or more binary classifications and the one or more class classifications to a set of requirements; and providing feedback based on the comparison.
Clause 139. The method of clause 138, wherein the one or more binary classifications comprise at least one of the following: whether a particular tooth is visible; whether the upper jaw is visible; whether the lower jaw is visible; whether the appliance is visible; whether a focus threshold corresponding to whether the entire body part is visible is met; whether the upper teeth are in contact with the lower teeth; whether an illumination threshold is met; whether localized calculus is present; and whether gingival recession is present.
Clause 140 the method of clause 138, wherein the one or more binary classifications are determined using binary cross entropy.
Clause 141. The method of clause 138, wherein the one or more category classifications include at least one of: a front view; a left cheek view; and a right cheek view.
Clause 142. The method of clause 138, wherein the one or more category classifications include one or more mutually exclusive categories.
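A minimal, non-authoritative sketch of clauses 138-142 follows (the toy backbone, attribute names, and view list are assumptions): one head predicts independent yes/no attributes trained with binary cross entropy, and a second head predicts the mutually exclusive view category.

```python
# Hypothetical sketch: binary attribute classification plus exclusive view classification.
import torch
import torch.nn as nn

BINARY_ATTRS = ["upper_jaw_visible", "lower_jaw_visible", "appliance_visible", "in_focus"]
VIEW_CLASSES = ["front", "left_buccal", "right_buccal"]

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128), nn.ReLU())
binary_head = nn.Linear(128, len(BINARY_ATTRS))     # one logit per yes/no attribute
view_head = nn.Linear(128, len(VIEW_CLASSES))       # logits over mutually exclusive views

bce = nn.BCEWithLogitsLoss()                        # binary cross entropy (clause 140)
ce = nn.CrossEntropyLoss()                          # softmax over exclusive categories

def classify(frame):
    """Return binary attribute flags and the single most likely view for one frame."""
    feats = backbone(frame.unsqueeze(0))
    flags = torch.sigmoid(binary_head(feats))[0] > 0.5
    view = VIEW_CLASSES[view_head(feats).argmax(dim=1).item()]
    return dict(zip(BINARY_ATTRS, flags.tolist())), view

def loss(frame_batch, attr_targets, view_targets):
    """Combined loss: BCE for binary attributes (float targets), cross entropy for views."""
    feats = backbone(frame_batch)
    return bce(binary_head(feats), attr_targets) + ce(view_head(feats), view_targets)
```

The resulting classifications could then be compared against a set of requirements to produce the feedback described in the following clauses.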
Clause 143 the method of clause 138, wherein the feedback comprises a coaching prompt when at least one requirement of the set of requirements is not met.
Clause 144. The method of clause 143, wherein the coaching prompt comprises at least one of: instructions to adjust a camera view of the camera to include a particular body part in the camera view; instructions to insert a particular appliance; instructions to remove a particular appliance; instructions to move a particular body part; and instructions to adjust one or more camera settings.
Clause 145. The method of clause 143, wherein the coaching prompt comprises a visual cue.
Clause 146. The method of clause 143, wherein the coaching prompt comprises an audible cue.
Clause 147. The method of clause 143, wherein the coaching prompt comprises a tactile cue.
Clause 148 the method of clause 138, wherein when at least one requirement of the set of requirements is not met, the feedback comprises automatically adjusting one or more camera settings.
Clause 149 the method of clause 138, wherein when the set of requirements is satisfied, the feedback comprises automatically capturing image data of the body part.
Clause 150 the method of clause 138, wherein when at least one of the set of requirements is not met, the feedback comprises preventing capturing image data of the body part.
Clause 151 the method of clause 138, wherein the feedback comprises sending a notification.
Clause 152 the method of clause 138, further comprising determining the set of requirements based on the current state of the patient's dentition and treatment plan.
Clause 153. The method of clause 152, wherein the set of requirements includes at least one of: visibility of a particular body part; visibility of a particular appliance; and the type of view captured.
Clause 154 the method of clause 153, wherein the specific body part corresponds to a tooth of interest identified from the current state of the treatment plan.
Clause 155 the method of clause 154, wherein the particular body part further corresponds to one or more teeth in the vicinity of the tooth of interest.
Clause 156 a system comprising: a camera; at least one physical processor; and a physical memory comprising computer executable instructions that, when executed by the physical processor, enable the physical processor to perform the method according to any one of clauses 138 to 155.
Clause 157 is a non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, enable the computing device to perform the method of any of clauses 138-155.
Clause 158 a method for determining deviation of a tooth from captured image data of a dentition, the method comprising: receiving one or more two-dimensional images of a patient's teeth; receiving a three-dimensional model of a patient's teeth; registering the patient's teeth in the three-dimensional model with the patient's teeth in the one or more two-dimensional images; projecting the registered teeth in the three-dimensional model onto each of the one or more two-dimensional images; and generating an error image based on a difference in the position of the projected tooth of the three-dimensional model and the corresponding tooth of the one or more two-dimensional images.
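For illustration only (assumed data layout and helper names, not the patented pipeline), the projection and error-image steps of clause 158 can be sketched as follows: project registered 3D tooth points into the image plane, rasterize them into a mask, and take the disagreement with the tooth mask observed in the two-dimensional photograph as the error image.

```python
# Hypothetical sketch: project a registered 3D model into a photo and form an error image.
import numpy as np

def project_to_pixels(points_3d, focal, image_size):
    """Pinhole projection of registered Nx3 tooth points into pixel coordinates."""
    h, w = image_size
    uv = focal * points_3d[:, :2] / points_3d[:, 2:3]
    return np.round(uv + np.array([w / 2, h / 2])).astype(int)

def rasterize(pixels, image_size):
    """Binary mask with the projected tooth points set to True."""
    mask = np.zeros(image_size, dtype=bool)
    h, w = image_size
    inside = (pixels[:, 0] >= 0) & (pixels[:, 0] < w) & (pixels[:, 1] >= 0) & (pixels[:, 1] < h)
    mask[pixels[inside, 1], pixels[inside, 0]] = True
    return mask

def error_image(points_3d, photo_tooth_mask, focal=800.0):
    """Pixels where the projected model and the photographed teeth disagree.
    photo_tooth_mask is a boolean (H, W) tooth mask derived from the 2D image."""
    projected = rasterize(project_to_pixels(points_3d, focal, photo_tooth_mask.shape),
                          photo_tooth_mask.shape)
    return projected ^ photo_tooth_mask    # symmetric difference as the error image
```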
Clause 159 the method of clause 158, wherein: projecting the registered teeth includes projecting the registered teeth in the three-dimensional model onto an image plane of each of the one or more two-dimensional images.
Clause 160 the method of clause 158, wherein: receiving a two-dimensional image of a patient's teeth includes receiving images of the patient's teeth from multiple perspectives.
Clause 161. The method of clause 160, wherein: the plurality of perspectives includes at least two of an upper bite perspective, a lower bite perspective, an anterior perspective, a right cheek perspective, and a left cheek perspective.
Clause 162. The method of clause 158, wherein: registering the patient's teeth in the three-dimensional model with the patient's teeth in the one or more two-dimensional images includes registering the patient's teeth in the three-dimensional model with the patient's teeth in multiple perspectives in the one or more two-dimensional images.
Clause 163 the method of clause 158, wherein: receiving one or more two-dimensional images of the patient's teeth includes capturing, by a camera, one or more images of the patient's teeth.
Clause 164 the method of clause 158, further comprising: an error image is generated based on the projection, the error image comprising data representing a difference between a position of a tooth in the three-dimensional model and a position of the tooth in the two-dimensional image.
Clause 165 the method of clause 164, wherein the three-dimensional model is a three-dimensional model of the patient's teeth during the phase of the treatment plan.
Clause 166 the method of clause 165, wherein the two-dimensional image of the patient's teeth corresponds to the treatment of the patient in the treatment phase.
Clause 167. The method of clause 164, wherein the error image comprises a first edge and a second edge, the first edge corresponding to an edge of a first patient's tooth in the two-dimensional image from a first perspective, the second edge corresponding to an edge of the first patient's tooth in the three-dimensional image from the first perspective.
Clause 168 the method of clause 164, further comprising: an image mask is generated based on the error data.
Clause 169. The method of clause 164, further comprising: the image mask is applied to the two-dimensional image.
Clause 170 the method of clause 164, further comprising: one or more of a color and a brightness of a mask portion of the two-dimensional image is adjusted.
Clause 171 the method of clause 164, wherein the error image comprises a contour of the projected tooth of the three-dimensional image on the two-dimensional image.
Clause 172 the method of clause 164, wherein the error image comprises an overlay of the projected three-dimensional model on the two-dimensional image.
Clause 173 the method of clause 167, further comprising: a distance between the first edge and the second edge in the error image is determined.
Clause 174 the method of clause 173, wherein: the distance is determined based on a number of pixels spanning a distance between the first edge and the second edge.
Clause 175 the method of clause 174, further comprising: a true distance is determined based on the number of pixels and a true size of the pixels.
Clause 176. A system comprising: a camera; at least one physical processor; and a physical memory comprising computer-executable instructions that, when executed by the physical processor, enable the physical processor to perform the method according to any one of clauses 158-175.
Clause 177. A non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of the computing device, enable the computing device to perform the method according to any of clauses 158-175.
Clause 178. A method of generating and providing guidance, the method comprising: receiving patient dentition measurement information; receiving patient treatment plan information; receiving guidance information; and generating patient guidance by applying the guidance information based on the measurement information and the treatment plan information.
Clause 179. The method of clause 178, further comprising eliminating conflicts in the guidance information.
Clause 180. The method of clause 179, further comprising sending the patient guidance to the patient.
Clause 181. The method of clause 178, wherein the patient dentition measurement information comprises a distance between a current position of a tooth and an expected position of the tooth.
Clause 182 the method of clause 181, wherein the desired position of the tooth is based on the position of the tooth corresponding to an orthodontic treatment plan.
Clause 183 the method of clause 181, wherein the distance is determined from a two-dimensional image of the patient's teeth at the current position and a three-dimensional model of the patient's teeth at the expected position.
Clause 184 the method of clause 183, wherein the distance is based on a projection of the patient's teeth in the three-dimensional model onto one or more two-dimensional images of the patient's teeth.
Clause 185. The method of clause 178, wherein the guidance information comprises a guidance template and an individual case guide.
Clause 186. The method of clause 185, wherein eliminating conflicts in the guidance information comprises eliminating redundant guidance.
Clause 187. The method of clause 185, wherein eliminating conflicts in the guidance information comprises merging two or more guidances into a single guidance.
Clause 188. The method of clause 178, wherein the guidance comprises one or more of a modified treatment, notification, or treatment instruction.
Clause 189. The method of clause 188, wherein the modified treatment comprises one or more of using a chewie based on the treatment plan, changing to a different dental appliance, and wearing the dental appliance for a longer period of time than initially indicated.
Clause 190 the method of clause 178, wherein the guidance information comprises a tooth deviation threshold that, when met, causes a guidance to be generated.
Clause 191 the method of clause 190, wherein the threshold is based on a deviation measured at a single treatment stage.
Clause 192 the method of clause 190, wherein the threshold is based on deviations measured over multiple phases of the treatment.
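As a hedged illustration of the threshold logic in clauses 190-192 (the threshold values, field names, and the use of a cumulative sum across stages are assumptions): guidance is generated when a tooth's deviation exceeds a per-stage threshold or a multi-stage threshold.

```python
# Hypothetical sketch: trigger guidance on per-stage or multi-stage deviation thresholds.
def guidance_for_tooth(tooth_id, stage_deviations_mm, single_stage_threshold=0.5,
                       cumulative_threshold=1.0):
    """stage_deviations_mm: deviation (mm) measured at each completed treatment stage."""
    guidance = []
    if stage_deviations_mm and stage_deviations_mm[-1] > single_stage_threshold:
        guidance.append(f"tooth {tooth_id}: deviation exceeds the per-stage threshold")
    if sum(stage_deviations_mm) > cumulative_threshold:
        guidance.append(f"tooth {tooth_id}: deviation across multiple stages exceeds the threshold")
    return guidance

print(guidance_for_tooth("UR1", [0.2, 0.3, 0.6]))
```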
Clause 193. The method of clause 185, wherein the guidance template comprises a set of generic thresholds used by a physician.
Clause 194. The method of clause 185, wherein the individual case guide comprises a threshold for a particular treatment.
Clause 195. The method of clause 178, wherein generating the patient guidance is based on measurements from a previous stage of treatment.
Clause 196. A system comprising: a camera; at least one physical processor; and a physical memory comprising computer-executable instructions that, when executed by the physical processor, enable the physical processor to perform the method according to any one of clauses 178-195.
Clause 197. A non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, enable the computing device to perform the method of any of clauses 178-195.
Clause 198. A method for photo-based orthodontic treatment, the method comprising: receiving one or more photographs of a patient's dentition; receiving a treatment plan for the patient; determining that a current position of the patient's teeth deviates from an expected position of the patient's teeth based on the treatment plan and the one or more photographs; and generating an updated treatment plan to move the patient's teeth toward a final position of the patient's teeth based on the position of the patient's teeth in the one or more photographs.
Clause 199. The method of clause 198, wherein: the current position of the patient's teeth is based on the position of the teeth in the one or more photographs.
Clause 200. The method of clause 198, wherein: the expected position of the patient's teeth is based on the position of the patient's teeth during the phase of the treatment plan.
Clause 201. The method of clause 198, wherein: the final position of the patient's teeth is the same as the final position in the treatment plan.
Clause 202. The method of clause 198, wherein: the final position of the patient's teeth is different from the final position in the treatment plan.
Clause 203 the method of clause 198, wherein: the one or more photographs are two-dimensional images of the patient's teeth from one or more perspectives.
Clause 204. The method of clause 198, wherein the determining comprises: determining a distance between the position of the patient's teeth in the one or more photographs at the current position and a three-dimensional model of the patient's teeth at the expected position.
Clause 205 the method of clause 204, wherein the distance is based on a projection of the patient's teeth in the three-dimensional model onto one or more photographs of the patient's teeth.
Clause 206 the method of clause 198, further comprising: one or more three-dimensional parameters of a three-dimensional model of the patient's teeth are optimized to move teeth in the three-dimensional model into the current position based on a photograph of the patient's teeth.
Clause 207 the method of clause 206, wherein: the three-dimensional model is a segmented dental mesh model of the patient's teeth.
Clause 208 the method of clause 206, wherein: the segmented three-dimensional model is a segmented dental mesh model of the patient's teeth, based on an intraoral three-dimensional scan of the patient's teeth.
Clause 209. The method of clause 208, further comprising: generating an updated segmented dental mesh model of the patient's teeth based on the one or more optimized three-dimensional parameters.
Clause 210 the method of clause 208, wherein the updated segmented dental mesh model is used as an initial position of the patient's teeth to generate an updated treatment plan.
Clause 211 the method of clause 210, wherein the updated treatment plan includes a plurality of intermediate tooth positions to move the patient's teeth from the initial position toward the final position.
Clause 212 the method of clause 211, further comprising: a plurality of tooth aligners are made based on the intermediate tooth positions to move the teeth toward the final positions.
Clause 213. The method of clause 203, wherein: the one or more perspectives include one or more of an upper bite perspective, a lower bite perspective, an anterior perspective, a right cheek perspective, and a left cheek perspective.
Clause 214. The method of clause 204, wherein: generating the updated treatment plan occurs in the event that the distance for one or more teeth is greater than 0.1 mm.
Clause 215. The method of clause 206, wherein: the optimizing comprises using one or more of differentiable rendering and an expectation maximization process.
Clause 216, a system comprising: a camera; at least one physical processor; and a physical memory comprising computer executable instructions that, when executed by the physical processor, enable the physical processor to perform the method according to any one of clauses 198 to 215.
Clause 217, a non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, enable the computing device to perform the method of any of clauses 198 to 215.
Embodiments of the present disclosure have been shown and described, as set forth herein, and are provided by way of example only. Many modifications, changes, variations, and substitutions will now occur to those skilled in the art without departing from the scope of the disclosure. Several alternatives and combinations of the embodiments disclosed herein may be utilized without departing from the scope of the present disclosure and the invention disclosed herein. Accordingly, the scope of the presently disclosed invention should be limited only by the scope of the following claims and equivalents thereof.

Claims (217)

1. A method for dental treatment, comprising:
receiving one or more photographic parameters to define clinically acceptable criteria for a plurality of clinically relevant photographs of a person's dentition, the clinically acceptable criteria including at least a plurality of clinically acceptable positions of the teeth relative to the camera and a plurality of clinically acceptable orientations;
collecting a plurality of image capture rules to capture the plurality of clinically relevant photographs, the plurality of image capture rules based on the one or more photograph parameters;
providing first automated instructions to capture the plurality of clinically relevant photographs of a person's dentition using the plurality of image capture rules;
and capturing the plurality of clinically relevant photos by the camera in response to the first automated instruction.
2. The method of claim 1, wherein capturing the plurality of clinically relevant photos comprises: in response to the first automated instruction, an instruction from a person is received to capture the plurality of clinically relevant photos.
3. The method of claim 1, wherein capturing the plurality of clinically relevant photos comprises: in response to the first automated instruction, second automated instructions for the camera are processed to capture the plurality of clinically relevant photos.
4. The method of claim 1, wherein the first automated instructions include instructions to modify a distance of the camera relative to a person's teeth.
5. The method of claim 1, wherein the first automated instructions include instructions to modify an orientation of the camera relative to a person's teeth.
6. The method of claim 1, wherein the first automated instructions comprise instructions to capture a front view, a left cheek view, a right cheek view of a person's teeth, or some combination of the front view, the left cheek view, and the right cheek view of the person's teeth.
7. The method of claim 1, wherein the first automated instructions comprise instructions to adapt to an occlusal state of a patient's teeth.
8. The method of claim 1, wherein the first automated instructions comprise instructions to adapt to a dental appliance on a patient's tooth.
9. The method of claim 1, wherein the first automated instructions comprise instructions to adapt to a dental appliance on a patient's tooth, wherein the dental appliance comprises a cheek retractor.
10. The method of claim 1, wherein the first automated instructions comprise instructions to adapt a dental appliance on a patient's tooth, wherein the dental appliance comprises one or more aligners.
11. The method of claim 1, wherein the first automated instructions comprise instructions to modify one or more photo settings on the camera.
12. The method of claim 1, further comprising displaying clinically relevant directions to a person based on the first automated instructions.
13. The method of claim 1, further comprising displaying clinically relevant directions to the person based on the first automated instructions, wherein the clinically relevant directions include an overlay on a representation of the person's teeth.
14. The method of claim 1, further comprising displaying clinically relevant directions to a person based on the first automated instructions, wherein the clinically relevant directions include text instructions to modify the position of the camera, the orientation of the camera, or some combination of the position and orientation of the camera.
15. The method of claim 1, further comprising deriving the plurality of image capture rules using the one or more photograph parameters.
16. The method of claim 1, further comprising:
training a neural network, using the one or more photo parameters, on a training dataset of training images to identify an image capture rule;
the image capture rules are stored in a data store.
17. The method of claim 1, further comprising using the plurality of clinically relevant photos to perform virtual dental care on a person.
18. The method of claim 1, further comprising collecting the one or more photograph parameters.
19. The method of claim 1, wherein the method is performed on a mobile device.
20. The method of claim 1, wherein the plurality of clinically relevant photos do not include a three-dimensional (3D) grid of teeth of the patient.
21. The method of claim 1, wherein the plurality of clinically relevant photos do not include height map data.
22. A system, comprising:
one or more processors;
a memory storing computer program instructions that, when executed by the one or more processors, cause the system to perform a computer-implemented method comprising:
Receiving one or more photographic parameters to define clinically acceptable criteria for a plurality of clinically relevant photographs of a person's dentition, the clinically acceptable criteria including at least a plurality of clinically acceptable positions of the teeth relative to the camera and a plurality of clinically acceptable orientations;
collecting a plurality of image capture rules to capture the plurality of clinically relevant photographs, the plurality of image capture rules based on the one or more photograph parameters;
providing first automated instructions to capture the plurality of clinically relevant photographs of a person's dentition using the plurality of image capture rules;
and capturing the plurality of clinically relevant photos by the camera in response to the first automated instruction.
23. The system of claim 22, wherein the system comprises a mobile device.
24. The system of claim 22, wherein the memory stores computer program instructions that, when executed by the one or more processors, further cause the system to perform a method comprising the computer-implemented method of any of claims 1-21.
25. A non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, enable the computing device to perform the method of any one of claims 1 to 21.
26. A method for dental treatment, comprising:
receiving a photograph of a dentition of a person;
a stage of identifying a treatment plan to be administered to the dentition of the person;
collecting a three-dimensional (3D) model of a dentition of a person corresponding to a phase of the treatment plan;
projecting properties of the 3D model of the person's dentition onto an image plane to obtain a projected representation of the person's dentition at a stage of the treatment plan;
comparing the photograph with the projected representation to obtain an error image representative of the comparison;
the error image is analyzed to obtain a difference, wherein the difference represents one or more deviations of the dentition of the person from the phase of the treatment plan.
27. The method of claim 26, wherein the deviation comprises one or more pixel differences for each tooth of a person's dentition.
28. The method of claim 26, wherein,
the deviation includes one or more pixel differences for each tooth of the person's dentition.
29. The method of claim 27, further comprising:
the true distance between one or more of the differences is approximated based on the one or more pixel differences for each tooth.
30. The method of claim 26, further comprising displaying a digital assessment corresponding to one or more of the deviations.
31. The method of claim 26, wherein the method further comprises:
displaying a digital assessment corresponding to one or more of the deviations;
receiving one or more instructions that interact with the digital assessment; and
the display of the digital assessment is modified using instructions that interact with the digital assessment.
32. The method according to claim 26, wherein:
the method further includes displaying one or more annotated photographs of the dentition of the person; and
the one or more annotated photographs include a plurality of overlays showing one or more of the deviations in the photographs relative to the dentition of the person.
33. The method according to claim 26, wherein:
the method further includes displaying one or more annotated photographs of the dentition of the person;
the one or more annotated photographs include a plurality of overlays showing one or more of the deviations of the photographs relative to the dentition of a person;
the method further includes receiving one or more instructions to interact with the one or more annotated photographs; and
the method further includes modifying a display of the one or more annotated photographs in response to the one or more instructions interacting with the one or more annotated photographs.
34. The method of claim 26, wherein the method further comprises:
displaying a 3D model of the dentition of the person; and
simultaneously with the 3D model, a digital assessment is displayed, the digital assessment corresponding to one or more of the deviations on the photograph of the person's dentition.
35. The method of claim 26, wherein the method further comprises:
displaying a 3D model of the dentition of the person; and
one or more annotated photographs of a person's dentition are displayed concurrently with the 3D model, wherein the one or more annotated photographs include a plurality of overlays showing one or more of the deviations of the photographs relative to the person's dentition.
36. The method of claim 26, wherein the method further comprises:
displaying a 3D model of the dentition of the person;
simultaneously with the 3D model, displaying one or more annotated photographs of the person's dentition, wherein the one or more annotated photographs comprise a plurality of overlays showing one or more of the deviations of the photographs relative to the person's dentition;
receiving instructions to interact with the 3D model or the one or more annotated photographs of a person's dentition; and
modifying the display of the 3D model and the one or more annotated photos while locking the 3D model and the one or more annotated photos into a common orientation.
37. The method of claim 26, wherein the method further comprises:
displaying one or more annotated photographs of a person's dentition; and
one or more phasing elements configured to represent phases of a person's dentition during the course of the treatment plan are displayed.
38. The method according to claim 26, wherein:
the method further includes displaying one or more annotated photographs of the dentition of the person;
the method further includes displaying one or more phasing elements configured to represent phases of the dentition of the person during the treatment plan; and
the one or more phasing elements show each jaw of a person's dentition relative to a phase of the treatment plan.
39. The method of claim 26, wherein the method further comprises:
displaying a digital assessment corresponding to one or more of the deviations;
providing one or more digital diagnostic tools associated with the digital assessment; and
the one or more digital diagnostic tools are used to receive a diagnosis from a physician.
40. The method of claim 26, further comprising:
training a neural network to discern a machine-learned difference between a first training set of the 3D model and a projected representation of a second training set of the training image; and
the machine learning differences are used to compare the photograph to a projected representation of the 3D model.
41. The method of claim 26, further comprising:
providing intelligent guidance to a person to take a photograph of the person's dentition;
the photo is captured using the intelligent guidance.
42. The method of claim 26, wherein the photograph does not include a three-dimensional (3D) grid of teeth of the patient.
43. The method of claim 26, wherein the photograph does not include height map data.
44. The method of claim 26, further comprising using the difference to modify the treatment plan.
45. The method of claim 26, wherein the photograph comprises a plurality of photographs, each photograph of the plurality of photographs having a different orientation relative to a dentition of the patient.
46. The method of claim 26, wherein the photograph comprises a plurality of photographs, a first photograph of the plurality of photographs having a front orientation, a second photograph of the plurality of photographs having a right cheek orientation, and a third photograph of the plurality of photographs having a left cheek orientation.
47. A system for dental treatment, comprising:
one or more processors; and
a memory storing computer program instructions that, when executed by the one or more processors, cause the system to perform a computer-implemented method comprising:
receiving a photograph of a dentition of a person;
a stage of identifying a treatment plan to be administered to the dentition of the person;
collecting a three-dimensional (3D) model of a dentition of a person corresponding to a phase of the treatment plan;
projecting properties of the 3D model of the person's dentition onto an image plane to obtain a projected representation of the person's dentition at a stage of the treatment plan;
comparing the photograph with the projected representation to obtain an error image representative of the comparison; and
the error image is analyzed to obtain a difference, wherein the difference represents one or more deviations of the dentition of the person from the phase of the treatment plan.
48. The system of claim 47, wherein the memory stores computer program instructions that, when executed by the one or more processors, further cause the system to perform a computer-implemented method comprising any one of claims 26 to 46.
49. A non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, enable the computing device to perform the method of any one of claims 26 to 46.
50. A method for dental treatment, comprising:
receiving a photograph of a dentition of a patient;
collecting treatment parameters for a treatment plan, the treatment parameters representing attributes of the treatment plan selected by a physician for a patient's dentition for the patient;
generating one or more intelligent guidance rules using the photograph and the treatment parameters to guide the application of at least a portion of the treatment parameters to the patient's dentition;
generating instructions to apply intelligent patient guidance rules to the patient's dentition; and
providing the instructions to the patient to apply at least a portion of the treatment plan according to the intelligent patient guidance rules.
51. The method of claim 50, wherein the treatment parameters include physician preference parameters specifying a treatment regimen previously prescribed by a physician for a clinical condition.
52. The method of claim 50, wherein the therapy parameters include each patient parameter specifying a therapy regimen for the patient.
53. The method of claim 50, wherein:
the method further comprises managing a physician-guiding template having the treatment parameters; and
the one or more intelligent guidance rules are generated using the physician guidance template.
54. The method of claim 50, wherein:
the method further comprises managing a physician-guiding template having the treatment parameters;
the physician-guiding templates include templates configured to track the delivery of the treatment plan; and
the one or more intelligent guidance rules are generated using the physician guidance template.
55. The method of claim 50, further comprising:
managing a physician-guiding template having the treatment parameters;
receiving a modification instruction from a doctor to modify the physician-guiding template; and
modifying the physician-guiding template based on the modification instruction.
56. The method of claim 50, further comprising configuring a display of a computing system to instruct a patient to apply at least a portion of the treatment plan according to the intelligent patient guidance rules.
57. The method of claim 50, further comprising configuring a display of a computing system to display one or more interactive elements that direct a patient to apply at least a portion of the treatment plan according to the intelligent patient guidance rules.
58. The method of claim 50, wherein:
the method further includes configuring a display of a computing system to direct a patient to apply at least a portion of the treatment plan according to the intelligent patient guidance rules; and
the computing system is a dental consumer/patient system associated with the patient.
59. The method of claim 50, wherein:
the method further includes configuring a display of a computing system to direct a patient to apply at least a portion of the treatment plan according to the intelligent patient guidance rules; and
the computing system is a dental professional system associated with a doctor.
60. The method of claim 50, further comprising instructing the patient to take corrective action, the corrective action comprising a modification to the treatment plan.
61. The method of claim 50, further comprising instructing the patient to manage a dental appliance for delivery of the treatment plan.
62. The method of claim 50, further comprising notifying a physician to instruct a patient to take corrective action, the corrective action comprising a modification to the treatment plan.
63. The method of claim 50, wherein the instructions to apply at least a portion of the treatment plan include one or more of:
instructions to replace the dental appliance,
instructions to maintain the dental appliance for a time period exceeding the prescribed treatment plan,
instructions to use the supplemental dental appliance at a particular time or location,
instructions to set a appointment for a specified dental condition,
informing a doctor about one or more dental conditions, and
instructions to notify a physician of a specified area of the patient's dentition for a particular aspect of the treatment plan.
64. The method of claim 50, wherein the treatment plan prescribes moving the teeth of the patient from an initial arrangement toward a target arrangement using a plurality of successive tooth repositioning aligners.
65. The method of claim 50, further comprising:
providing intelligent photo guidance instructions to a patient to capture a photo of a dentition of the patient, wherein the photo comprises a clinically relevant photo of the patient's dentition; and
capturing the photo of the dentition of the patient according to the intelligent photo guidance instructions.
66. The method of claim 50, further comprising taking a photograph of the patient's dentition at a mobile device associated with the patient.
67. The method of claim 50, wherein the photograph does not include a three-dimensional (3D) grid of dentition of the patient.
68. The method of claim 50, wherein the photograph does not include height map data.
69. The method of claim 50, wherein the photograph comprises a plurality of photographs, each photograph of the plurality of photographs having a different orientation relative to the dentition of the patient.
70. The method of claim 50, wherein the photograph comprises a plurality of photographs, a first photograph of the plurality of photographs having a front orientation, a second photograph of the plurality of photographs having a right cheek orientation, and a third photograph of the plurality of photographs having a left cheek orientation.
71. The method of claim 50, further comprising:
training a neural network using a set of training photographs and a set of training treatment parameters to discern the one or more intelligent guidance rules; and
the one or more intelligent guidance rules are stored in a data store.
72. A system for dental treatment, comprising:
one or more processors;
a memory storing computer program instructions that, when executed by the one or more processors, cause the system to perform a computer-implemented method comprising:
receiving a photograph of a dentition of a patient;
collecting treatment parameters for a treatment plan, the treatment parameters representing attributes of the treatment plan selected by a physician for a patient's dentition for the patient;
generating one or more intelligent guidance rules using the photograph and the treatment parameters to guide the application of at least a portion of the treatment parameters to the patient's dentition;
generating instructions to apply intelligent patient guidance rules to the patient's dentition; and
providing the instructions to the patient to apply at least a portion of the treatment plan according to the intelligent patient guidance rules.
73. The system of claim 72, wherein the memory stores computer program instructions that, when executed by the one or more processors, further cause the system to perform a computer-implemented method comprising any of claims 50 to 71.
74. A non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, enable the computing device to perform the method of any one of claims 50-71.
75. A method for dental treatment, comprising:
receiving a photograph of a state of a patient's dentition captured at a specified time;
retrieving a first treatment plan to treat dentition of the patient;
identifying an expected arrangement of the first treatment plan at the specified time;
estimating photo parameters of the photo to generate alignment data to align the expected arrangement of the first treatment plan with the photo;
generating an alignment grid using the alignment data, the alignment grid comprising a three-dimensional (3D) grid representation of the patient's dentition at the specified time;
Estimating the first treatment plan for modification using the alignment grid; and
a proposed modification to the first treatment plan is identified based on the estimate.
76. The method of claim 75, wherein the photo parameters include one or more of camera parameters, position parameters, and orientation parameters.
77. The method of claim 75, wherein estimating the photo parameters includes optimizing the photo parameters using a trained neural network.
78. The method of claim 75, wherein estimating the photo parameters includes implementing a differentiable renderer for the photo parameters.
79. The method of claim 75, wherein estimating the photo parameters includes performing expectation maximization on the photo parameters.
80. The method of claim 75, wherein estimating the first treatment plan includes determining a location of a portion of the patient's dentition that has deviated from an expected location in the first treatment plan.
81. The method of claim 75, further comprising displaying one or more annotations representing the proposed modification to a physician.
82. The method of claim 75, further comprising displaying one or more overlays representing the proposed modification to a physician.
83. The method of claim 75, wherein:
the method further includes receiving a request from a physician for the first treatment plan;
the first treatment plan is retrieved in response to a request for the first treatment plan.
84. The method of claim 75, further comprising:
providing the proposed modification to a physician; and
facilitating review of the proposed modifications by the physician.
85. The method of claim 75, further comprising:
receiving a reviewed modification from a physician based on the proposed modification to the first treatment plan; and
refining one or more steps of the first treatment plan using the reviewed modification.
86. The method of claim 75, further comprising:
receiving a reviewed modification from a physician based on the proposed modification to the first treatment plan;
refining one or more steps of the first treatment plan using the reviewed modification; and
sending a refined treatment plan to the patient or the physician, the refined treatment plan including the one or more refined steps.
87. The method of claim 75, further comprising:
receiving a reviewed modification from a physician based on the proposed modification to the first treatment plan; and
refining one or more steps of the first treatment plan using the reviewed modification, wherein the one or more steps include placement of attachments, staging of teeth, or timing of an interproximal reduction procedure.
88. The method of claim 75, further comprising taking a photograph of the patient's dentition at a mobile device associated with the patient.
89. The method of claim 75, wherein the photograph does not include a three-dimensional (3D) grid of teeth of the patient.
90. The method of claim 75, wherein the photograph does not include height map data.
91. The method of claim 75, further comprising:
training a neural network using a set of training photographs and a set of training treatment parameters to discern the one or more intelligent guidance rules; and
the one or more intelligent guidance rules are stored in a data store.
92. A system, comprising:
one or more processors;
a memory storing computer program instructions that, when executed by the one or more processors, cause the system to perform a computer-implemented method comprising:
Receiving a photograph of a state of a patient's dentition captured at a specified time;
retrieving a first treatment plan to treat dentition of the patient;
identifying an expected arrangement of the first treatment plan at the specified time;
estimating photo parameters of the photograph to generate alignment data that aligns the expected arrangement of the first treatment plan with the photograph;
generating an alignment mesh using the alignment data, the alignment mesh comprising a three-dimensional (3D) mesh representation of the patient's dentition at the specified time;
estimating the first treatment plan for modification using the alignment mesh; and
identifying a proposed modification to the first treatment plan based on the estimate.
93. The system of claim 92, wherein the memory stores computer program instructions that, when executed by the one or more processors, further cause the system to perform the method of any one of claims 75 to 91.
94. A non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, enable the computing device to perform the method of any one of claims 75-91.
95. A method for therapy-based photo guidance, the method comprising:
receiving a treatment plan;
determining movement of the first one or more teeth based on movement of the tooth model in the treatment plan; and
determining one or more photo views for capturing the movement of the first one or more teeth.
96. The method of claim 95, wherein the treatment plan includes a model of a position of a patient's teeth.
97. The method of claim 95, wherein determining the movement of the first one or more teeth comprises determining a difference between a position of a first tooth in a first stage of the treatment plan and a position of the first tooth in a second stage of the treatment plan.
98. The method of claim 97, wherein determining the movement includes determining that a vertex moves between the first stage and the second stage.
99. The method of claim 97, further comprising determining a motion vector based on the difference in location.
100. The method of claim 99, wherein determining a photo view comprises determining a position and an orientation orthogonal to the motion vector and including the first tooth in a field of view.
101. The method of claim 95, wherein determining the photo view includes selecting the photo view from one or more predetermined photo views.
102. The method of claim 101, wherein the predetermined photo view includes a position and orientation for capturing the buccal and occlusal surfaces of each tooth.
103. The method of claim 102, further comprising merging the one or more photo views.
104. The method of claim 103, wherein merging the one or more photo views includes determining that at least two of the one or more photo views belong to adjacent teeth, and removing one of the at least two photo views from the determined photo views.
105. The method of claim 95, further comprising receiving an image stream from a camera; and
comparing images in the image stream to the one or more photo views.
106. The method of claim 105, further comprising determining that the image in the image stream does not match the one or more photo views; and
providing guidance to move the camera so that the image more closely matches the one or more photo views.
107. The method of claim 105, further comprising determining that the image in the image stream matches the one or more photo views; and
providing guidance to capture the image.
108. The method of claim 107, wherein determining that the image in the image stream matches the one or more photo views comprises determining that a field of view of the image includes at least one set of teeth in a field of view of the one or more photo views.
109. The method of claim 107, wherein determining that the image in the image stream matches the one or more photo views comprises determining that an orientation of the image is within 10° of an orientation of the one or more photo views.
110. The method of claim 105, further comprising determining that the image in the image stream matches the one or more photo views; and
capturing the image based on determining that the image in the image stream substantially matches the one or more photo views.
111. The method of claim 95, wherein determining movement of the first one or more teeth comprises determining that the first one or more teeth are moving during a treatment phase.
112. The method of claim 111, wherein the photo views comprise one or more buccal views and one or more occlusal views, the one or more buccal views and the one or more occlusal views including the first one or more teeth that are moving during the treatment phase.
113. The method of claim 112, wherein the one or more buccal views include a buccal view centered along a midline of the patient and one or more buccal views offset from the midline of the patient by 15°, 30°, 45°, 60°, 75°, or 90°.
114. A system, comprising:
a camera;
at least one physical processor; and
a physical memory comprising computer executable instructions that, when executed by the physical processor, enable the physical processor to perform the method of any one of claims 95 to 113.
115. A non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, enable the computing device to perform the method of any one of claims 95-113.
116. A method for photo-based orthodontic treatment, the method comprising:
receiving image data of a patient's dentition and orthodontic appliances;
identifying the orthodontic appliance from the image data;
calculating a misalignment height of the orthodontic appliance relative to a dentition of the patient;
determining whether the misalignment height meets a misalignment threshold; and
providing a notification in response to the misalignment threshold being met.
117. The method of claim 116, wherein identifying the orthodontic appliance further comprises: performing semantic segmentation to classify each pixel of the image data into one of a plurality of categories.
118. The method of claim 117, further comprising training a neural network to perform the semantic segmentation by:
inputting an image dataset to the neural network for semantic segmentation;
calculating an error between an output of the neural network and a mask dataset corresponding to the image dataset; and
adjusting parameters of the neural network to reduce the error.
119. The method of claim 117, wherein performing the semantic segmentation further comprises, for each pixel:
determining a probability that the pixel matches each of the plurality of categories; and
classifying the pixel into one of the plurality of categories based on the highest corresponding probability value.
120. The method of claim 117, wherein the plurality of categories include a tooth category indicative of the patient's teeth, a first gap category indicative of a gap between the orthodontic appliance and a corresponding gingival margin, and a second gap category indicative of a gap between an incisal edge of the orthodontic appliance and an incisal edge of a corresponding tooth, and wherein the first gap category and the second gap category correspond to the misalignment.
121. The method of claim 116, wherein the misalignment height is calculated from pixel heights of the misalignment.
122. The method of claim 121, wherein calculating the misalignment height further comprises:
determining a pixel height of an incisor identified in the image data;
obtaining incisor measurements in a standard measurement unit;
determining a scaling factor between pixels and the standard measurement unit using the pixel height of the incisor and the incisor measurements; and
scaling the misalignment height from pixels to the standard measurement unit using the scaling factor.
123. The method of claim 122, wherein the incisor measurements are obtained from a treatment plan of the patient.
124. The method of claim 122, wherein the standard measurement unit is millimeters.
125. The method of claim 116, wherein the misalignment height is calculated by aggregating a plurality of identified misalignments.
126. The method of claim 125, wherein calculating the misalignment height further comprises determining an 80 th percentile of the plurality of identified misalignments.
127. The method of claim 116, wherein calculating the misalignment height further comprises subtracting a thickness deviation to simulate a material thickness of the orthodontic appliance.
128. The method of claim 127, wherein the thickness deviation is obtained from the treatment plan of the patient.
129. The method of claim 116, further comprising tracking the misalignment height over time using time-varying image data.
130. The method of claim 129, further comprising:
identifying a misalignment trend from the tracked misalignment heights;
determining whether the misalignment trend meets a misalignment trend threshold; and
providing a notification in response to the misalignment trend threshold being met.
131. The method of claim 116, further comprising providing a visual overlay of the misalignment.
132. The method of claim 116, wherein the misalignment threshold comprises a plurality of misalignment thresholds.
133. The method of claim 132, further comprising providing a visual overlay of the misalignment, wherein each range between the plurality of misalignment thresholds corresponds to a unique color.
134. The method of claim 116, wherein identifying the orthodontic appliance further comprises:
applying one or more filters to the image data to determine tooth edges and orthodontic appliance edges.
135. The method of claim 116, wherein identifying the orthodontic appliance further comprises:
estimating a color value of each pixel to identify a portion of the teeth on which the orthodontic appliance is not present and a portion of the teeth on which the orthodontic appliance is present.
136. A system, comprising:
a camera;
at least one physical processor; and
a physical memory comprising computer executable instructions that, when executed by a physical processor, enable the physical processor to perform the method of any one of claims 116 to 135.
137. A non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, enable the computing device to perform the method of any one of claims 116-135.
138. A method for capturing image data of a body part according to clinical requirements, the method comprising:
receiving an image data stream from a camera;
determining one or more binary classifications and one or more category classifications from the image data stream using an artificial intelligence scheme;
comparing the one or more binary classifications and the one or more category classifications to a set of requirements; and
providing feedback based on the comparison.
139. The method of claim 138, wherein the one or more binary classifications include at least one of:
whether a particular tooth is visible;
whether the upper jaw is visible;
whether the mandible is visible;
whether the appliance is visible;
whether a focus threshold is met, the focus threshold corresponding to whether the entire body part is visible;
whether the upper teeth are in contact with the lower teeth;
whether an illumination threshold is met;
whether localized calculus is present; and
whether gingival recession is present.
140. The method of claim 138, wherein the one or more binary classifications are determined using binary cross entropy.
141. The method of claim 138, wherein the one or more category classifications include at least one of:
a front view;
a left buccal view; and
a right buccal view.
142. The method of claim 138, wherein the one or more category classifications include one or more groups of mutually exclusive categories.
143. The method of claim 138, wherein the feedback includes a coaching prompt when at least one requirement of the set of requirements is not met.
144. The method of claim 143, wherein the coaching prompt comprises at least one of:
instructions to adjust a camera view of the camera to include a particular body part in the camera view;
instructions to insert a particular appliance;
instructions to remove a particular appliance;
instructions to move a particular body part; and
instructions to adjust one or more camera settings.
145. The method of claim 143, wherein the coaching prompt comprises a visual cue.
146. The method of claim 143, wherein the coaching prompt comprises an audible cue.
147. The method of claim 143, wherein the coaching prompt comprises a tactile cue.
148. The method of claim 138, wherein the feedback includes automatically adjusting one or more camera settings when at least one requirement of the set of requirements is not met.
149. The method of claim 138, wherein the feedback includes automatically capturing image data of the body part when the set of requirements is satisfied.
150. The method of claim 138, wherein the feedback includes preventing capturing image data of the body part when at least one requirement of the set of requirements is not met.
151. The method of claim 138, wherein the feedback includes sending a notification.
152. The method of claim 138, further comprising determining the set of requirements based on a current state of a patient's dentition and treatment plan.
153. The method of claim 152, wherein the set of requirements includes at least one of:
visibility of a particular body part;
visibility of a particular appliance; and
the type of view captured.
154. The method of claim 153, wherein the particular body part corresponds to a tooth of interest identified from a current state of the treatment plan.
155. The method of claim 154, wherein the particular body part further corresponds to one or more teeth in proximity to the tooth of interest.
156. A system, comprising:
a camera;
at least one physical processor; and
a physical memory comprising computer executable instructions that, when executed by the physical processor, enable the physical processor to perform the method of any one of claims 138 to 155.
157. A non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, enable the computing device to perform the method of any one of claims 138-155.
158. A method for determining a deviation of a tooth from captured image data of a dentition, the method comprising:
receiving one or more two-dimensional images of a patient's teeth;
receiving a three-dimensional model of a patient's teeth;
registering the patient's teeth in the three-dimensional model with the patient's teeth in the one or more two-dimensional images;
projecting the registered teeth in the three-dimensional model onto each of the one or more two-dimensional images; and
generating an error image based on differences in the positions of the projected teeth of the three-dimensional model and corresponding teeth in the one or more two-dimensional images.
159. The method of claim 158, wherein:
projecting the registered teeth includes projecting the registered teeth in the three-dimensional model onto an image plane of each of the one or more two-dimensional images.
160. The method of claim 158, wherein:
receiving a two-dimensional image of a patient's teeth includes receiving images of the patient's teeth from multiple perspectives.
161. The method of claim 160, wherein:
the multiple perspectives include at least two of an upper occlusal perspective, a lower occlusal perspective, an anterior perspective, a right buccal perspective, and a left buccal perspective.
162. The method of claim 158, wherein:
Registering the patient's teeth in the three-dimensional model with the patient's teeth in the one or more two-dimensional images includes registering the patient's teeth in the three-dimensional model with the patient's teeth in multiple perspectives in the one or more two-dimensional images.
163. The method of claim 158, wherein:
receiving one or more two-dimensional images of the patient's teeth includes capturing, by a camera, one or more images of the patient's teeth.
164. The method of claim 158, further comprising:
generating an error image based on the projection, the error image comprising data representing a difference between a position of a tooth in the three-dimensional model and a position of the tooth in the two-dimensional image.
165. The method of claim 164, wherein the three-dimensional model is a three-dimensional model of the patient's teeth in a stage of a treatment plan.
166. The method of claim 165, wherein the two-dimensional image of the patient's teeth corresponds to treatment of the patient in a treatment session.
167. The method of claim 164, wherein the error image includes a first edge and a second edge, the first edge corresponding to an edge of a first patient tooth in the two-dimensional image from a first perspective, the second edge corresponding to an edge of the first patient tooth in the projected three-dimensional model from the first perspective.
168. The method of claim 164, further comprising:
generating an image mask based on the error data.
169. The method of claim 164, further comprising:
applying the image mask to the two-dimensional image.
170. The method of claim 164, further comprising:
adjusting one or more of a color and a brightness of a masked portion of the two-dimensional image.
171. The method of claim 164, wherein the error image comprises a projected tooth profile of the three-dimensional model on the two-dimensional image.
172. The method of claim 164, wherein the error image includes an overlay of a projected three-dimensional model on the two-dimensional image.
173. The method of claim 167, further comprising:
determining a distance between the first edge and the second edge in the error image.
174. The method of claim 173, wherein:
the distance is determined based on a number of pixels spanning a distance between the first edge and the second edge.
175. The method of claim 174, further comprising:
determining a true distance based on the number of pixels and a true size of the pixels.
176. A system, comprising:
a camera;
at least one physical processor; and
a physical memory comprising computer executable instructions that, when executed by a physical processor, enable the physical processor to perform the method of any one of claims 158 to 175.
177. A non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, enable the computing device to perform the method of any one of claims 158-175.
178. A method of generating and providing guidance, the method comprising:
receiving patient dentition measurement information;
receiving patient treatment plan information;
receiving guidance information; and
generating patient guidance by applying the guidance information based on the measurement information and the treatment plan information.
179. The method of claim 178, further comprising:
eliminating conflicts in the guidance information.
180. The method of claim 179, further comprising:
sending the guidance to the patient.
181. The method of claim 178, wherein the patient dentition measurement information includes a distance between a current position of a tooth and an expected position of the tooth.
182. The method of claim 181, wherein the desired position of the tooth is based on a position of the tooth corresponding to an orthodontic treatment plan.
183. The method of claim 181, wherein the distance is determined from a two-dimensional image of the patient's teeth at the current position and a three-dimensional model of the patient's teeth at the expected position.
184. The method of claim 183, wherein the distance is based on a projection of the patient's teeth in the three-dimensional model onto one or more two-dimensional images of the patient's teeth.
185. The method of claim 178, wherein the guidance information comprises a guidance template and an individual case guide.
186. The method of claim 185, wherein eliminating conflicts in the guidance information comprises eliminating redundant guidance.
187. The method of claim 185, wherein eliminating conflicts in the guidance information comprises combining two or more guidance items into a single guidance item.
188. The method of claim 178, wherein the guidance includes one or more of a modified treatment, notification, or treatment instruction.
189. The method of claim 188, wherein the modified treatment includes one or more of: using a chewable device (chewie) in accordance with the treatment plan, switching to a different dental appliance, and wearing the dental appliance for a longer period of time than initially indicated.
190. The method of claim 178, wherein the guidance information includes a tooth deviation threshold that, when met, causes guidance to be generated.
191. The method of claim 190, wherein the threshold is based on a deviation measured during a single treatment session.
192. The method of claim 190, wherein the threshold is based on deviations measured over a plurality of treatment phases.
193. The method of claim 185, wherein the guidance template includes a set of generic thresholds used by a physician.
194. The method of claim 185, wherein the individual case guide includes a threshold for a particular treatment.
195. The method of claim 178, wherein generating the patient guidance is based on measurements from a previous treatment session.
196. A system, comprising:
a camera;
at least one physical processor; and
a physical memory comprising computer executable instructions that, when executed by a physical processor, enable the physical processor to perform the method of any one of claims 178 to 195.
197. A non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, enable the computing device to perform the method of any one of claims 178-195.
198. A method for photo-based orthodontic treatment, the method comprising:
receiving one or more photographs of a patient's dentition;
receiving a treatment plan for the patient;
determining that a current position of the patient's teeth deviates from an expected position of the patient's teeth based on the treatment plan and the one or more photographs; and
generating, based on the position of the patient's teeth in the one or more photographs, an updated treatment plan to move the patient's teeth toward a final position of the patient's teeth.
199. The method according to claim 198, wherein:
the current position of the patient's teeth is based on the position of the teeth in the one or more photographs.
200. The method according to claim 198, wherein:
the expected position of the patient's teeth is based on the position of the patient's teeth during the phase of the treatment plan.
201. The method according to claim 198, wherein:
the final position of the patient's teeth is the same as the final position in the treatment plan.
202. The method according to claim 198, wherein:
the final position of the patient's teeth is different from the final position in the treatment plan.
203. The method according to claim 198, wherein:
the one or more photographs are two-dimensional images of the patient's teeth from one or more perspectives.
204. The method of claim 198, wherein determining that the current position deviates from the expected position comprises:
determining a distance between the position of the patient's teeth in the one or more photographs at the current position and a three-dimensional model of the patient's teeth at the expected position.
205. The method of claim 204, wherein the distance is based on a projection of a patient's teeth in the three-dimensional model onto one or more photographs of the patient's teeth.
206. The method of claim 198, further comprising:
optimizing one or more three-dimensional parameters of a three-dimensional model of the patient's teeth to move the teeth in the three-dimensional model into the current position based on the one or more photographs of the patient's teeth.
207. The method of claim 206, wherein:
the three-dimensional model is a segmented dental mesh model of the patient's teeth.
208. The method of claim 206, wherein:
the three-dimensional model is a segmented dental mesh model of the patient's teeth based on an intraoral three-dimensional scan of the patient's teeth.
209. The method of claim 208, further comprising:
generating an updated segmented dental mesh model of the patient's teeth based on the one or more optimized three-dimensional parameters.
210. The method of claim 208, wherein the updated segmented dental mesh model is used as an initial position of the patient's teeth to generate an updated treatment plan.
211. The method of claim 210, wherein the updated treatment plan includes a plurality of intermediate tooth positions to move the patient's teeth from the initial position toward the final position.
212. The method of claim 211, further comprising:
fabricating a plurality of dental aligners based on the intermediate tooth positions to move the teeth toward the final position.
213. The method of claim 203, wherein:
the one or more perspectives include one or more of an upper occlusal perspective, a lower occlusal perspective, an anterior perspective, a right buccal perspective, and a left buccal perspective.
214. The method according to claim 204, wherein:
generating the updated treatment plan occurs when the distance for one or more teeth is greater than 0.1 mm.
215. The method of claim 206, wherein:
the optimizing includes using one or more of differentiable rendering and an expectation maximization process.
216. A system, comprising:
a camera;
at least one physical processor; and
a physical memory comprising computer executable instructions that, when executed by a physical processor, enable the physical processor to perform the method of any one of claims 198 to 215.
217. A non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, enable the computing device to perform the method of any one of claims 198-215.
CN202180065294.4A 2020-07-23 2021-07-22 Systems, devices, and methods for dental care Pending CN116568239A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US62/705,954 2020-07-23
US202163200432P 2021-03-05 2021-03-05
US63/200,432 2021-03-05
PCT/US2021/042838 WO2022020638A1 (en) 2020-07-23 2021-07-22 Systems, apparatus, and methods for dental care

Publications (1)

Publication Number Publication Date
CN116568239A true CN116568239A (en) 2023-08-08

Family

ID=87490219

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180065294.4A Pending CN116568239A (en) 2020-07-23 2021-07-22 Systems, devices, and methods for dental care

Country Status (1)

Country Link
CN (1) CN116568239A (en)

Similar Documents

Publication Publication Date Title
US11991439B2 (en) Systems, apparatus, and methods for remote orthodontic treatment
US11759291B2 (en) Tooth segmentation based on anatomical edge information
US20210074061A1 (en) Artificially intelligent systems to manage virtual dental models using dental images
US10952817B1 (en) Systems and methods for determining orthodontic treatments
US20230225831A1 (en) Photo-based dental appliance fit
EP3140809A1 (en) Identification of areas of interest during intraoral scans
US10631954B1 (en) Systems and methods for determining orthodontic treatments
US20220202295A1 (en) Dental diagnostics hub
KR20150039028A (en) Simulation method and system for orthodontic treatment
US20230062670A1 (en) Patient specific appliance design
US20230390027A1 (en) Auto-smile design setup systems
WO2022147160A1 (en) Dental diagnostics hub
US20230210634A1 (en) Outlier detection for clear aligner treatment
WO2016059550A1 (en) A method and a system for administration of prosthetic treatment process of a dental patient
US20230008883A1 (en) Asynchronous processing for attachment material detection and removal
US11399917B1 (en) Systems and methods for determining an orthodontic treatment
CN116568239A (en) Systems, devices, and methods for dental care
US20240122463A1 (en) Image quality assessment and multi mode dynamic camera for dental images
US20220378549A1 (en) Automated management of clinical modifications to treatment plans using three-dimensional controls
US20220401182A1 (en) Systems and methods for dental treatment
KR20220051059A (en) Method for providing section image of tooth and dental image processing apparatus therefor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination