EP4377840A2 - Modeling dental structures from a dental scan - Google Patents

Modeling dental structures from a dental scan

Info

Publication number
EP4377840A2
Authority
EP
European Patent Office
Prior art keywords
dental
model
tooth
training
video scan
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP22850389.2A
Other languages
German (de)
English (en)
Inventor
Alon Luis LIPNIK
Yarden EILAT-BLOCH
Oded KRAMS
Adam Benjamin SCHULHOF
Carmi Raz
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Get Grin Inc
Original Assignee
Get Grin Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Get Grin Inc filed Critical Get Grin Inc
Publication of EP4377840A2

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C9/00 Impression cups, i.e. impression trays; Impression methods
    • A61C9/004 Means or methods for taking digitized impressions
    • A61C9/0046 Data acquisition means or methods
    • A61C9/0053 Optical means or methods, e.g. scanning the teeth by a laser or light beam
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G06V20/647 Three-dimensional objects by matching two-dimensional images to three-dimensional objects
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C9/00 Impression cups, i.e. impression trays; Impression methods
    • A61C9/004 Means or methods for taking digitized impressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30036 Dental; Teeth
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 Recognition of patterns in medical or anatomical images
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/67 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/50 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders

Definitions

  • the systems and methods described herein relate to dental structure modeling and, more specifically, to a method and system for modeling a dental structure from a video dental scan.
  • Dental professionals and orthodontists may treat and monitor a patient’s dental condition based on in-person visits. Treatment and monitoring of a patient’s dental condition may require a patient to schedule multiple in-person visits to a dentist or orthodontist. The quality of treatment and the accuracy of monitoring may vary depending on how often and how consistently a patient sees a dentist or orthodontist. In some cases, suboptimal treatment outcomes may result if a patient is unable or unwilling to schedule regular visits to a dentist or orthodontist.
  • the present disclosure provides methods and systems that are capable of generating (or configured to generate) a three-dimensional (3D) model of a dental structure of a dental patient from a video of a dental scan collected using a mobile device.
  • the 3D model may be a 3D surface model (mesh) with fine details of the surface of the dental structure.
  • the 3D model reconstructed from the videos as described herein can have substantially the same or similar quality and surface details as those of a 3D model (e.g., optical impressions) produced using an existing high-resolution clinical intraoral scanner. It is noted that high-resolution clinical intraoral scans can be time-consuming and uncomfortable for the patient.
  • Methods and systems of the present disclosure beneficially provide a convenient and efficient solution for monitoring and evaluating the positions of a patient's teeth during the course of orthodontic treatment using a user mobile device, in the comfort of the patient’s home or another convenient location, without requiring the patient to travel to a dental clinic or undergo a time-consuming and uncomfortable full clinical intraoral dental scan.
  • a method for training a visual filter neural network to identify one or more tooth numbers of one or more teeth from one or more dental images comprising: (a) providing an intraoral region model, wherein the intraoral region model comprises one or more model teeth; (b) providing orientation data, wherein the orientation data correlates a spatial location of the one or more model teeth with the corresponding tooth number of the one or more model teeth; (c) providing a plurality of training dental images, wherein each training dental image of the plurality of training dental images comprises one or more teeth; (d) creating a plurality of training datasets by using the visual information corresponding to the one or more model teeth to label the one or more teeth in each one of the plurality of training dental images with a respective label, wherein the respective label indicates either a tooth number or that a tooth number is not identifiable; and (e) training the visual filter neural network based on the plurality of training datasets to identify a tooth within a dental image of a subject and label the tooth with a corresponding tooth number.
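  • By way of a non-limiting illustration, the following is a minimal sketch of the training step (e) above: fitting a small convolutional classifier on labeled tooth crops. The dataset layout, the class count (32 tooth numbers plus one "not identifiable" class), and the architecture are assumptions made for illustration, not the network described in this disclosure.

```python
# Minimal training sketch (PyTorch). Assumed: tooth crops as (N, 3, H, W)
# tensors, labels as class indices where index 32 means "not identifiable".
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

NUM_CLASSES = 33  # assumption: 32 tooth numbers + 1 "not identifiable" label

class VisualFilterNet(nn.Module):
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(num_classes))

    def forward(self, x):
        return self.head(self.features(x))

def train_visual_filter(images, labels, epochs=5):
    """images: (N, 3, H, W) float tensor; labels: (N,) long tensor."""
    model = VisualFilterNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    loader = DataLoader(TensorDataset(images, labels), batch_size=16, shuffle=True)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()  # cross-entropy over tooth classes
            opt.step()
    return model
```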
  • the intraoral region model is a two-dimensional (2D) model representation of the intraoral region of an adult subject from a front perspective. In some cases, the intraoral region model is a 2D model representation of the intraoral region of an adult subject from a top view perspective. In some cases, the intraoral region model is a three-dimensional (3D) model representation of the intraoral region of an adult subject. In some cases, the intraoral region model is a 2D model representation of the intraoral region of a child subject from a front perspective. In some cases, the intraoral region model is a 2D model representation of the intraoral region of a child from a top view perspective. In some cases, the intraoral region model is a 3D model representation of the intraoral region of a child subject. In some cases, the orientation data is acquired from capturing the intraoral region model with a dental scope, and the orientation data corresponds to the spatial orientation of the dental scope relative to the intraoral region being captured.
  • the dental image is of a human subject. In some cases, the dental image is captured within the visible light spectrum. In some cases, the dental image is acquired using a dental scope.
  • the creating of the plurality of training datasets comprises comparing and matching a rotation or orientation of a tooth in a training dental image with a rotation or orientation of the corresponding model tooth. In some cases, the creating of the plurality of training datasets comprises comparing and matching a scale of a tooth in a training dental image with a scale of the corresponding model tooth. In some cases, the creating of the plurality of training datasets comprises comparing and matching a contour of a tooth in a training dental image with a contour of the corresponding model tooth, wherein a contour of the tooth is determined from outlier pixel intensity values.
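  • As a hedged illustration of the contour comparison above, a tooth contour can be recovered from outlier pixel-intensity values by thresholding. The 1.5-standard-deviation cutoff below is an assumed value, not one specified by this disclosure.

```python
# Sketch: recover tooth contours from outlier pixel intensities (OpenCV).
import cv2
import numpy as np

def tooth_contours(gray: np.ndarray):
    """gray: single-channel uint8 dental image. Returns a list of contours."""
    cutoff = min(255, int(gray.mean() + 1.5 * gray.std()))  # assumed outlier cutoff
    _, mask = cv2.threshold(gray, cutoff, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours
```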
  • the creating of the plurality of training datasets comprises comparing and matching a color of a tooth in a training dental image with a color of the corresponding model tooth, wherein a color of the tooth is determined from pixel intensity values. In some cases, the creating of the plurality of training datasets comprises comparing and matching morphologic structure of a tooth in a training dental image with a morphologic structure of the corresponding model tooth, wherein the morphologic structure of the tooth is determined from the shape of the teeth and surface pixel color and intensity.
  • the creating of the plurality of training datasets comprises identifying a first tooth in the training dental image based on the relation of the first tooth to a second tooth adjacent to the first tooth. In some cases, the creating of the plurality of training datasets comprises identifying a first tooth in the training dental image based on the relation of the first tooth to a second tooth opposite of the first tooth. In some cases, the method further comprises reviewing the respective label of a training dental image of the plurality of training dental images to confirm the accuracy of the label.
  • a method to identify a number of a tooth from a dental image comprising: providing a dental image, wherein the dental image comprises a visible part of the tooth; and running a visual filter neural network to identify the tooth number.
  • the visual filter neural network is provided with an intraoral region model of a user, and wherein the dental image is of the user.
  • the dental image is projected on the identified tooth on the intraoral region model of the user.
  • a method for updating a three-dimensional (3D) dental model of at least one tooth comprising: (a) providing at least one two-dimensional (2D) dental image including the at least one tooth; (b) running a visual filter neural network on the 2D dental image to identify the tooth number of the at least one tooth; (c) providing a baseline 3D dental model that includes the at least one identified tooth; (d) generating a 2D capture of the baseline 3D dental model; (e) updating the 2D capture of the 3D dental model in accordance with the 2D dental image; and (f) using the updated 2D capture to update the 3D dental model.
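  • As a minimal sketch of step (d) above, a 2D capture of the baseline 3D model can be generated by projecting its vertices with a pinhole camera model. The intrinsics and camera pose used here are assumed example values; the disclosure does not prescribe a projection model.

```python
# Sketch of step (d): project baseline 3D model vertices to a 2D capture.
import numpy as np

def project_model(vertices, K, R, t):
    """vertices: (N, 3); K: (3, 3) intrinsics; R: (3, 3) rotation; t: (3,).
    Returns (N, 2) pixel coordinates."""
    cam = vertices @ R.T + t        # world -> camera frame
    uv = cam @ K.T                  # camera frame -> homogeneous pixels
    return uv[:, :2] / uv[:, 2:3]   # perspective divide

# Assumed example camera: 800 px focal length, 640x480 image.
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
pixels = project_model(np.random.rand(50, 3) + [0., 0., 5.], K, np.eye(3), np.zeros(3))
```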
  • a method for updating an initial three-dimensional (3D) dental model of a dental structure of a subject comprising: (a) providing a dental video scan of the dental structure of the subject captured using a camera of a mobile device, wherein the dental structure of the subject comprises one or more oral landmarks; (b) analyzing the dental video scan to identify an oral landmark of the one or more oral landmarks; (c) providing the initial 3D dental model of the dental structure of the subject; (d) comparing the dental video scan with the initial 3D dental model to determine differences between the identified oral landmark in the two models; and (e) updating the initial 3D dental model to include the differences of the identified oral landmark.
  • the analyzing of the dental video scan comprises running a visual filter neural network to identify the tooth number of at least one tooth in the dental structure of the subject.
  • the analyzing of the dental video scan comprises determining the relative distance between a camera used to capture the dental video scan and the oral landmark identified in the dental video scan.
  • the identified oral landmark is the arch plane of a subject, and the relative distance comprises the distance from the arch plane to the camera used to capture the dental video scan.
  • the analyzing of the dental video scan comprises determining the object distance and time duration of at least two perspectives within the dental video scan.
  • the analyzing of the dental video scan comprises identifying at least one focus object in a frame of the dental video scan, generating a perspective focus plane of the at least one focus object, and identifying the relative distance from the focus plane to the camera used to capture the dental video scan.
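  • One possible way to relate a focus plane to a camera distance is the thin-lens relation, sketched below under assumed values; whether a given mobile device exposes the lens-to-sensor distance in its capture metadata varies by platform.

```python
# Sketch: camera-to-focus-plane distance from the thin-lens relation
# 1/f = 1/d_o + 1/d_i, given the focal length f and the lens-to-sensor
# (image) distance d_i.
def focus_plane_distance_mm(focal_length_mm: float, image_distance_mm: float) -> float:
    """Solve the thin-lens equation for the object distance d_o (in mm)."""
    return 1.0 / (1.0 / focal_length_mm - 1.0 / image_distance_mm)

# Example (assumed values): a 4.2 mm lens focused with d_i = 4.38 mm puts
# the focus plane roughly 102 mm from the camera.
print(focus_plane_distance_mm(4.2, 4.38))
```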
  • the updating comprises: (i) applying structure from motion (SfM) to the dental video scan; (ii) applying a multi view stereo (MVS) algorithm of at least two perspectives to the dental video scan; (iii) determining a transformation of at least one element of the dental structure and applying the transformation to update a position of the at least one element in the 3D dental model; or (iv) deforming a surface of a local area of the at least one element of the dental structure using a deformation algorithm.
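  • A minimal sketch of option (iii), applying an estimated rigid transformation to one element (e.g., a single tooth) of the 3D model; the 4x4 transform T is assumed to have been estimated elsewhere by aligning the video scan with the model.

```python
# Sketch of option (iii): apply a rigid transformation to one element.
import numpy as np

def apply_transform(points: np.ndarray, T: np.ndarray) -> np.ndarray:
    """points: (N, 3) vertices of the element; T: (4, 4) homogeneous
    transform. Returns the transformed (N, 3) vertices."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (homogeneous @ T.T)[:, :3]
```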
  • the 3D dental model is a generic model.
  • the 3D dental model comprises the dental structure of the subject.
  • the relative distance is retrieved from the dental video scan metadata.
  • in an aspect, provided herein is a non-transitory computer-readable medium comprising machine-executable instructions that, upon execution by one or more computer processors, implement a method for delivering context-based information to a mobile device in real time, the method comprising: a memory for storing a set of instructions; and one or more processors configured to execute the set of instructions to: (a) provide a dental video scan of the dental structure of the subject using a camera of a mobile device, wherein the dental structure of the subject comprises one or more oral landmarks; (b) analyze the dental video scan to identify an oral landmark of the one or more oral landmarks; (c) provide the 3D dental model of the dental structure of the subject; (d) compare the dental video scan with the 3D dental model to determine differences between the identified oral landmark in the two models; and (e) update the 3D dental model to include the differences of the identified oral landmark.
  • the analyzing of the dental video scan comprises running a visual filter neural network to identify the tooth number of at least one tooth in the dental structure of the subject.
  • the analyzing of the dental video scan comprises determining the relative distance between a camera used to capture the dental video scan and the oral landmark identified in the dental video scan.
  • the identified oral landmark is the arch plane of a subject, and the relative distance comprises the distance from the arch plane to the camera used to capture the dental video scan.
  • the analyzing of the dental video scan comprises determining the object distance and time duration of at least two perspectives within the dental video scan.
  • the analyzing of the dental video scan comprises identifying at least one focus object in a frame of the dental video scan, generating a perspective focus plane of the at least one focus object, and identifying the relative distance from the focus plane to the camera used to capture the dental video scan.
  • the updating comprises: (i) applying structure from motion (SfM) to the dental video scan; (ii) applying a multi view stereo (MVS) algorithm of at least two perspectives to the dental video scan; (iii) determining a transformation of at least one element of the dental structure and applying the transformation to update a position of the at least one element in the 3D dental model; or (iv) deforming a surface of a local area of the at least one element of the dental structure using a deformation algorithm.
  • the 3D dental model is a generic model.
  • the 3D dental model comprises the dental structure of the subject.
  • the relative distance is retrieved from the dental video scan metadata.
  • the term “dental video scan” or “dental scan” refers to a video, or an image frame from a video, capturing the intraoral perspective of the dental arch or of a tooth.
  • the term “arch plane” refers to at least one imaginary plane generated from a cut line crossing at least one dental arch of the mouth, or at the top of the teeth (upper or lower).
  • the term “perspective focus plane” refers to at least one plane generated by the perspective of a single camera shot or frame that captures an image, together with the collection of objects that are in the current focus of the camera.
  • the “perspective focus plane” is an imaginary plane generated by the objects that are at the same focal distance from the camera at a selected time.
  • the term “dental structure” as utilized here may include intra-oral structures or dentition, such as human dentition, individual teeth, quadrants, full arches, upper and lower dental arches (which may be positioned and/or oriented in various occlusal relationships relative to each other), soft tissue (e.g., gingival and mucosal surfaces of the mouth, or perioral structures such as the lips, nose, cheeks, and chin), bones, and any other supporting or surrounding structures proximal to one or more dental structures.
  • Intra-oral structures may include both natural structures within a mouth and artificial structures such as dental objects (e.g., prosthesis, implant, appliance, restoration, restorative component, or abutment).
  • the term “method” refers to manners, means, techniques and procedures for accomplishing a given task including, but not limited to, those manners, means, techniques and procedures either known to, or readily developed from known manners, means, techniques and procedures by, practitioners of architecture and/or computer science.
  • Implementation of the methods and systems described herein may involve performing or completing selected tasks or steps manually, automatically, or a combination thereof.
  • several selected steps could be implemented by hardware, by software on any operating system or firmware, or a combination thereof.
  • selected steps could be implemented as a chip or a circuit.
  • selected steps could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system.
  • selected steps of the methods and systems described herein could be described as being performed by a data processor, such as a computing platform for executing a plurality of instructions.
  • FIG. 1 schematically illustrates an example of a method for training a visual filter neural network, in accordance with some embodiments.
  • FIG. 2 schematically illustrates an example of a system for designating a tooth number to a tooth on dental images, in accordance with some embodiments.
  • FIG. 3 schematically illustrates an example of a method for updating a three-dimensional (3D) point cloud of at least one tooth, in accordance with some embodiments.
  • FIG. 4 schematically illustrates a computer system that is programmed or otherwise configured to implement at least some of the methods or the systems disclosed herein, in accordance with some embodiments.
  • real-time generally refers to a simultaneous or substantially simultaneous occurrence of a first event or action with respect to an occurrence of a second event or action.
  • a real-time action or event may be performed within a response time of less than one or more of the following: ten seconds, five seconds, one second, a tenth of a second, a hundredth of a second, a millisecond, or less relative to at least another event or action.
  • a real-time action may be performed by one or more computer processors.
  • a component can be a processor, a process running on a processor, an object, an executable, a program, a storage device, and/or a computer.
  • an application running on a server and the server can be a component.
  • One or more components can reside within a process, and a component can be localized on one computer and/or distributed between two or more computers.
  • the term “visual filter neural network” corresponds to a neural network used to identify a number of a tooth from one or more dental images.
  • the visual filter neural network works by: (a) providing an intraoral region model, wherein the intraoral region model comprises one or more model teeth; (b) providing orientation data, wherein the orientation data correlates the spatial location of the one or more model teeth with the corresponding tooth number of the one or more model teeth; (c) associating the tooth number of the one or more model teeth with visual information corresponding to the one or more model teeth; (d) providing a plurality of training dental images, wherein each training dental image of the plurality of training dental images comprises one or more teeth; (e) using the visual information corresponding to the one or more model teeth to create a plurality of training datasets by labeling the one or more teeth in each one of the plurality of training dental images with a respective label indicating a tooth number or that a tooth number could not be identified; and (f) training the visual filter neural network based on the plurality of training datasets.
  • these components can execute from various computer readable media having various data structures stored thereon.
  • the components can communicate via local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network, e.g., the Internet, a local area network, a wide area network, etc. with other systems via the signal).
  • the present disclosure relates to various aspects of three-dimensional (3D) digital representations of an individual’s intraoral structure.
  • Dental scans can be used to update such 3D representations of an individual’s intraoral structure.
  • a visual filter neural network can be used to update the 3D representations.
  • a method for training a visual filter neural network to identify one or more tooth numbers of one or more teeth from one or more dental images comprising: (a) providing an intraoral region model, wherein the intraoral region model comprises one or more model teeth; (b) providing orientation data, wherein the orientation data correlates the spatial location of the one or more model teeth with the corresponding tooth number of the one or more model teeth; (c) associating the tooth number of the one or more model teeth with visual information corresponding to the one or more model teeth; (d) providing a plurality of training dental images, wherein each training dental image of the plurality of training dental images comprises one or more teeth; (e) creating a plurality of training datasets by using the visual information corresponding to the one or more model teeth to label the one or more teeth in each one of the plurality of training dental images with a respective label, wherein the respective label indicates either a tooth number or that a tooth number is not identifiable; and (f) training the visual filter neural network based on the plurality of training datasets to identify a tooth within a dental image of a subject and label the tooth with a corresponding tooth number.
  • the intraoral region model is a two-dimensional (2D) model representation of the intraoral region of an adult subject from a front perspective. In some cases, the intraoral region model is a 2D model representation of the intraoral region of an adult subject from a top view perspective. In some cases, the intraoral region model is a three-dimensional (3D) model representation of the intraoral region of an adult subject. In some cases, the intraoral region model is a 2D model representation of the intraoral region of a child subject from a front perspective. In some cases, the intraoral region model is a 2D model representation of the intraoral region of a child from a top view perspective. In some cases, the intraoral region model is a 3D model representation of the intraoral region of a child subject.
  • the orientation data is acquired from capturing the intraoral region model with a dental scope, and wherein the orientation data corresponds to the spatial orientation of the dental scope relative to the intraoral region being captured.
  • the dental image is of a human subject.
  • the dental image is captured within the visible light spectrum.
  • the dental image is acquired using a dental scope.
  • the creating of the plurality of training datasets comprises comparing and matching a rotation or orientation of a tooth in a training dental image with a rotation or orientation of the corresponding model tooth. In some cases, the creating of the plurality of training datasets comprises comparing and matching a scale of a tooth in a training dental image with a scale of the corresponding model tooth. In some cases, the creating of the plurality of training datasets comprises comparing and matching a contour of a tooth in a training dental image with a contour of the corresponding model tooth, wherein a contour of the tooth is determined from outlier pixel intensity values. In some cases, the creating of the plurality of training datasets comprises comparing and matching a color of a tooth in a training dental image with a color of the corresponding model tooth, wherein a color of the tooth is determined from pixel intensity values.
  • the creating of the plurality of training datasets comprises identifying a first tooth in the training dental image based on the relation of the first tooth to a second tooth adjacent to the first tooth. In some cases, the creating of the plurality of training datasets comprises identifying a first tooth in the training dental image based on the relation of the first tooth to a second tooth opposite of the first tooth. In some cases, the method further comprises: reviewing the respective label of a training dental image of the plurality of training dental images to confirm the accuracy of the label.
  • the present disclosure provides a system for training a visual filter neural network for segmentation of the type and number of teeth from dental images, comprising: providing an oral region model and a target orientation of dental images defined by the classification neural network; creating a training dataset by labeling each one of a plurality of dental images provided from a storage server with a respective label indicating a tooth number or with a respective label indicating that a tooth is not identified; creating a second training dataset by labeling each one of a plurality of dental images provided from a storage server with a respective label indicating a tooth number or with a respective label indicating that a tooth is not identified; providing additional dental images stored on the storage server; training the visual filter neural network based on the training datasets for classifying the additional dental images into a classification category indicating tooth number; comparing classified dental images against their respective labels indicating a tooth number or that a tooth is not identified; and updating the training sets.
  • the oral region model is a two-dimensional (2D) model representation of adult teeth in a front perspective. In some embodiments the oral region model is a 2D model representation of adult teeth in a top-view perspective.
  • the oral region model is a three-dimensional (3D) model representation of adult teeth.
  • FIG. 1 schematically illustrates one example of a method for training a visual filter neural network 100 to identify a tooth number from dental images.
  • the method may include providing an oral region model and target orientations of dental images defined by the classification neural network 102; creating a training dataset by labeling each one of a plurality of dental images provided from a storage server with a respective label indicating a tooth number or with a respective label indicating that a tooth is not identifiable 104; creating a second training dataset by labeling each one of a plurality of dental images provided from a storage server with a respective label indicating a tooth number or with a respective label indicating that a tooth is not identifiable 104A; providing additional dental images stored on the storage server 106; training the visual filter neural network based on the training datasets to classify the additional dental images into a classification category indicating tooth number 108; comparing classified dental images against their respective labels indicating a tooth number or that a tooth is not identifiable 110; and updating the training sets 114.
  • the method can further comprise reviewing classified dental images for the respective label indicating a tooth number or for the respective label indicating that a tooth is not identifiable 114, and updating the training datasets 116. In some cases, the reviewing is performed manually.

Assigning Tooth Numbers on Dental Images
  • a method to identify a number of a tooth from a dental image comprising: providing a dental image, wherein the dental image comprises a visible part of the tooth; and running a trained visual filter neural network to identify the tooth number.
  • the visual filter neural network is provided with an intraoral region model of a user, and wherein the dental image is of the user.
  • the dental image is projected on the identified tooth on the intraoral region model of the user.
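  • The following minimal sketch shows how the trained network (see the training sketch above) might be run on a single tooth crop at inference time; the preprocessing choices and the index of the "not identifiable" class are assumptions for illustration.

```python
# Sketch: identify a tooth number with the trained visual filter network.
import torch

NOT_IDENTIFIABLE = 32  # assumed index of the "not identifiable" class

def identify_tooth_number(model: torch.nn.Module, crop: torch.Tensor):
    """crop: (3, H, W) float tensor of a tooth crop. Returns a class index,
    or None if the network reports the tooth as not identifiable."""
    model.eval()
    with torch.no_grad():
        pred = int(model(crop.unsqueeze(0)).argmax(dim=1).item())
    return None if pred == NOT_IDENTIFIABLE else pred
```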
  • FIG. 2 schematically illustrates an example of a method 200 for designating a tooth number, i.e., identifying the number of a tooth from a dental image.
  • the method comprises providing at least one dental image including at least a visible part of at least one tooth 202; running a visual filter neural network to identify the tooth number 204; and receiving a designated tooth number identification for the at least one tooth in the dental image 208.

Updating a three-dimensional (3D) dental model
  • a method for updating a three-dimensional (3D) dental model of at least one tooth comprising: (a) providing at least one two-dimensional (2D) dental image including the at least one tooth; (b) running a trained visual filter neural network on the 2D dental image to identify the tooth number of the at least one tooth; (c) providing a baseline 3D dental model that includes the at least one tooth; (d) generating a 2D capture of the baseline 3D dental model; (e) updating the 2D capture of the 3D dental model to include the identified tooth number obtained from the 2D dental image; and (f) using the updated 2D capture to update the 3D dental model to include the identified tooth number obtained from the 2D dental image.
  • the present disclosure provides methods and systems that are capable of generating (or configured to generate) a three-dimensional (3D) model of a dental structure of a dental patient using a video of a dental scan collected using a mobile device.
  • the 3D model may be a 3D surface model (mesh) with fine details of the surface of the dental structure.
  • the 3D model reconstructed from the videos as described herein can have substantially the same or similar quality and surface details as those of a 3D model (e.g., optical impressions) produced using an existing high-resolution clinical intraoral scanner. It is noted that high-resolution clinical intraoral scans can be time-consuming and uncomfortable for the patient.
  • the method comprises providing at least one 2D dental image including at least one tooth 302; running a visual filter neural network on the 2D dental image to receive tooth identification 304; providing a 3D dental model 306; generating a 2D capture of the 3D dental model including the identified tooth location at the 2D dental image perspective 308; updating 310 the 2D capture in accordance with the 2D dental image; and updating the 3D dental model 306 in accordance with the updated 2D capture 312.
  • Methods and systems of the present disclosure beneficially provide a convenient and efficient solution for monitoring and evaluating the positions of a patient's teeth during the course of orthodontic treatment using a user mobile device, in the comfort of the patient’s home or another convenient location, without requiring the patient to travel to a dental clinic or undergo a time-consuming and uncomfortable full clinical intraoral dental scan.
  • the present disclosure provides a method for updating a three-dimensional (3D) model of a dental structure, the method comprising: providing a 3D model of a dental structure; providing a dental video scan; analyzing the dental video scan to identify at least one tooth, video relative distance, or time; and updating the 3D model of the dental structure with at least part of the dental structure from the dental video scan.
  • the analyzing of the dental video scan comprises determining the relative distance between the camera and a selected object in at least two perspectives in the dental video scan.
  • the analyzing comprises identification of at least one arch plane and the relative distance comprises the distance from the arch plane.
  • the analyzing of the dental video scan comprises determining the object distance and time duration of at least two perspectives in the dental video scan.
  • the analyzing of the dental video scan comprises identification of at least one focus object in a video frame, generating a perspective focus plane, and determining the relative distance from the focus plane.
  • the updating comprises at least one of the following: (i) applying structure from motion (SfM), (ii) applying a multi view stereo (MVS) algorithm to at least two perspectives in the dental video, (iii) determining a transformation for at least one element of the dental structure and applying the transformation to update a position of the at least one element, and (iv) deforming a surface of a local area of the at least one element using a deformation algorithm.
  • the 3D model of the dental structure is a generic model.
  • the 3D model is of a user's dental structure.
  • the relative distance is retrieved from the dental video scan metadata.
  • the present disclosure provides a non-transitory computer-readable medium comprising machine-executable instructions that, upon execution by one or more computer processors, implement a method for delivering context-based information to a mobile device in real time, the method comprising: a memory for storing a set of instructions; and one or more processors configured to execute the set of instructions to: receive a 3D model of a dental structure; receive a dental video scan; analyze the dental video scan to identify at least one tooth, video relative distance, or time; and update the 3D model of the dental structure with at least part of the dental structure from the dental video scan.
  • the analyzing of the dental video scan comprises determining the relative distance between the camera and a selected object in at least two perspectives in the dental video scan.
  • the analyzing comprises identification of at least one arch plane, and the relative distance comprises the distance from the arch plane.
  • the analyzing of the dental video scan comprises determining the object distance and time duration of at least two perspectives in the dental video scan.
  • the analyzing of the dental video scan comprises identification of at least one focus object in a video frame, generating a perspective focus plane, and determining the relative distance from the focus plane.
  • the updating comprises at least one of the following: (i) applying structure from motion (SfM), (ii) applying a multi view stereo (MVS) algorithm to at least two perspectives in the dental video, (iii) determining a transformation for at least one element of the dental structure and applying the transformation to update a position of the at least one element, and (iv) deforming a surface of a local area of the at least one element using a deformation algorithm.
  • the 3D model of the dental structure is a generic model.
  • the 3D model is of a user’s dental structure.
  • the relative distance is retrieved from the dental video scan metadata.
  • a method for updating a three-dimensional (3D) dental model of at least one tooth comprising: (a) providing at least one two-dimensional (2D) dental image including the at least one tooth; (b) running a trained visual filter neural network on the 2D dental image to identify the tooth number of the at least one tooth; (c) providing a baseline 3D dental model that includes the at least one tooth; (d) generating a 2D capture of the baseline 3D dental model; (e) updating the 2D capture of the 3D dental model to include the identified tooth number obtained from the 2D dental image; and (f) using the updated 2D capture to update the 3D dental model to include the identified tooth number obtained from the 2D dental image.
  • a method for updating an initial three-dimensional (3D) dental model of a dental structure of a subject comprising: (a) providing a dental video scan of the dental structure of the subject captured using a camera of a mobile device, wherein the dental structure of the subject comprises one or more oral landmarks; (b) analyzing the dental video scan to identify an oral landmark of the one or more oral landmarks; (c) providing the 3D dental model of the dental structure of the subject; (d) comparing the dental video scan with the 3D dental model to determine differences between the identified oral landmark in the two models; and (e) updating the 3D dental model to include the differences of the identified oral landmark.
  • the analyzing of the dental video scan comprises running a visual filter neural network to identify the tooth number of at least one tooth in the dental structure of the subject. In some cases, the analyzing of the dental video scan comprises determining the relative distance between a camera used to capture the dental video scan and the oral landmark identified in the dental video scan.
  • the identified oral landmark is the arch plane of a subject
  • the relative distance comprises the distance from the arch plane to the camera used to capture the dental video scan.
  • the analyzing of the dental video scan comprises determining the object distance and time duration of at least two perspectives within the dental video scan.
  • the analyzing of the dental video scan comprises identifying at least one focus object in a frame of the dental video scan, generating a perspective focus plane of the at least one focus object, and identifying the relative distance from the focus plane to the camera used to capture the dental video scan.
  • the updating comprises: (i) applying structure from motion (SfM) to the dental video scan; (ii) applying a multi view stereo (MVS) algorithm of at least two perspectives to the dental video scan; (iii) determining a transformation of at least one element of the dental structure and applying the transformation to update a position of the at least one element in the 3D dental model; or (iv) deforming a surface of a local area of the at least one element of the dental structure using a deformation algorithm.
  • the 3D dental model is a generic model.
  • the 3D dental model comprises the dental structure of the subject.
  • the relative distance is retrieved from the dental video scan metadata.
  • the present disclosure provides methods and systems that are capable of generating (or configured to generate) a three-dimensional (3D) model of a dental structure of a dental patient using dental scan videos collected using a mobile device.
  • the 3D model may be a 3D surface model (mesh) with fine details of the surface of the dental structure.
  • artificial intelligence, including machine learning algorithms, may be employed to train a predictive model for 3D model generation, and for various other functionalities as described elsewhere herein.
  • a machine learning algorithm may be a neural network, for example. Examples of neural networks that may be used with embodiments herein may include a deep neural network (DNN), convolutional neural network (CNN), and recurrent neural network (RNN).
  • the model may be trained using supervised learning.
  • a machine-learning trained model may be pre-trained and implemented on the physical dental imaging system, and the pre-trained model may undergo continual re-training that may involve continual tuning of the predictive model or a component of the predictive model (e.g., classifier) to adapt to changes in the implementation environment over time (e.g., changes in the image data, model performance, expert input, etc.).
  • the predictive model may be trained using unsupervised learning or semi-supervised learning.
  • the 3D model generated from the dental scan videos may preserve the fine surface details obtained from the high-resolution clinical intraoral scan while providing accurate and precise measurements of the current position and orientation of a particular dental structure (e.g., one or more teeth).
  • the clinical high-resolution intraoral scanner can use any suitable intra-oral imaging equipment such as a laser or structured light projection scanner.
  • the present disclosure provides methods for generating a 3D model of a dental structure.
  • an initial three-dimensional (3D) model, either generic or representing a patient's dental structure, is provided by a high-quality intraoral scan as described above.
  • the initial 3D model may include a 3D surface model with fine surface details.
  • the initial 3D surface model can be obtained using any suitable intraoral scanning device.
  • raw point cloud data provided by the scanner may be processed to generate 3D surfaces of the dental structure (e.g., teeth along with the surrounding gingiva).
  • dental scan videos representing the dental structure may be conveniently provided using a user mobile device.
  • the dental scan videos may be processed to reconstruct a reduced three-dimensional (3D) model of the dental structure.
  • the 3D model may be a dense 3D point cloud that contains reduced 3D information of the dental structure without fine surface details.
  • a transformation between the reduced three-dimensional (3D) model reconstructed from the dental scan video and the initial 3D model (mesh model) is determined by aligning or registering elements of the initial 3D model with corresponding elements within the dental scan video.
  • a three-dimensional (3D) image of the dental structure is subsequently derived or reconstructed by transforming the initial 3D model using the transformation data.
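  • A minimal sketch of the alignment step above: a least-squares rigid alignment (the Kabsch/Procrustes method) between corresponding points of the reduced point cloud and the initial model. Correspondences are assumed given here; in practice an ICP-style loop would establish them iteratively.

```python
# Sketch: estimate the rigid transformation aligning two point sets.
import numpy as np

def rigid_align(src: np.ndarray, dst: np.ndarray):
    """src, dst: (N, 3) corresponding points. Returns R (3x3), t (3,) such
    that dst is approximately src @ R.T + t."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - src.mean(axis=0) @ R.T
    return R, t
```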
  • the term “rough 3D model” as utilized herein may generally refer to a 3D model with reduced surface details.
  • the data collected from the dental scan video may include perspectives of the dentition (e.g., teeth) from multiple viewing angles.
  • the data may be processed using any suitable computer vision technique to reconstruct a 3D point cloud of the dental structure.
  • the algorithm may include a pipeline for structure from motion (SfM) and multi view stereo (MVS) processing.
  • the first 3D point cloud may be reconstructed by applying structure from motion (SfM) and multi view stereo (MVS) algorithms to the image data. For example, a SfM algorithm is applied to the collected image data to generate estimated camera parameters for each image (and a sparse point cloud describing the scene).
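  • The core geometric operation in such an SfM/MVS pipeline is triangulating 3D points from matched 2D observations once camera projection matrices have been estimated. A minimal sketch follows, using OpenCV's triangulatePoints; the projection matrices and pixel matches are placeholders for values an SfM stage would supply.

```python
# Sketch: triangulate 3D points from two views (OpenCV).
import cv2
import numpy as np

def triangulate(P1, P2, pts1, pts2):
    """P1, P2: (3, 4) camera projection matrices; pts1, pts2: (N, 2) matched
    pixel coordinates. Returns (N, 3) triangulated points."""
    homo = cv2.triangulatePoints(P1, P2,
                                 pts1.T.astype(np.float64),
                                 pts2.T.astype(np.float64))
    return (homo[:3] / homo[3]).T  # dehomogenize the 4xN result
```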
  • Structure from motion enables accurate and successful reconstruction in cases where multiple scene elements (e.g., arches) do not move independently of each other throughout the image frames.
  • segmentation masks may be utilized to track the respective movement.
  • the estimated camera parameters may include both intrinsic parameters such as focal length, focus distance, distance between the micro lens array and image sensor, pixel size, and extrinsic parameters of the camera such as information about the transformations from 3D world coordinates to the 3D camera coordinates.
  • the image data and the camera parameters are processed by the multi-view stereo method to output a dense point cloud of the scene (e.g., a dental structure of a patient).
  • the dental scan video may be segmented such that each point may be annotated with semantic segmentation information.
  • the 3D model can be stored in any suitable file formats such as a Standard Triangle Language (STL) file, a WRL file, a 3MF file, an OBJ, a FBX file, a 3DS file, an IGES file, or a STEP file and various others.
  • pre-processing of the dental scan video may be performed to improve the accuracy and quality of the rough 3D model.
  • the pre-processing can include any suitable image processing algorithms, such as image smoothing, to mitigate the effect of sensor noise, image histogram equalization to enhance the pixel intensity values, or video stabilization methods.
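  • A minimal sketch of the named pre-processing steps, smoothing against sensor noise and histogram equalization of pixel intensities, using OpenCV; the 5x5 kernel size is an assumed value.

```python
# Sketch: per-frame pre-processing before reconstruction (OpenCV).
import cv2
import numpy as np

def preprocess_frame(frame_bgr: np.ndarray) -> np.ndarray:
    """frame_bgr: (H, W, 3) uint8 video frame. Returns an equalized gray frame."""
    smoothed = cv2.GaussianBlur(frame_bgr, (5, 5), 0)   # mitigate sensor noise
    gray = cv2.cvtColor(smoothed, cv2.COLOR_BGR2GRAY)
    return cv2.equalizeHist(gray)                        # enhance intensities
```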
  • an arch mask may be utilized to track the motion of the arch throughout the video to filter out anatomical features that are not of interest (e.g., lip, tongue, soft tissue, etc.) in the scene. This beneficially ensures that the rough 3D model (e.g., 3D point cloud) substantially corresponds to the surface of the initial 3D model (e.g., teeth and gum).
  • the pre-processing may be performed using machine learning techniques. For example, pixel segmentation can be used to isolate the upper and lower arches and/or mask out the undesired anatomical features. Pixel segmentation may be performed using a deep learning trained model. In another example, image processing such as smoothing, sharpening, stylization may also be performed using a machine learning trained model.
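  • As a hedged sketch of the masking step, a per-frame arch segmentation mask (assumed to be produced by a trained segmenter) can be applied to suppress lips, tongue, and other features that are not of interest before reconstruction.

```python
# Sketch: apply an arch segmentation mask to a video frame.
import numpy as np

def mask_arch(frame: np.ndarray, arch_mask: np.ndarray) -> np.ndarray:
    """frame: (H, W, 3) image; arch_mask: (H, W) boolean, True on the arch."""
    out = frame.copy()
    out[~arch_mask] = 0  # zero out non-arch pixels
    return out
```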
  • the machine learning network can include various types of neural networks including a deep neural network, convolutional neural network (CNN), and recurrent neural network (RNN).
  • the machine learning algorithm may comprise one or more of the following: a support vector machine (SVM), a naive Bayes classification, a linear regression, a quantile regression, a logistic regression, a random forest, a neural network, CNN, RNN, a gradient-boosted classifier or regressor, or another supervised or unsupervised machine learning algorithm (e.g., generative adversarial network (GAN), Cycle-GAN, etc.).
  • the rough 3D model can be reconstructed using various other methods.
  • the rough 3D model may be reconstructed from a depth map.
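  • A minimal sketch of reconstruction from a depth map: each pixel is back-projected into a camera-frame point cloud through assumed pinhole intrinsics (fx, fy, cx, cy).

```python
# Sketch: back-project a depth map into a rough point cloud.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """depth: (H, W) depths in meters. Returns (H*W, 3) camera-frame points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```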
  • the imaging device may comprise a camera, a video camera, a three-dimensional (3D) depth camera, a stereo camera, a depth camera, a Red Green Blue Depth (RGB-D) camera, a time-of-flight (TOF) camera, an infrared camera, a charge coupled device (CCD) image sensor, or a complementary metal oxide semiconductor (CMOS) image sensor.
  • the rough 3D model reconstruction method may include generating the three-dimensional model using one or more aspects of passive triangulation.
  • Passive triangulation may involve using stereo-vision methods to generate a three-dimensional model based on a plurality of images obtained using a stereoscopic camera comprising two or more lenses.
  • the 3D model generation method may include generating the three-dimensional model using one or more aspects of active triangulation.
  • Active triangulation may involve using a light source (e.g., a laser source) to project a plurality of optical features (e.g., a laser stripe, one or more laser dots, a laser grid, or a laser pattern) onto one or more intraoral regions of a subject’s mouth.
  • Active triangulation may involve computing and/or generating a three-dimensional representation of the one or more intraoral regions of the subject’s mouth based on a relative position or a relative orientation of each of the projected optical features in relation to one another. Active triangulation may involve computing and/or generating a three-dimensional representation of the one or more intraoral regions of the subject’s mouth based on a relative position or a relative orientation of the projected optical features in relation to the light source or a camera of the mobile device.
  • a deep learning model may be utilized to process the input raw image data and output a 3D mesh model.
  • the deep learning model may include a pose estimation algorithm that can reconstruct a 3D surface model using a single image.
  • the 3D surface model may be reconstructed from multiple images.
  • the pose estimation algorithm can be any type of machine learning network such as a neural network.
  • remote monitoring and dental imaging may refer to monitoring a dental anatomy or a dental condition of a patient and taking images of the dental anatomy at one or more locations remote from the patient or dentist.
  • a dentist or a medical specialist may monitor the dental anatomy or dental condition in a first location that is different than a second location where the patient is located.
  • the first location and the second location may be separated by a distance spanning at least 1 meter, 1 kilometer, 10 kilometers, 100 kilometers, 1000 kilometers, or more.
  • the remote monitoring may be performed by assessing a dental anatomy or a dental condition of the subject using one or more intraoral images captured by the subject when the patient is located remotely from the dentist or a dental office.
  • the remote monitoring may be performed in real time such that a dentist is able to assess the dental anatomy or the dental condition when a subject uses a mobile device to acquire one or more intraoral images of one or more intraoral regions in the patient’s mouth.
  • the remote monitoring and dental imaging may be performed using equipment, hardware, and/or software that is not physically located at a dental office.
  • FIG. 4 shows a computer system 401 that is programmed or otherwise configured to implement a method for dental scanning, a method for training a neural network, a method for designating tooth numbers, or a method for updating a 3D dental model.
  • the method and its implementation can be carried out on one computer, on several computer systems in different locations, or in a cloud computing system.
  • the computer system 401 may be configured to, for example, process intraoral videos or images captured using the camera of the mobile device, and designate a tooth number to a tooth on dental images.
  • the computer system 401 may be configured to, for example, perform a process for training a neural network.
  • the computer system 401 may be configured to update a 3D dental model.
  • the computer system 401 can be an electronic device of a user or a computer system that is remotely located with respect to the electronic device.
  • the electronic device can be a mobile electronic device.
  • the computer system 401 can be a smartphone.
  • the computer system 401 may include a central processing unit (CPU, also "processor” and “computer processor” herein) 405, which can be a single core or multi core processor, or a plurality of processors for parallel processing.
  • the computer system 401 also includes memory or memory location 410 (e.g., random-access memory, read-only memory, flash memory), electronic storage unit 415 (e.g., hard disk, solid-state drive, or equivalent storage unit), communication interface 420 (e.g., network adapter) for communicating with one or more other systems, and peripheral devices 425, such as cache, other memory, data storage and/or electronic display adapters.
  • the memory 410, storage unit 415, interface 420 and peripheral devices 425 are in communication with the CPU 405 through a communication bus (solid lines), such as a motherboard.
  • the storage unit 415 can be a data storage unit (or data repository) for storing data.
  • the computer system 401 can be operatively coupled to a computer network ("network") 430 with the aid of the communication interface 420.
  • the network 430 can be the Internet, an internet and/or extranet, or an intranet and/or extranet that is in communication with the Internet.
  • the network 430 in some cases is a telecommunication and/or data network.
  • the network 430 can include one or more computer servers, which can enable distributed computing, such as cloud computing.
  • the network 430 in some cases with the aid of the computer system 401, can implement a peer-to-peer network, which may enable devices coupled to the computer system 401 to behave as a client or a server.
  • the CPU 405 can execute a sequence of machine-readable instructions, which can be embodied in a program or software.
  • the instructions may be stored in a memory location, such as the memory 410.
  • the instructions can be directed to the CPU 405, which can subsequently program or otherwise configure the CPU 405 to implement methods of the present disclosure. Examples of operations performed by the CPU 405 can include fetch, decode, execute, and writeback.
  • the CPU 405 can be part of a circuit, such as an integrated circuit.
  • One or more other components of the system 401 can be included in the circuit.
  • the circuit is an application specific integrated circuit (ASIC).
  • the storage unit 415 can store files, such as drivers, libraries and saved programs.
  • the storage unit 415 can store user data, e.g., user preferences and user programs.
  • the computer system 401 in some cases can include one or more additional data storage units that are located external to the computer system 401 (e.g., on a remote server that is in communication with the computer system 401 through an intranet or the Internet).
  • the computer system 401 can communicate with one or more remote computer systems through the network 430.
  • the computer system 401 can communicate with a remote computer system of a user (e.g., a subject, a dental user, or a dentist).
  • remote computer systems include personal computers (e.g., portable PCs), slate or tablet PCs (e.g., Apple® iPad, Samsung® Galaxy Tab), telephones, smartphones (e.g., Apple® iPhone, Android-enabled device, Blackberry®), or personal digital assistants.
  • the user can access the computer system 401 via the network 430.
  • Methods as described herein can be implemented by way of machine (e.g., computer processor) executable code stored on an electronic storage location of the computer system 401, such as, for example, on the memory 410 or electronic storage unit 415.
  • the machine executable or machine readable code can be provided in the form of software.
  • the code can be executed by the processor 405.
  • the code can be retrieved from the storage unit 415 and stored on the memory 410 for ready access by the processor 405.
  • the electronic storage unit 415 can be precluded, and machine-executable instructions are stored on memory 410.
  • the code can be pre-compiled and configured for use with a machine having a processor adapted to execute the code, or can be compiled during runtime.
  • the code can be supplied in a programming language that can be selected to enable the code to execute in a pre-compiled or as-compiled fashion.
  • aspects of the systems and methods provided herein can be embodied in programming.
  • Various aspects of the technology may be thought of as “products” or “articles of manufacture” typically in the form of machine (or processor) executable code and/or associated data that is carried on or embodied in a type of machine readable medium.
  • Machine-executable code can be stored on an electronic storage unit, such as memory (e.g., read-only memory, random-access memory, flash memory) or a storage unit.
  • Storage type media can include any or all of the tangible memory of the computers, processors or the like, or associated modules thereof, such as various semiconductor memories, tape drives, disk drives and the like, which may provide non-transitory storage at any time for the software programming. All or portions of the software may at times be communicated through the Internet or various other telecommunication networks. Such communications, for example, may enable loading of the software from one computer or processor into another, for example, from a management server or host computer into the computer platform of an application server.
  • another type of media that may bear the software elements includes optical, electrical and electromagnetic waves, such as those used across physical interfaces between local devices, through wired and optical landline networks and over various air-links.
  • a machine-readable medium, such as computer-executable code, may take many forms, including but not limited to a tangible storage medium, a carrier-wave medium, or a physical transmission medium.
  • Non-volatile storage media including, for example, optical or magnetic disks, or any storage devices in any computer(s) or the like, may be used to implement the databases, etc. shown in the drawings.
  • Volatile storage media include dynamic memory, such as main memory of such a computer platform.
  • Tangible transmission media include coaxial cables, copper wire, and fiber optics, including the wires that comprise a bus within a computer system.
  • Carrier-wave transmission media may take the form of electric or electromagnetic signals, or acoustic or light waves such as those generated during radio frequency (RF) and infrared (IR) data communications.
  • Common forms of computer-readable media therefore include, for example: a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD or DVD-ROM, any other optical medium, punch cards, paper tape, any other physical storage medium with patterns of holes, a RAM, a ROM, a PROM and EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave transporting data or instructions, cables or links transporting such a carrier wave, or any other medium from which a computer may read programming code and/or data.
  • Many of these forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to a processor for execution.
  • the computer system 401 can include or be in communication with an electronic display 435 that comprises a user interface (UI) 440 for providing, for example, a portal for a subject or a dental user to view one or more intraoral images or videos captured using a mobile device of the subject or the dental user.
  • the portal may be provided through an application programming interface (API).
  • a user or entity can also interact with various devices in the portal via the UI. Examples of UIs include, without limitation, a graphical user interface (GUI) and a web-based user interface.
  • the computer system 401 can include or be in communication with a camera 445 for providing, for example, the ability to capture videos or images of the subject or a dental user.
  • the computer system 401 can include or be in communication with a sensor or sensors 450, including, but not limited to, an orientation sensor or a motion sensor, for providing, for example, orientation sensor data or motion sensor data during the dental scan. For example, at least one dental scan datum (such as acceleration) can be retrieved, analyzed, and compared to at least one expected dental scan property (a sketch of such a comparison appears after this list).
  • Methods and systems of the present disclosure can be implemented by way of one or more algorithms.
  • An algorithm can be implemented by way of software upon execution by the central processing unit 405.
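
A minimal, hypothetical Python sketch of the tooth-numbering step referenced in the list above: a trained "visual filter" network takes one 2D dental frame and returns a tooth number. The architecture, class count, and function names below are invented stand-ins for illustration, not the trained network of the present disclosure.

    import torch
    import torch.nn as nn

    NUM_TOOTH_CLASSES = 32  # assumed: universal numbering system, teeth 1-32

    class ToothNumberNet(nn.Module):
        """Stand-in for the trained visual-filter network."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(16, NUM_TOOTH_CLASSES)

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    def designate_tooth_number(model: nn.Module, frame: torch.Tensor) -> int:
        """Run the trained network on one 2D dental image (C, H, W) and
        return the most likely tooth number (1-32)."""
        model.eval()
        with torch.no_grad():
            logits = model(frame.unsqueeze(0))  # add a batch dimension
        return int(logits.argmax(dim=1).item()) + 1  # class 0..31 -> tooth 1..32

    # Example: designate_tooth_number(ToothNumberNet(), torch.rand(3, 224, 224))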
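
Similarly, a hypothetical sketch of the sensor comparison referenced above: acceleration samples recorded during a scan are compared against an expected dental scan property. The threshold and function name are invented for illustration only.

    from statistics import mean

    MAX_MEAN_ACCEL = 2.5  # m/s^2, assumed bound for a steady hand-held scan

    def scan_motion_acceptable(accel_samples: list[float]) -> bool:
        """Compare recorded acceleration against the expected scan envelope."""
        return bool(accel_samples) and mean(abs(a) for a in accel_samples) <= MAX_MEAN_ACCEL

    # Example: scan_motion_acceptable([0.4, 1.1, 0.9]) -> True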

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Epidemiology (AREA)
  • Public Health (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Dentistry (AREA)
  • Primary Health Care (AREA)
  • Veterinary Medicine (AREA)
  • Optics & Photonics (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention relates to a method of updating a three-dimensional (3D) dental model of at least one tooth, comprising: (a) providing at least one 2D dental image comprising the at least one tooth; (b) running a trained visual filter neural network on the 2D dental image to identify the tooth number of the at least one tooth; (c) providing a reference 3D dental model that comprises the at least one tooth; (d) generating a 2D capture of the reference 3D dental model; (e) updating the 2D capture of the 3D dental model to include the identified tooth number obtained from the 2D dental image; and (f) using the updated 2D capture to update the 3D dental model to include the identified tooth number obtained from the 2D dental image.
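
As a rough illustration of steps (c) through (f), the sketch below reduces the 3D dental model and its 2D capture to plain dictionaries and assumes the tooth number was already identified by the trained visual filter network of step (b); every structure and name here is an invented stand-in, not the claimed implementation.

    def update_3d_model(reference_model_3d: dict, tooth_number: int) -> dict:
        # (d) generate a 2D capture (here, a dict snapshot) of the reference 3D model
        capture_2d = {"view": "frontal",
                      "labels": dict(reference_model_3d.get("labels", {}))}
        # (e) update the 2D capture to include the identified tooth number
        capture_2d["labels"][tooth_number] = "tooth %d" % tooth_number
        # (f) use the updated 2D capture to update the 3D dental model itself
        updated = dict(reference_model_3d)
        updated["labels"] = capture_2d["labels"]
        return updated

    # Example: update_3d_model({"labels": {}}, tooth_number=8)
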
EP22850389.2A 2021-07-29 2022-07-29 Modélisation de structures dentaires à partir d'un balayage dentaire Pending EP4377840A2 (fr)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US202163227066P 2021-07-29 2021-07-29
US202263358544P 2022-07-06 2022-07-06
PCT/US2022/038943 WO2023009859A2 (fr) 2021-07-29 2022-07-29 Modélisation de structures dentaires à partir d'un balayage dentaire

Publications (1)

Publication Number Publication Date
EP4377840A2 true EP4377840A2 (fr) 2024-06-05

Family

ID=85088304

Family Applications (1)

Application Number Title Priority Date Filing Date
EP22850389.2A Pending EP4377840A2 (fr) 2021-07-29 2022-07-29 Modélisation de structures dentaires à partir d'un balayage dentaire

Country Status (3)

Country Link
US (1) US20240164874A1 (fr)
EP (1) EP4377840A2 (fr)
WO (1) WO2023009859A2 (fr)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11464467B2 (en) * 2018-10-30 2022-10-11 Dgnct Llc Automated tooth localization, enumeration, and diagnostic system and method
EP4185993A1 (fr) * 2020-07-21 2023-05-31 Get-Grin Inc. Systèmes et procédés de modélisation de structures dentaires

Also Published As

Publication number Publication date
US20240164874A1 (en) 2024-05-23
WO2023009859A2 (fr) 2023-02-02
WO2023009859A3 (fr) 2023-03-30

Similar Documents

Publication Publication Date Title
US11232573B2 (en) Artificially intelligent systems to manage virtual dental models using dental images
US11735306B2 (en) Method, system and computer readable storage media for creating three-dimensional dental restorations from two dimensional sketches
US20210118132A1 (en) Artificial Intelligence System For Orthodontic Measurement, Treatment Planning, And Risk Assessment
US9191648B2 (en) Hybrid stitching
US9418474B2 (en) Three-dimensional model refinement
US20230149135A1 (en) Systems and methods for modeling dental structures
US11991439B2 (en) Systems, apparatus, and methods for remote orthodontic treatment
US11250580B2 (en) Method, system and computer readable storage media for registering intraoral measurements
US20230225832A1 (en) Photo-based dental attachment detection
US20210267716A1 (en) Method for simulating a dental situation
US20220378548A1 (en) Method for generating a dental image
US20240164874A1 (en) Modeling dental structures from dental scan
Wirtz et al. Automatic model-based 3-D reconstruction of the teeth from five photographs with predefined viewing directions
US20240122463A1 (en) Image quality assessment and multi mode dynamic camera for dental images
US20240164875A1 (en) Method and system for presenting dental scan
WO2023203385A1 (fr) Systèmes, procédés et dispositifs d'analyse statique et dynamique faciale et orale
WO2024138003A1 (fr) Systèmes et procédés de présentation de balayages dentaires
WO2024121067A1 (fr) Procédé et système d'alignement de représentations 3d

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20240215

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR