WO2021157966A1 - Method for providing information on orthodontic treatment using a deep learning artificial intelligence algorithm, and device using the same - Google Patents
- Publication number
- WO2021157966A1 (PCT/KR2021/001239)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- region
- surgical treatment
- tooth
- treatment
- orthodontic
- Prior art date
Classifications
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/4547—Evaluating teeth
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7275—Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
- A61B6/51—Apparatus or devices for radiation diagnosis specially adapted for dentistry
- A61C7/00—Orthodontics, i.e. obtaining or maintaining the desired position of teeth, e.g. by straightening, evening, regulating, separating, or by correcting malocclusions
- A61C7/002—Orthodontic computer assisted systems
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for computer-aided diagnosis, e.g. based on medical expert systems
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/102—Modelling of surgical devices, implants or prosthesis
- A61B2034/105—Modelling of the patient, e.g. for ligaments or bones
Definitions
- A state in which the dentition is not aligned and the occlusion of the upper and lower teeth is abnormal is called malocclusion.
- Such malocclusion may cause functional problems such as difficulties with mastication and pronunciation, aesthetic problems of the face, and health problems such as tooth decay and gum disease.
- In orthodontic treatment, determining the treatment plan may be an important part.
- In particular, the determination of whether to extract teeth and, furthermore, the treatment plan establishment step of determining which teeth to extract may be very important.
- a poor treatment plan may cause problems such as failure of anchorage control, abnormal inclination of the anterior teeth, improper occlusion, inadequate horizontal overjet and overbite of the teeth, and difficulty in closing the extraction space.
- the inventors of the present invention paid attention to artificial intelligence technology, which is being used in various fields such as classification, face recognition, and character recognition, as a way to solve the above problems.
- the inventors of the present invention could expect to overcome the limitations of the conventional approach, which derives measurement values by marking specific landmarks on the lateral head standard radiographic image and by performing model analysis on the oral model.
- predicting the lateral facial region may include predicting the coordinates of the lateral facial region in the lateral head medical image by using the lateral facial region prediction model.
- predicting the lateral facial region or the tooth region may include predicting the tooth region using a tooth region prediction model configured to predict the tooth region in the intraoral image.
- the step of classifying the surgical treatment or the non-surgical treatment may include classifying the necessary treatment for the subject as a surgical treatment or a non-surgical treatment based on the tooth area.
- the step of classifying the surgical treatment or the non-surgical treatment may include classifying the treatment necessary for the subject by using an orthodontic classification model configured to classify the necessary treatment as a surgical treatment or a non-surgical treatment based on the lateral facial region or the tooth region.
- the orthodontic classification model may be further configured to classify a necessary treatment into a surgical treatment or a non-surgical treatment based on the lateral facial region and the tooth region.
- the orthodontic classification model may include two independent feature extraction layers configured to extract features, and a fusion layer that integrates the two features extracted from the two feature extraction layers.
- the method may further include, after the step of predicting the lateral facial region or the tooth region, generating a box including the lateral facial region or the tooth region.
- the non-surgical treatment may be at least one of non-extraction correction, upper and lower first premolar extraction correction, upper and lower second premolar extraction correction, maxillary first premolar extraction correction, and maxillary first premolar and mandibular second premolar extraction correction.
- the method may further include receiving a selection input for a sample lateral facial region or a sample tooth region within the image, and generating a lateral head medical image for training based on the coordinates of the sample lateral facial region or the coordinates of the sample tooth region and the sample lateral head medical image.
- the method may further include determining a treatment plan for the subject.
- the device includes a receiving unit configured to receive a lateral head medical image or an intraoral image of a subject, and a processor connected to the receiving unit.
- the processor is configured to predict a lateral facial region or a tooth region in the lateral head medical image or the intraoral image, and to classify the surgical treatment or non-surgical treatment required for the subject based on the lateral facial region or the tooth region.
- the processor may be further configured to predict the lateral facial region using a lateral facial region prediction model configured to predict the lateral facial region within the lateral head medical image, and to classify the treatment necessary for the subject as a surgical treatment or a non-surgical treatment based on the lateral facial region.
- the processor may be configured to predict a tooth region using a tooth region prediction model configured to predict a tooth region in an intraoral image, and to classify the treatment necessary for the subject as a surgical treatment or a non-surgical treatment based on the tooth region.
- the present invention provides an information providing system for orthodontics that uses prediction models configured to predict the lateral facial region and the tooth region in lateral head medical images and/or intraoral images, and further a classification model configured to classify the necessary treatment, and thus has the effect of providing information for the orthodontic analysis of the subject.
- the present invention can provide information on orthodontic treatment by using predictive models to classify and evaluate the necessary treatment from medical images alone, without the process of extracting measurement values from clinical data, which is an essential step in conventional orthodontic analysis programs.
- the present invention overcomes the limitations of the conventional orthodontic system, which derives measurement values by marking specific landmarks on the lateral head standard radiographic image and by performing model analysis on the oral model, and performs diagnosis based on these values.
- compared with the prior art, which uses all of a radiographic image, an intraoral image, an extraoral image, and an oral model, the present invention can implement a system that provides information on orthodontics with only a lateral head medical image and/or an intraoral image.
- the present invention has an effect of overcoming the limitations of the conventional orthodontic information providing system that is dependent on the knowledge and experience of the medical staff based on clinical data.
- FIG. 1B exemplarily shows the configuration of a device for providing information on orthodontics according to an embodiment of the present invention.
- FIG. 2 shows a procedure of a method for providing information about orthodontics according to an embodiment of the present invention.
- FIG. 3B exemplarily illustrates a procedure of predicting a tooth region based on a tooth region prediction model in a method for providing information on orthodontics according to an embodiment of the present invention.
- FIG. 5 exemplarily illustrates a pre-processing procedure of a lateral facial region and a tooth region in a method for providing orthodontic information according to an embodiment of the present invention.
- the term “lateral head medical image” may refer to all images including a side view of a subject received from a medical imaging apparatus.
- the lateral head medical image disclosed herein may be a lateral head standard radiographic image, but is not limited thereto.
- the lateral head medical image may be a two-dimensional image, a three-dimensional image, a still image of one cut, or a moving image composed of a plurality of cuts.
- when the lateral head medical image is a moving picture composed of a plurality of cuts, the lateral facial region for each of the plurality of lateral head medical images may be predicted according to the orthodontic information providing method according to an embodiment of the present invention.
- the lateral head medical image may include a lateral facial region, that is, a region containing the side of the subject's face.
- the intraoral image may include a tooth area for the incisors, canines, premolars, and molars of the upper jaw and/or a tooth area for at least one of the incisors, canines, premolars, and molars of the lower teeth.
- the tooth regions may be regions for a plurality of teeth among the incisors, canines, premolars, and molars of the upper teeth and/or the incisors, canines, premolars, and molars of the lower teeth.
- the arrangement of the tooth region may reflect the characteristics of the examinee's dentition.
- the term "lateral facial region prediction model" may refer to a model configured to predict, within a lateral head medical image, the lateral facial region, which is a target region to be measured for orthodontic treatment.
- the lateral facial region prediction model may be a Faster R-CNN trained to predict the lateral facial region within the lateral head medical image.
- the lateral facial region prediction model receives a lateral head medical image for training in which the coordinates of the lateral facial region are predetermined, and sets the lateral facial region as a region of interest in the lateral head medical image for training based on the coordinates of the lateral facial region. It can be a model trained to predict.
- the lateral facial region prediction model is not limited thereto and may be based on more various image segmentation algorithms.
- the lateral facial region may have different pixel values and textures from other regions, for example, the background region. Accordingly, the lateral facial region prediction model may predict the lateral facial region based on a pixel value or a texture.
- the term "tooth region prediction model" may refer to a model configured to predict, within an intraoral image, a plurality of tooth regions of the subject corresponding to the upper and lower teeth, which are target regions to be measured for orthodontic treatment.
- the tooth region prediction model may be a Faster R-CNN trained to predict a tooth region within an intraoral image. More specifically, the tooth region prediction model may be a model that receives an intraoral image for training in which the coordinates of the regions of the incisors, canines, premolars, and molars of the upper and lower teeth are predetermined, and that is trained to predict the tooth regions as regions of interest in the intraoral image for training based on the predetermined coordinates.
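Training a region predictor against predetermined box coordinates, as described above, is commonly evaluated with the intersection-over-union (IoU) between a predicted box and the ground-truth box. The following is a general illustration of that metric, not code from the patent; the boxes are hypothetical:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (y0, x0, y1, x1);
    a common measure of how well a predicted region matches the
    predetermined ground-truth coordinates."""
    y0 = max(box_a[0], box_b[0])
    x0 = max(box_a[1], box_b[1])
    y1 = min(box_a[2], box_b[2])
    x1 = min(box_a[3], box_b[3])
    inter = max(0, y1 - y0) * max(0, x1 - x0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # 1.0
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.143
```

In detection training, boxes whose IoU with a ground-truth tooth box exceeds a threshold are typically treated as positive region proposals.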
- the tooth region prediction model is not limited thereto and may be based on more various image segmentation algorithms.
- the tooth region may have different pixel values and textures from other regions, for example, background regions such as lips and tongue. Accordingly, the tooth region prediction model may predict the tooth region based on a pixel value or texture.
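The idea that a region can be separated from its background by pixel values can be sketched with a simple intensity threshold. This is only an illustration of the principle, not the learned prediction model described above; the array and threshold are made-up placeholders:

```python
import numpy as np

def bounding_box_by_intensity(image, threshold):
    """Return (y0, x0, y1, x1) of the smallest box enclosing all
    pixels brighter than `threshold`; a crude stand-in for learned
    region prediction based on pixel values."""
    ys, xs = np.where(image > threshold)
    if ys.size == 0:
        return None  # nothing above threshold
    return int(ys.min()), int(xs.min()), int(ys.max()) + 1, int(xs.max()) + 1

# Hypothetical 6x6 "image": bright teeth on a dark background.
img = np.zeros((6, 6))
img[2:4, 1:5] = 200.0
print(bounding_box_by_intensity(img, 100))  # (2, 1, 4, 5)
```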
- the term “surgical treatment” refers to surgical orthodontic treatment, and may refer to orthognathic surgery. In this case, the surgical treatment may be determined based on the arrangement of the lateral facial region and/or the tooth region of the subject.
- the surgical treatment may be at least one of orthodontic class 2 extraction surgery, orthodontic class 2 non-extraction surgery, orthodontic class 3 extraction surgery, and orthodontic class 3 non-extraction surgery.
- non-surgical treatment may refer to non-surgical orthodontic correction, for example, correction using a bracket.
- the non-surgical treatment may be determined based on the arrangement of the lateral facial region and/or the tooth region of the subject.
- the non-surgical treatment may include at least one of non-extraction correction, upper and lower first premolar extraction correction, upper and lower second premolar extraction correction, maxillary first premolar extraction correction, and maxillary first premolar and mandibular second premolar extraction correction.
- the term “orthodontic classification model” may be a model configured to classify surgical or non-surgical treatment required for a subject based on a parafacial region and/or a tooth region.
- the orthodontic classification model may be a model trained to probabilistically provide, from the images of the lateral facial region and the tooth region, a diagnostic result of at least one surgical treatment among orthodontic class 2 extraction surgery, orthodontic class 2 non-extraction surgery, orthodontic class 3 extraction surgery, and orthodontic class 3 non-extraction surgery required for the subject.
- the orthodontic classification model may be a model trained to probabilistically provide, from the images of the lateral facial region and the tooth region, a diagnostic result of at least one non-surgical treatment among non-extraction correction, upper and lower first premolar extraction correction, upper and lower second premolar extraction correction, maxillary first premolar extraction correction, and maxillary first premolar and mandibular second premolar extraction correction.
- the orthodontic classification model may be formed of two feature extraction layers configured to extract features for each of the input lateral facial region and tooth region, and a fusion layer that integrates the two features extracted from the two feature extraction layers.
- the orthodontic classification model may be a Two-Stream Convolutional Neural Network (CNN), but is not limited thereto.
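The two-stream structure above, in which independent feature extractors feed a fusion layer before classification, can be sketched in a few lines of NumPy. This is a toy stand-in rather than the patent's CNN: the linear "streams", the random weights, and the 16/8/5 dimensions are all hypothetical placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(image_vec, weights):
    """One hypothetical feature-extraction stream: a single linear
    layer with ReLU, standing in for a CNN branch."""
    return np.maximum(weights @ image_vec, 0.0)

def two_stream_classify(facial_vec, tooth_vec, w_face, w_tooth, w_head):
    # Two independent streams extract features from each input.
    f_face = extract_features(facial_vec, w_face)
    f_tooth = extract_features(tooth_vec, w_tooth)
    # Fusion layer: concatenate the two feature vectors.
    fused = np.concatenate([f_face, f_tooth])
    # Classification head: softmax over treatment classes.
    logits = w_head @ fused
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Hypothetical sizes: 16-dim inputs, 8-dim features, 5 treatment classes.
w_face, w_tooth = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
w_head = rng.normal(size=(5, 16))
probs = two_stream_classify(rng.normal(size=16), rng.normal(size=16),
                            w_face, w_tooth, w_head)
print(probs.shape, round(float(probs.sum()), 6))  # (5,) 1.0
```

The softmax output matches the "probabilistic diagnostic result" described above: one probability per candidate treatment class.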
- the orthodontic classification model may include a first orthodontic classification model configured to classify a surgical or non-surgical treatment based on a parafacial region and a first orthodontic classification model configured to classify a surgical or non-surgical treatment based on a tooth region. It may be an ensemble model in which two independent models of the second orthodontic classification model are combined.
- for training the orthodontic classification model, hyperparameter values of an initial learning rate of 0.01, a momentum of 0.9, a weight decay of 0.0005, and a dropout of 0.5 may be set.
- however, the hyperparameter values input for training are not limited thereto.
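The learning rate, momentum, and weight decay values above correspond to a standard SGD-with-momentum setup. A minimal sketch of one such update step follows, assuming a toy scalar parameter; the gradient value is made up for illustration:

```python
LEARNING_RATE = 0.01   # initial learning rate from the text
MOMENTUM = 0.9
WEIGHT_DECAY = 0.0005
DROPOUT = 0.5          # probability of dropping a unit during training

def sgd_step(param, grad, velocity):
    """One SGD-with-momentum update including L2 weight decay."""
    grad = grad + WEIGHT_DECAY * param           # weight decay term
    velocity = MOMENTUM * velocity - LEARNING_RATE * grad
    return param + velocity, velocity

param, velocity = 1.0, 0.0
param, velocity = sgd_step(param, grad=2.0, velocity=velocity)
print(round(param, 6))  # 0.979995
```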
- FIG. 1A illustrates an orthodontic analysis system using a device for providing information on orthodontics according to an embodiment of the present invention.
- FIG. 1B exemplarily shows the configuration of a device for providing information on orthodontics according to an embodiment of the present invention.
- a lateral head medical image 210 may be acquired from a lateral head measurement radiography apparatus, and an intraoral image 220 of a subject may be acquired from the intraoral imaging apparatus.
- the acquired medical image 200, i.e., the lateral head medical image 210 and/or the intraoral image 220, is received by the device 100 for providing orthodontic information according to an embodiment of the present invention.
- the device 100 for providing information on orthodontics predicts a lateral facial region and/or a tooth region in the received lateral head medical image 210 and/or intraoral image 220, and determines and provides the surgical or non-surgical treatment necessary for the subject based on the predicted region.
- the device 100 for providing information on orthodontics includes a receiving unit 110, an input unit 120, an output unit 130, a storage unit 140, and a processor 150.
- the receiver 110 may be configured to receive a lateral head medical image of the subject from the lateral head measurement radiography apparatus or to receive an intraoral image of the subject from the intraoral imaging apparatus.
- the lateral head medical image 210 obtained by the receiver 110 may be a lateral head standard radiographic image
- the intraoral image 220 may be an RGB color image, but is not limited thereto.
- the receiving unit 110 may be further configured to transmit the acquired lateral head medical image 210 and/or intraoral image 220 to the processor 150 to be described later.
- the lateral head medical image 210 obtained through the receiver 110 may include a lateral facial region
- the intraoral image 220 may include a plurality of tooth regions.
- the input unit 120 may receive user settings for the device 100 for providing information on orthodontics. Furthermore, through the input unit 120, the user may directly select a lateral facial region and a tooth region with respect to each of the lateral head medical image 210 and the intraoral image 220.
- the input unit 120 may be a keyboard, a mouse, or a touch screen panel, but is not limited thereto.
- the output unit 130 may visually display the lateral head medical image 210 and/or the intraoral image 220 obtained from the receiving unit 110. Furthermore, the output unit 130 may be configured to display the lateral facial region determined by the processor 150 in the lateral head medical image 210, and/or the plurality of tooth regions determined in the intraoral image 220. Furthermore, the output unit 130 may be configured to display information about a diagnosis necessary for the subject determined by the processor 150. However, the present invention is not limited thereto, and the output unit 130 may be configured to display more various information determined by the processor 150 for orthodontic treatment of the examinee.
- the storage unit 140 may store the medical image 200 of the subject acquired through the receiving unit 110, and may store the instructions of the device 100 for providing orthodontic information set through the input unit 120. Furthermore, the storage unit 140 is configured to store the results predicted by the processor 150, which will be described later. However, it is not limited thereto, and the storage unit 140 may store more various pieces of information determined by the processor 150 for orthodontic treatment of the examinee.
- the processor 150 may be a component for providing an accurate prediction result to the device 100 for providing orthodontic information.
- the processor 150 may be configured to predict the lateral facial region in the lateral head medical image 210 or to predict a plurality of tooth regions within the intraoral image 220, and to classify and provide diagnostic results according to the dental condition of the examinee based on the predicted lateral facial region and/or the plurality of tooth regions.
- the processor 150 may be configured to use a prediction model trained to predict the lateral facial region in the lateral head medical image 210 of the subject obtained from the receiving unit 110, and a prediction model trained to predict the tooth region in the intraoral image 220. Furthermore, the processor 150 may be configured to use a classification model that classifies and provides a diagnosis result according to the dental condition of the subject based on the lateral facial region and/or the tooth region predicted by the prediction models. In this case, the models trained to predict the lateral facial region and the tooth region may each be based on a Faster R-CNN, and the orthodontic diagnosis classification model may be based on a Two-Stream CNN, but they are not limited thereto.
- the prediction models and classification models used in various embodiments of the present invention may be configured to predict the region of interest, or to classify the appropriate treatment for the subject, based on a Deep Neural Network (DNN), a Deep Convolutional Neural Network (DCNN), a Recurrent Neural Network (RNN), a Restricted Boltzmann Machine (RBM), a Deep Belief Network (DBN), a Single Shot Detector (SSD) model, a Support Vector Machine (SVM), or U-Net.
- when the lateral head medical image and/or the intraoral image received by the receiving unit 110 is an RGB color image, the device for providing information on orthodontics may further include a data pre-processing unit configured to convert the medical image to a black-and-white image and to vectorize the black-and-white image.
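The pre-processing described above (RGB to black-and-white conversion followed by vectorization) can be sketched as follows. The luminance weights are a conventional choice, not values specified in the patent, and the 2x2 image is a hypothetical placeholder:

```python
import numpy as np

def preprocess(rgb_image):
    """Convert an RGB image to black-and-white (grayscale) and
    flatten it into a 1-D vector, as the pre-processing unit might."""
    # Luminance-style weighted average of the three channels.
    gray = (0.299 * rgb_image[..., 0]
            + 0.587 * rgb_image[..., 1]
            + 0.114 * rgb_image[..., 2])
    return gray.reshape(-1)  # vectorize

# Hypothetical 2x2 RGB image with all channels equal to 100.
img = np.ones((2, 2, 3)) * 100.0
vec = preprocess(img)
print(vec.shape, round(float(vec[0]), 6))  # (4,) 100.0
```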
- FIG. 2 shows a procedure of a method for providing information about orthodontics according to an embodiment of the present invention.
- FIG. 3A exemplarily illustrates a procedure of predicting a lateral facial region based on a lateral facial region prediction model in a method for providing information on orthodontics according to an embodiment of the present invention.
- FIG. 3B exemplarily illustrates a procedure of predicting a tooth region based on a tooth region prediction model in a method for providing information on orthodontics according to an embodiment of the present invention.
- FIG. 4 exemplarily illustrates a procedure of treatment classification based on an orthodontic classification model in a method for providing information on orthodontics according to an embodiment of the present invention.
- FIG. 5 exemplarily illustrates a pre-processing procedure of a lateral facial region and a tooth region in a method for providing orthodontic information according to an embodiment of the present invention.
- a treatment decision procedure is as follows. First, a lateral head medical image or an intraoral image of the subject is received (S210). Thereafter, the lateral facial region is predicted with respect to the lateral head medical image, or the tooth region is predicted within the intraoral image (S220). Next, a surgical treatment or a non-surgical treatment necessary for the subject is determined based on the lateral facial region and/or the tooth region (S230). Finally, the predicted result is provided (S240).
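The four steps S210 to S240 can be sketched as a simple pipeline of functions. Every function body here is a stub with made-up return values, standing in for the real apparatus, prediction models, and classifier; only the overall flow reflects the procedure above:

```python
def receive_images():
    """S210: stand-in for receiving the medical images (dummy names)."""
    return {"lateral": "ceph.png", "intraoral": "mouth.png"}

def predict_regions(images):
    """S220: stand-in for the region prediction models, returning
    hypothetical bounding boxes instead of real model output."""
    return {"facial_box": (10, 20, 110, 140), "tooth_boxes": [(5, 5, 25, 25)]}

def classify_treatment(regions):
    """S230: stand-in for the orthodontic classification model."""
    return "non-surgical"

def provide_result(treatment):
    """S240: return the predicted result to the caller."""
    return f"predicted treatment: {treatment}"

images = receive_images()                 # S210
regions = predict_regions(images)         # S220
treatment = classify_treatment(regions)   # S230
print(provide_result(treatment))          # predicted treatment: non-surgical
```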
- a lateral head medical image of the subject is received from the lateral head measurement radiography apparatus, or an intraoral image of the subject is received from the intraoral imaging apparatus.
- the lateral head medical image and the intraoral image may be received together.
- the lateral head medical image may be a lateral head standard radiographic image
- the intraoral image may be an RGB color image, but is not limited thereto.
- the lateral head medical image and the intraoral image may be received after being pre-processed to have a constant pixel size so that they can be quickly analyzed.
- in the step S220 of predicting the lateral facial region with respect to the lateral head medical image or predicting the tooth region in the intraoral image, the region of interest for each medical image may be predicted.
- in the step S220 of predicting the lateral facial region with respect to the lateral head medical image or predicting the tooth region within the intraoral image, the lateral head medical image 210 obtained in the above-described step S210 of receiving the lateral head medical image or the intraoral image is input to the lateral facial region prediction model 310.
- The lateral facial region 312, which is a region of interest corresponding to the lateral facial part in the lateral head medical image 210, is determined by the lateral facial region prediction model 310.
- the lateral facial region prediction model 310 may be further configured to form a box surrounding the lateral facial region 312 .
- the lateral facial region prediction model 310 may be configured to predict the lateral facial region 312 based on the coordinates for the lateral facial region 312 (or box) in the lateral head medical image 210, but is not limited thereto.
- The lateral facial region 312 predicted as a result of step S220 of predicting the lateral facial region with respect to the lateral head medical image or predicting the tooth region within the intraoral image may be cropped to include only the corresponding region.
- Meanwhile, the intraoral image 220 obtained in step S210 of receiving the lateral head medical image or the intraoral image is input to the tooth region prediction model 320.
- a plurality of tooth regions 322 that are regions of interest corresponding to each tooth in the intraoral image 220 are determined by the tooth region prediction model 320 .
- the tooth region prediction model 320 may be configured to predict the regions of all teeth appearing in the intraoral image 220, rather than the region of a single specified tooth.
- the arrangement of the plurality of tooth regions 322 may represent the characteristics of the examinee's teeth.
- the tooth region prediction model 320 may be further configured to form a box surrounding the tooth region 322 .
- the tooth region prediction model 320 may be configured to predict the plurality of tooth regions 322 based on the coordinates for the regions (or boxes) of the upper and lower incisors, canines, premolars, and molars in the intraoral image 220, but is not limited thereto.
- The plurality of tooth regions 322 predicted as a result of step S220 of predicting the lateral facial region with respect to the lateral head medical image or predicting the tooth region within the intraoral image may be cropped to include only the corresponding regions.
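Cropping a predicted region so that only the box contents remain can be sketched as follows. The (top, left, width, height) convention matches the box position values described for the annotation files in this document; the helper name is hypothetical.

```python
import numpy as np

def crop_region(image, box):
    """Crop a predicted region of interest given a (top, left, width, height) box."""
    top, left, width, height = box
    return image[top:top + height, left:left + width]

# A synthetic 400x600 intraoral image; values encode pixel position for clarity.
intraoral = np.arange(400 * 600).reshape(400, 600)
tooth_crop = crop_region(intraoral, (50, 100, 40, 60))  # one hypothetical tooth box
```

Each predicted tooth box would be cropped this way, yielding one small image per tooth region for the downstream classifier.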
- In step S230 of determining a surgical treatment or a non-surgical treatment based on the lateral facial region and/or the plurality of tooth regions, a diagnostic result of surgical treatment or non-surgical treatment according to the dental status of the subject may be determined.
- the surgical treatment diagnosis probability or the non-surgical treatment diagnosis probability may be determined.
- the diagnosis probability of orthognathic Class II extraction surgery, the diagnosis probability of orthognathic Class II non-extraction surgery, the diagnosis probability of orthognathic Class III extraction surgery, or the diagnosis probability of orthognathic Class III non-extraction surgery may be determined.
- the diagnosis probability of non-extraction orthodontics, the diagnosis probability of maxillary and mandibular first premolar extraction orthodontics, the diagnosis probability of maxillary and mandibular second premolar extraction orthodontics, the diagnosis probability of maxillary first premolar extraction orthodontics, or the diagnosis probability of maxillary first premolar and mandibular second premolar extraction orthodontics may be determined.
- The higher the diagnosis probability of a treatment, the higher the success rate of orthodontic treatment when that treatment is applied to the subject.
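A minimal sketch of how a recommendation could follow from such probabilities: the treatment with the highest diagnosis probability is the one expected to have the highest success rate. All probability values here are hypothetical illustrations.

```python
# Hypothetical diagnosis probabilities for the four surgical options.
surgical_probs = {
    "orthognathic Class II extraction surgery": 0.95,
    "orthognathic Class II non-extraction surgery": 0.02,
    "orthognathic Class III extraction surgery": 0.02,
    "orthognathic Class III non-extraction surgery": 0.01,
}

# Recommend the treatment with the highest diagnosis probability.
recommended = max(surgical_probs, key=surgical_probs.get)
```

The same selection would apply to the non-surgical options (non-extraction and the various premolar-extraction orthodontics).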
- the treatment required for the subject may be determined by the orthodontic classification model.
- The lateral facial region 312 and the plurality of tooth regions 322 predicted as a result of step S220 of predicting the lateral facial region with respect to the lateral head medical image or predicting the tooth region within the intraoral image may be input to the orthodontic classification model 330.
- The orthodontic classification model 330 may consist of two independent feature extraction layers 332 configured to extract features for each of the lateral facial region 312 and the plurality of tooth regions 322, and a fusion layer 334 configured to integrate these features and finally determine the necessary treatment.
- Here, the feature extraction layers 332 may correspond to a Two-Stream CNN (Convolutional Neural Network), and the fusion layer 334 may correspond to a fusion and fully connected (FC) layer, but they are not limited thereto.
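The two-stream idea above, two independent feature extractors whose outputs are fused and passed through a fully connected layer, can be sketched minimally in numpy. The dense layers below stand in for real CNN streams, and all dimensions, weights, and the four-class output are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(x, w):
    """Stand-in for one CNN stream: a single dense layer with ReLU.
    A real implementation would use convolutional layers."""
    return np.maximum(x @ w, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

facial_vec = rng.random(64)   # flattened lateral-facial-region features (assumed size)
tooth_vec = rng.random(64)    # flattened tooth-region features (assumed size)
W_face, W_tooth = rng.random((64, 16)), rng.random((64, 16))
W_fc = rng.random((32, 4))    # FC layer mapping fused features to 4 diagnosis classes

# Two independent feature extraction streams, then fusion + fully connected layer.
fused = np.concatenate([extract_features(facial_vec, W_face),
                        extract_features(tooth_vec, W_tooth)])
probs = softmax(fused @ W_fc)  # diagnosis probabilities over the treatment classes
```

With trained weights, `probs` would play the role of the diagnosis result 342: one probability per candidate treatment.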
- Alternatively, the orthodontic classification model 330 may be an ensemble model in which two independent models are combined: a first orthodontic classification model configured to classify surgical or non-surgical treatment based on the lateral facial region 312, and a second orthodontic classification model configured to classify surgical or non-surgical treatment based on the plurality of tooth regions 322.
- Features are extracted from each of the input lateral facial region 312 and the plurality of tooth regions 322 through the feature extraction layers 332, the features are integrated via the fusion layer 334, and finally a diagnostic result 342 of an appropriate surgical treatment or non-surgical treatment is determined.
- The diagnosis result 342 may include, for the subject, at least one of the diagnosis probability of orthognathic Class II extraction surgery, the diagnosis probability of orthognathic Class II non-extraction surgery, the diagnosis probability of orthognathic Class III extraction surgery, and the diagnosis probability of orthognathic Class III non-extraction surgery.
- The diagnosis result 342 may also include at least one of the diagnosis probability of non-extraction orthodontics, the diagnosis probability of maxillary and mandibular first premolar extraction orthodontics, the diagnosis probability of maxillary and mandibular second premolar extraction orthodontics, the diagnosis probability of maxillary first premolar extraction orthodontics, and the diagnosis probability of maxillary first premolar and mandibular second premolar extraction orthodontics.
- For example, the diagnosis probability for orthognathic Class II extraction surgery may appear as 95%. This may mean that, when Class II extraction surgery is performed on the subject, the success rate of orthodontic treatment may be higher than with the other surgical treatments.
- the medical treatment according to the dental condition of the examinee can be determined probabilistically.
- When each medical image including the lateral facial region and/or the tooth region is an RGB color image, a step of pre-processing these medical images may be further performed.
- To this end, the RGB color lateral facial region 312 and tooth region 322 are converted to black and white, and the black-and-white converted lateral facial region or tooth region may be vectorized.
- each of the lateral facial region 312 and the tooth region 322 converted into a black-and-white image may be vectorized in the direction of the pixel having the largest brightness difference value with respect to a plurality of pixels.
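The black-and-white conversion and brightness-difference vectorization can be sketched as follows. The luminance weighting and the finite-difference gradient are assumptions; the text does not specify the exact operators, only that each pixel is vectorized toward the largest brightness difference.

```python
import numpy as np

def to_grayscale(rgb):
    """RGB -> black-and-white via standard luminance weights
    (an assumption; the exact conversion is not specified in the text)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def brightness_vectors(gray):
    """Per-pixel vector pointing toward the largest brightness difference,
    approximated here with finite-difference image gradients."""
    gy, gx = np.gradient(gray.astype(float))   # gradients along rows, then columns
    return np.stack([gx, gy], axis=-1)         # one (dx, dy) vector per pixel

region = np.random.rand(32, 32, 3)             # a hypothetical RGB region crop
vectors = brightness_vectors(to_grayscale(region))
```

The resulting per-pixel vectors are a compact representation that, as the text notes, can speed up downstream processing compared with raw RGB data.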
- the processing speed can be improved.
- In step S240, predicted result information on the surgical treatment or the non-surgical treatment determined as a result of step S230 may be provided.
- For example, the surgical treatment 'Class II extraction surgery orthodontics (95% diagnosis probability)', determined according to the tooth condition based on the subject's lateral facial region 312 and tooth region 322, may be printed out and provided to the medical staff.
- the medical staff can be provided with information about the subject's orthodontics, so that they can make a decision and establish a treatment plan with a high probability of success.
- FIGS. 6A to 6D are diagrams illustrating a process of generating image data for training of predictive models used in a method for providing orthodontic information and a device for providing orthodontic information using the same according to an embodiment of the present invention.
- To train the lateral facial region prediction model, a plurality of sample lateral head standard radiographic images are prepared. In each of the plurality of lateral head standard radiographic images, the lateral facial region is marked as a rectangular box, and its coordinates are then designated. Then, a json file including location information on the sample lateral facial region (box) formed for each of the plurality of lateral head standard radiographic images is prepared. More specifically, the json file may include a name for each of the plurality of sample lateral head standard radiographic image files, and the position values (top, left, width, and height) of the box formed for the sample lateral facial region within each image. As a result, lateral head medical images for training the lateral facial region prediction model, containing location information on the sample lateral facial regions, may be generated.
- The lateral facial region prediction model may be a Faster R-CNN-based AI model capable of detecting a region of interest in a lateral head medical image for training, but is not limited thereto.
- Likewise, to train the tooth region prediction model, a plurality of sample intraoral images are prepared. In each of the plurality of sample intraoral images, a box surrounding the area of each of the plurality of teeth (incisors, canines, premolars, and molars) is formed, and coordinates for each box are then designated. Then, a json file including location information for the plurality of sample tooth regions (boxes) formed for each of the plurality of intraoral images is prepared.
- the json file may include a name for each of a plurality of intraoral image files, and position values (top, left, width and height) of a box formed for each of a plurality of sampled tooth regions within a plurality of intraoral images.
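The json annotation structure described above might look like the following; the file name, tooth labels, and coordinate values are hypothetical examples, while the (top, left, width, height) keys mirror the position values named in the text.

```python
import json

# Hypothetical annotation entries: one key per sample image file name, with the
# (top, left, width, height) of every labelled tooth box.
annotations = {
    "sample_intraoral_001.png": [
        {"tooth": "incisor", "top": 120, "left": 85,  "width": 40, "height": 55},
        {"tooth": "canine",  "top": 118, "left": 130, "width": 38, "height": 52},
    ],
}

# Serialize as the training json file, then read it back.
serialized = json.dumps(annotations, indent=2)
restored = json.loads(serialized)
```

An xml file with the same fields (as mentioned in the claims) would carry identical information in a different serialization.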
- By adopting the above learning procedure, the predictive models used in the method for providing orthodontic information and the device for providing orthodontic information using the same can predict the region of interest of the lateral facial region or tooth region in a medical image with high accuracy.
- the lateral facial region prediction model and the tooth region prediction model are not limited to the above-described ones and may be learned in more various ways.
Abstract
Description
Claims (21)
- A method for providing information on orthodontics, the method comprising: receiving a lateral head medical image or an intraoral image of a subject; predicting, within the lateral head medical image or the intraoral image, a lateral facial region including a lateral facial part, or a tooth region; and classifying, based on the lateral facial region or the tooth region, a surgical treatment or a non-surgical treatment necessary for the subject.
- The method of claim 1, wherein predicting the lateral facial region or the tooth region comprises predicting the lateral facial region using a lateral facial region prediction model configured to predict a lateral facial region within a lateral head medical image, and wherein classifying into the surgical treatment or the non-surgical treatment comprises classifying the treatment necessary for the subject into a surgical treatment or a non-surgical treatment based on the lateral facial region.
- The method of claim 2, wherein predicting the lateral facial region comprises predicting coordinates for the lateral facial region within the lateral head medical image using the lateral facial region prediction model.
- The method of claim 1, wherein predicting the lateral facial region or the tooth region comprises predicting the tooth region using a tooth region prediction model configured to predict a tooth region within an intraoral image, and wherein classifying into the surgical treatment or the non-surgical treatment comprises classifying the treatment necessary for the subject into a surgical treatment or a non-surgical treatment based on the tooth region.
- The method of claim 4, wherein predicting the tooth region comprises predicting coordinates for the tooth region within the intraoral image using the tooth region prediction model.
- The method of claim 1, wherein classifying into the surgical treatment or the non-surgical treatment comprises classifying the treatment necessary for the subject using an orthodontic classification model configured to classify the necessary treatment into a surgical treatment or a non-surgical treatment based on a lateral facial region or a tooth region.
- The method of claim 6, wherein the orthodontic classification model is further configured to classify the necessary treatment into a surgical treatment or a non-surgical treatment based on a lateral facial region and a tooth region.
- The method of claim 6, wherein the orthodontic classification model comprises two independent feature extraction layers configured to extract features, and a fusion layer that integrates the two features extracted from the two feature extraction layers.
- The method of claim 6, wherein the orthodontic classification model is a Two-Stream CNN (Convolutional Neural Network).
- The method of claim 1, further comprising, after predicting the lateral facial region or the tooth region, generating a box containing the lateral facial region or the tooth region.
- The method of claim 1, wherein the surgical treatment is at least one of orthognathic Class II extraction surgery, orthognathic Class II non-extraction surgery, orthognathic Class III extraction surgery, and orthognathic Class III non-extraction surgery.
- The method of claim 1, wherein the non-surgical treatment is at least one of non-extraction orthodontics, maxillary and mandibular first premolar extraction orthodontics, maxillary and mandibular second premolar extraction orthodontics, maxillary first premolar extraction orthodontics, and maxillary first premolar and mandibular second premolar extraction orthodontics.
- The method of claim 1, wherein classifying into the surgical treatment or the non-surgical treatment comprises probabilistically predicting the surgical treatment or the non-surgical treatment, the method further comprising, after the classifying, providing a surgical treatment diagnosis probability or a non-surgical treatment diagnosis probability.
- The method of claim 1, further comprising, after predicting the lateral facial region or the tooth region: converting the lateral facial region or the tooth region to black and white; and vectorizing the black-and-white converted lateral facial region or the black-and-white converted tooth region.
- The method of claim 1, further comprising, before the receiving: receiving a sample lateral head medical image or a sample intraoral image of a sample subject; receiving an input selecting a sample lateral facial region within the sample lateral head medical image or a sample tooth region within the sample intraoral image; and generating a lateral head medical image for training based on coordinates for the sample lateral facial region or the sample tooth region and on the sample lateral head medical image.
- The method of claim 15, wherein generating the lateral head medical image for training comprises generating a json file or an xml file including information on the sample lateral head medical image and coordinates for the sample lateral facial region.
- The method of claim 1, further comprising, after the classifying, determining a treatment plan for the subject based on a classification result.
- A device for providing information on orthodontics, comprising: a receiver configured to receive a lateral head medical image or an intraoral image of a subject; and a processor connected to the receiver, wherein the processor is configured to predict, within the lateral head medical image or the intraoral image, a lateral facial region including a lateral facial part, or a tooth region, and to classify, based on the lateral facial region or the tooth region, a surgical treatment or a non-surgical treatment necessary for the subject.
- The device of claim 18, wherein the processor is further configured to predict the lateral facial region using a lateral facial region prediction model configured to predict a lateral facial region within a lateral head medical image, and to classify the treatment necessary for the subject into a surgical treatment or a non-surgical treatment based on the lateral facial region.
- The device of claim 18, wherein the processor is configured to predict the tooth region using a tooth region prediction model configured to predict a tooth region within an intraoral image, and to classify the treatment necessary for the subject into a surgical treatment or a non-surgical treatment based on the tooth region.
- The device of claim 18, wherein the processor is further configured to classify the treatment necessary for the subject using an orthodontic classification model configured to classify the necessary treatment into a surgical treatment or a non-surgical treatment based on a lateral facial region or a tooth region.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2020-0012517 | 2020-02-03 | ||
KR1020200012517A KR20210098683A (ko) | 2020-02-03 | 2020-02-03 | 딥러닝 인공지능 알고리즘을 이용한 치열 교정에 대한 정보 제공 방법 및 이를 이용한 디바이스 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021157966A1 true WO2021157966A1 (ko) | 2021-08-12 |
Family
ID=77199286
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2021/001239 WO2021157966A1 (ko) | 2020-02-03 | 2021-01-29 | 딥러닝 인공지능 알고리즘을 이용한 치열 교정에 대한 정보 제공 방법 및 이를 이용한 디바이스 |
Country Status (2)
Country | Link |
---|---|
KR (2) | KR20210098683A (ko) |
WO (1) | WO2021157966A1 (ko) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115694776A (zh) | 2021-07-27 | 2023-02-03 | 三星电子株式会社 | 存储设备、存储系统操作方法和计算系统 |
CN117897119A (zh) * | 2021-08-12 | 2024-04-16 | 3M创新有限公司 | 用于生成正畸矫治器中间阶段的深度学习 |
KR102448169B1 (ko) * | 2021-10-05 | 2022-09-28 | 세종대학교산학협력단 | 딥러닝 기반 치아 교정치료 결과 예측 방법 및 장치 |
KR20230090824A (ko) * | 2021-12-15 | 2023-06-22 | 연세대학교 산학협력단 | 근관 치료에 대한 정보 제공 방법 및 이를 이용한 근관 치료에 대한 정보 제공용 디바이스 |
KR102464472B1 (ko) * | 2022-04-28 | 2022-11-07 | 김신엽 | 인공지능을 이용한 치열 교정법 추천 시스템 및 그 방법 |
KR102470875B1 (ko) * | 2022-05-13 | 2022-11-25 | 주식회사 쓰리디오엔에스 | 3d 의료 영상 기반의 목표 영상 생성방법, 장치 및 컴퓨터프로그램 |
KR102469288B1 (ko) * | 2022-05-13 | 2022-11-22 | 주식회사 쓰리디오엔에스 | 자동 치열 교정 계획 방법, 장치 및 프로그램 |
KR102710043B1 (ko) | 2022-06-30 | 2024-09-25 | 김두희 | 예측교정 이미지 제공 방법 |
KR20240096420A (ko) | 2022-12-19 | 2024-06-26 | 사회복지법인 삼성생명공익재단 | 방사선 두부 영상을 이용하여 제3 대구치의 발치 난이도를 예측하는 방법 및 분석 장치 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007037687A (ja) * | 2005-08-02 | 2007-02-15 | Hidefumi Ito | 歯科診療支援方法及びシステム |
JP2013063140A (ja) * | 2011-09-16 | 2013-04-11 | Hiroki Hirabayashi | 歯列矯正治療における顎骨手術要否判断指標の計算方法、歯列矯正治療における顎骨手術要否判断方法、歯科治療における上下顎骨不調和判断指標の計算方法、歯科治療における上下顎骨不調和判断方法、プログラムおよびコンピュータ |
KR20160004864A (ko) * | 2014-07-04 | 2016-01-13 | 주식회사 인스바이오 | 치과 시술 시뮬레이션을 위한 치아모델 생성 방법 |
KR101769334B1 (ko) * | 2016-03-14 | 2017-08-21 | 오스템임플란트 주식회사 | 치아 교정치료 지원을 위한 이미지 처리 방법, 이를 위한 장치, 및 이를 기록한 기록매체 |
KR20170142572A (ko) * | 2016-06-20 | 2017-12-28 | 주식회사 디오코 | 치아 교정 및 얼굴 성형 시뮬레이션 장치에서의 시뮬레이션 방법 및 이를 저장하는 컴퓨터로 판독 가능한 기록 매체 |
KR101952887B1 (ko) * | 2018-07-27 | 2019-06-11 | 김예현 | 해부학적 랜드마크의 예측 방법 및 이를 이용한 디바이스 |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10568716B2 (en) * | 2010-03-17 | 2020-02-25 | ClearCorrect Holdings, Inc. | Methods and systems for employing artificial intelligence in automated orthodontic diagnosis and treatment planning |
- 2020-02-03 KR KR1020200012517A patent/KR20210098683A/ko not_active IP Right Cessation
- 2021-01-29 WO PCT/KR2021/001239 patent/WO2021157966A1/ko active Application Filing
- 2022-03-14 KR KR1020220031302A patent/KR102462185B1/ko active IP Right Grant
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007037687A (ja) * | 2005-08-02 | 2007-02-15 | Hidefumi Ito | 歯科診療支援方法及びシステム |
JP2013063140A (ja) * | 2011-09-16 | 2013-04-11 | Hiroki Hirabayashi | 歯列矯正治療における顎骨手術要否判断指標の計算方法、歯列矯正治療における顎骨手術要否判断方法、歯科治療における上下顎骨不調和判断指標の計算方法、歯科治療における上下顎骨不調和判断方法、プログラムおよびコンピュータ |
KR20160004864A (ko) * | 2014-07-04 | 2016-01-13 | 주식회사 인스바이오 | 치과 시술 시뮬레이션을 위한 치아모델 생성 방법 |
KR101769334B1 (ko) * | 2016-03-14 | 2017-08-21 | 오스템임플란트 주식회사 | 치아 교정치료 지원을 위한 이미지 처리 방법, 이를 위한 장치, 및 이를 기록한 기록매체 |
KR20170142572A (ko) * | 2016-06-20 | 2017-12-28 | 주식회사 디오코 | 치아 교정 및 얼굴 성형 시뮬레이션 장치에서의 시뮬레이션 방법 및 이를 저장하는 컴퓨터로 판독 가능한 기록 매체 |
KR101952887B1 (ko) * | 2018-07-27 | 2019-06-11 | 김예현 | 해부학적 랜드마크의 예측 방법 및 이를 이용한 디바이스 |
Also Published As
Publication number | Publication date |
---|---|
KR20210098683A (ko) | 2021-08-11 |
KR20220039677A (ko) | 2022-03-29 |
KR102462185B1 (ko) | 2022-11-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021157966A1 (ko) | 딥러닝 인공지능 알고리즘을 이용한 치열 교정에 대한 정보 제공 방법 및 이를 이용한 디바이스 | |
US12070375B2 (en) | Area of interest overlay on dental site using augmented reality | |
WO2021145544A1 (ko) | 교정적 치아배열 형상 생성 방법 및 장치 | |
WO2021215582A1 (ko) | 치주염 자동 진단 방법 및 이를 구현하는 프로그램 | |
US10405754B2 (en) | Standardized oral health assessment and scoring using digital imaging | |
WO2021025296A1 (ko) | 크라운 모델 자동 추천방법 및 이를 수행하는 보철 캐드 장치 | |
US11963840B2 (en) | Method of analysis of a representation of a dental arch | |
WO2021210723A1 (ko) | 딥러닝을 이용한 3차원 의료 영상 데이터의 특징점 자동 검출 방법 및 장치 | |
WO2023013805A1 (ko) | 자연 두부 위치에서 촬영된 3차원 cbct 영상에서 기계 학습 기반 치아 교정 진단을 위한 두부 계측 파라미터 도출방법 | |
WO2021006471A1 (ko) | 임플란트 구조물 자동 식립을 통한 임플란트 수술 계획 수립 방법, 이를 위한 사용자 인터페이스 제공 방법 및 그 치아영상 처리장치 | |
WO2021141416A1 (ko) | 데이터 정합을 통한 3차원 모델 생성 장치 및 방법 | |
WO2020013642A1 (ko) | 구강 관리 장치 및 이를 포함하는 구강 관리 서비스 시스템 | |
WO2016200167A1 (ko) | 치아교정 가이드 장치 방법 | |
KR20220019860A (ko) | 딥러닝 알고리즘을 이용한 치아 검진 방법 | |
JP2005349176A (ja) | 顎運動解析方法及び顎運動解析システム | |
WO2021145607A1 (ko) | 치과 의무 기록 장치 및 그 치과 의무 기록 방법 | |
WO2021054700A1 (ko) | 치아 병변 정보 제공 방법 및 이를 이용한 장치 | |
WO2020209496A1 (ko) | 치아 오브젝트 검출 방법 및 치아 오브젝트를 이용한 영상 정합 방법 및 장치 | |
WO2023058994A1 (ko) | 딥러닝 기반 치아 교정치료 결과 예측 방법 및 장치 | |
KR20220068583A (ko) | 딥러닝을 이용한 임플란트 주위염 진단 시스템 및 그 방법 | |
WO2022086052A1 (ko) | 치아교합시뮬레이션 방법 및 시스템 | |
Carneiro | Enhanced tooth segmentation algorithm for panoramic radiographs | |
WO2020209495A1 (ko) | 영상 데이터의 전처리 장치 | |
WO2024196140A1 (ko) | 치과용 정합 데이터의 생성 방법 | |
WO2022260442A1 (ko) | 구강 이미지를 처리하는 데이터 처리 장치 및 구강 이미지 처리 방법 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21750210 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 21750210 Country of ref document: EP Kind code of ref document: A1 |
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 19/12/2022) |
122 | Ep: pct application non-entry in european phase |
Ref document number: 21750210 Country of ref document: EP Kind code of ref document: A1 |