CN115861154A - Method for determining development stage based on X-ray head shadow image - Google Patents

Method for determining development stage based on X-ray head shadow image

Info

Publication number
CN115861154A
CN115861154A (application CN202111123218.2A)
Authority
CN
China
Prior art keywords
image
neural network
artificial neural
vertebral bodies
computer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111123218.2A
Other languages
Chinese (zh)
Inventor
Ma Chenglong (马成龙)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Chaohou Information Technology Co ltd
Original Assignee
Hangzhou Chaohou Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Chaohou Information Technology Co ltd filed Critical Hangzhou Chaohou Information Technology Co ltd
Priority to CN202111123218.2A priority Critical patent/CN115861154A/en
Priority to PCT/CN2022/116718 priority patent/WO2023045734A1/en
Publication of CN115861154A publication Critical patent/CN115861154A/en
Pending legal-status Critical Current

Classifications

    • G06N3/04 — Computing arrangements based on biological models; Neural networks; Architecture, e.g. interconnection topology
    • G06N3/08 — Computing arrangements based on biological models; Neural networks; Learning methods
    • G06T7/00 — Image data processing or generation; Image analysis
    • G06T7/13 — Image data processing or generation; Image analysis; Segmentation; Edge detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

One aspect of the present application provides a computer-implemented method for determining a developmental stage based on an X-ray cephalogram image, comprising: acquiring an X-ray cephalogram image; finding a plurality of vertebral bodies in the X-ray cephalogram image using a trained target detection artificial neural network; performing dense keypoint detection on the images of the plurality of vertebral bodies using a trained keypoint detection artificial neural network, and generating a mask map for each of the plurality of vertebral bodies based on the corresponding detected keypoints; and determining a developmental stage based on the mask maps and the overall image of the plurality of vertebral bodies using a trained developmental stage artificial neural network.

Description

Method for determining development stage based on X-ray head shadow image
Technical Field
The present application relates generally to methods for determining developmental stages based on X-ray cephalogram images.
Background
In the clinical practice of orthodontics, particularly for adolescents and children who are still growing, corrective regimens must be designed according to the stage of growth and development the patient is in. One common approach is to determine the patient's stage of growth and development from cervical vertebral bone age.
In practice, the growth and development stage is generally determined from the morphology of the cervical vertebrae in a lateral cephalogram; for example, the growth and development process can be divided into six stages, CVS1-CVS6, according to changes in the volume and shape of the five cervical vertebral bodies C2-C6.
Currently, the common practice is to manually inspect the morphology of the cervical vertebrae in lateral cephalograms and determine the stage of growth and development the patient is in. This approach has several drawbacks, however: first, the diagnostic result is strongly influenced by the subjectivity of the operator; second, transitional forms exist during cervical vertebral development and are difficult to judge accurately by eye; third, mastering such methods requires extensive learning and practice, which increases the cost of training the relevant personnel.
Attempts have been made to quantify the diagnostic criteria, for example the degree of concavity of the inferior margin and the anterior-posterior ratio of the vertebral body, and to establish a cervical bone age equation based on the quantified indicators. In such methods, however, a professional must first label feature points on the vertebral bodies, the quantified features are then computed from those feature points, and finally the cervical bone age is derived from the computed features. This procedure is cumbersome; moreover, the inventor of the present application found that some qualitative descriptions in the diagnostic criteria are difficult to quantify, which makes such methods hard to implement. In addition, such bone age staging methods rely only on low-level image features, have poor robustness, and are suitable only for certain specific scenarios.
In view of the above, there is a need to provide a new method for determining developmental stages based on X-ray cephalogram images.
Disclosure of Invention
One aspect of the present application provides a computer-implemented method for determining a developmental stage based on an X-ray cephalogram image, comprising: acquiring an X-ray cephalogram image; finding a plurality of vertebral bodies in the X-ray cephalogram image using a trained target detection artificial neural network; performing dense keypoint detection on the images of the plurality of vertebral bodies using a trained keypoint detection artificial neural network, and generating a mask map for each of the plurality of vertebral bodies based on the corresponding detected keypoints; and determining a developmental stage based on the mask maps and the overall image of the plurality of vertebral bodies using a trained developmental stage artificial neural network.
In some embodiments, the computer-implemented method for determining a developmental stage based on an X-ray cephalogram image further comprises: finding at least one auxiliary object while finding the plurality of vertebral bodies in the X-ray cephalogram image using the trained target detection artificial neural network; correcting the orientation of the X-ray cephalogram image based on the auxiliary object; and determining the classification of the plurality of vertebral bodies based on their positional relationship in the corrected X-ray cephalogram image.
In some embodiments, the orientation correction of the X-ray cephalogram image is based on the physiological morphology and/or relative positional relationship of the auxiliary object.
In some embodiments, the auxiliary object comprises a nose and teeth.
In some embodiments, the plurality of vertebral bodies comprises C2-C4 vertebral bodies.
In some embodiments, the keypoint detection artificial neural network comprises a C2 vertebral body keypoint detection artificial neural network for detecting keypoints of C2 vertebral body images and a non-C2 vertebral body keypoint detection artificial neural network for detecting keypoints of other vertebral body images.
In some embodiments, the target detection artificial neural network is one of: a YOLOv5 network, a Single Shot Detector (SSD) network, and a Faster R-CNN network.
In some embodiments, the keypoint detection artificial neural network is one of: a High-Resolution Net (HRNet) and an Hourglass Net.
In some embodiments, the developmental stage artificial neural network is one of: an EfficientNet network and a ResNet network.
In some embodiments, the developmental stage is divided into six stages, CVS1-CVS6.
In some embodiments, the developmental stage artificial neural network is capable of outputting a result representing a transitional developmental stage between two adjacent developmental stages.
In some embodiments, the computer-implemented method for determining a developmental stage based on an X-ray cephalogram image further comprises: superimposing the overall image of the plurality of vertebral bodies with the mask maps of the plurality of vertebral bodies, and using the superimposed image as the input of the developmental stage artificial neural network.
In some embodiments, the superimposed image comprises a plurality of channels, with the overall image of the plurality of vertebral bodies and each of their mask maps occupying a separate channel.
Drawings
The above and other features of the present application will be further explained with reference to the accompanying drawings and detailed description thereof. It is appreciated that these drawings depict only several exemplary embodiments in accordance with the disclosure and are therefore not to be considered limiting of its scope. The drawings are not necessarily to scale and wherein like reference numerals refer to like parts, unless otherwise specified.
FIG. 1 is a schematic flow chart diagram of a computer-implemented method for determining a developmental stage based on an X-ray cephalogram image in one embodiment of the present application;
FIG. 2 illustrates an X-ray cephalogram image in one example;
FIG. 3 illustrates key points of the C2 vertebral body in one example;
FIG. 4 illustrates key points of a non-C2 vertebral body in one example;
FIG. 5A illustrates an overall image of the vertebral bodies in one example;
FIG. 5B illustrates the C2 vertebral body mask map of the example shown in FIG. 5A;
FIG. 5C illustrates the C3 vertebral body mask map of the example shown in FIG. 5A; and
FIG. 5D illustrates the C4 vertebral body mask map of the example shown in FIG. 5A.
Detailed Description
The following detailed description refers to the accompanying drawings, which form a part of this specification. The exemplary embodiments mentioned in the description and the drawings are only for illustrative purposes and are not intended to limit the scope of the present application. Those skilled in the art, having benefit of this disclosure, will appreciate that many other embodiments can be devised which do not depart from the spirit and scope of the present application. It should be understood that the aspects of the present application, as described and illustrated herein, may be arranged, substituted, combined, separated, and designed in a wide variety of different configurations, all of which are within the scope of the present application.
One aspect of the present application provides a computer-implemented method for determining a stage of development based on an X-ray cephalogram image.
In one example, the growth and development process can be divided into the following six stages according to the morphology of the 2nd to 4th cervical vertebral bodies (C2-C4). Although the following examples illustrate the method of the present application based on this staging, it is understood that the method is also applicable to any other staging based on vertebral body morphology.
CVS1: the lower edges of the 2nd to 4th cervical vertebral bodies are flat, and the 3rd and 4th vertebral bodies are conical. This indicates that the growth peak will occur no earlier than 2 years after this stage.
CVS2: the lower edge of the 2nd cervical vertebral body is slightly concave, and the 3rd and 4th vertebral bodies are conical. This indicates that the growth peak will occur within about a year after this stage.
CVS3: the lower edges of the 2nd and 3rd cervical vertebral bodies are concave, and the 3rd and 4th vertebral bodies are conical or horizontally rectangular. This indicates that the growth peak occurs during this stage.
CVS4: the lower edges of the 2nd and 3rd cervical vertebral bodies are concave, and the 3rd and 4th vertebral bodies are horizontally rectangular. This indicates that the growth peak ended during this stage or within the preceding year.
CVS5: the lower edges of the 2nd and 3rd cervical vertebral bodies are concave, and at least one of the 3rd and 4th vertebral bodies is square. This indicates that the growth peak ended at least one year before this stage.
CVS6: the lower edges of the 2nd and 3rd cervical vertebral bodies are concave, and at least one of the 3rd and 4th vertebral bodies is vertically rectangular. This indicates that the growth peak ended at least two years before this stage.
Referring to FIG. 1, a schematic flow chart of a computer-implemented method 100 for determining a developmental stage based on an X-ray cephalogram image in one embodiment of the present application is shown.
In 101, an X-ray cephalogram image is acquired.
The acquisition of X-ray cephalogram images is well known in the art and will not be described in detail here.
Referring to FIG. 2, an X-ray cephalogram image is shown.
In 103, a trained target detection artificial neural network is used to find the relevant objects in the X-ray cephalogram image.
In one embodiment, each object may be enclosed by a rectangular box to determine its extent and location in the X-ray cephalogram image.
Referring again to FIG. 2, the rectangular boxes surrounding each vertebral body, the nose, and the teeth are the regions (also called "target regions") of the objects found by the target detection artificial neural network. That is, the trained target detection artificial neural network detects and locates the target regions of the vertebral bodies, nose, and teeth, outputs the position information of these target regions, and crops out each target region according to that position information as input to subsequent operations.
There is one nose target region and one tooth target region, while the number of vertebral body target regions is not fixed; specifically, they cover all vertebral bodies from C2 down to the bottom of the image.
The nose and tooth target regions are detected for the next step, correcting the orientation of the input X-ray cephalogram image. It will be appreciated from the present application that if all X-ray cephalogram images have a uniform orientation, only the vertebral body target regions need be detected in this operation, and the nose and tooth target regions can be omitted. It will also be appreciated that the orientation of the X-ray cephalogram image may be corrected using other target regions, such as the eyes and mouth, instead of the nose and tooth target regions. Hereinafter, a target region used to correct the orientation of the X-ray cephalogram image is referred to as an auxiliary target region, and the corresponding object as an auxiliary object.
In a preferred embodiment, the target detection artificial neural network may be a YOLOv5 network, which has the advantages of fast computation and high accuracy. It is understood that any other suitable network may be used as the target detection artificial neural network, such as a Single Shot Detector (SSD) network or a Faster R-CNN network.
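The detection-and-cropping step described above can be sketched as follows. This is a minimal illustration that assumes the detector's raw output has already been post-processed into labeled boxes; the tuple format and all names are hypothetical, not from the source.

```python
import numpy as np

def crop_target_regions(image, detections):
    """Crop each detected target region from a grayscale cephalogram.

    `detections` is assumed to be a list of (label, x1, y1, x2, y2)
    tuples, e.g. as post-processed from a YOLOv5-style detector.
    Returns a dict mapping each label to a list of cropped arrays.
    """
    crops = {}
    h, w = image.shape[:2]
    for label, x1, y1, x2, y2 in detections:
        # clamp the box to the image bounds before slicing
        x1, y1 = max(0, int(x1)), max(0, int(y1))
        x2, y2 = min(w, int(x2)), min(h, int(y2))
        crops.setdefault(label, []).append(image[y1:y2, x1:x2])
    return crops

# hypothetical example: one nose box and two vertebral body boxes
img = np.zeros((600, 400), dtype=np.uint8)
dets = [("nose", 10, 50, 60, 120),
        ("vertebra", 200, 100, 260, 150),
        ("vertebra", 200, 160, 260, 215)]
regions = crop_target_regions(img, dets)
```

The vertebral body crops then feed the keypoint detection step, while the nose and tooth crops serve only the orientation correction.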
At 105, the orientation of the X-ray cephalogram image is corrected based on the detected auxiliary objects, and the type of each vertebral body is determined based on the relative positional relationship between the vertebral bodies in the orientation-corrected X-ray cephalogram image.
In one embodiment, the orientation of the face in the X-ray cephalogram image can be determined based on the physiological characteristics and relative positions of the nose and teeth. If this orientation is not consistent with a predetermined orientation, the X-ray cephalogram image can be flipped vertically or horizontally until it is. In one embodiment, the predetermined orientation may be with the vertebrae below the skull and the face oriented to the right. It will be appreciated that other predetermined orientations may be used.
After the orientation of the X-ray cephalogram image is corrected, the type of each vertebral body can be determined from the relative positions of the vertebral bodies: from top to bottom they are C2, C3, C4, and so on. The image of each vertebral body is then cropped from the X-ray cephalogram image according to its position information and used as input to subsequent operations.
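The flip decision and the top-to-bottom vertebral body labeling can be sketched as below; the box format, the flip heuristic, and all names are illustrative assumptions rather than the patent's implementation.

```python
def needs_horizontal_flip(nose_cx, vertebra_cx):
    # Assumption: in the predetermined orientation the face points right,
    # so the nose center should lie to the right of the vertebral column.
    return nose_cx < vertebra_cx

def label_vertebrae(vertebra_boxes):
    """Assign vertebral body types after orientation correction.

    Assumes the image is already oriented so the vertebrae run top to
    bottom; boxes are (x1, y1, x2, y2). The topmost box is C2, the
    next C3, and so on.
    """
    ordered = sorted(vertebra_boxes, key=lambda b: b[1])  # sort by top edge y1
    return {f"C{i}": box for i, box in enumerate(ordered, start=2)}

# hypothetical boxes listed in arbitrary order
boxes = [(200, 160, 260, 215), (200, 100, 260, 150), (198, 225, 258, 280)]
labels = label_vertebrae(boxes)
```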
At 107, dense keypoint detection is performed on the cropped vertebral body images using a trained keypoint detection artificial neural network.
In one embodiment, the vertebral bodies can be divided into two types, C2 and non-C2 (where non-C2 includes C3, C4, C5, etc.), and a separate keypoint detection artificial neural network is trained for each type.
In one embodiment, a training set for the keypoint detection artificial neural networks may be constructed as follows.
First, the contour and salient corner points of each vertebral body are labeled by a professional. Referring to FIG. 3, for the C2 vertebral body, the upper region of the vertebral body is blurred and difficult to distinguish in the image, so the labeled contour consists of part of the anterior and posterior edges and the complete lower edge, and the salient corner points are the two endpoints lp and la of the lower edge. Referring to FIG. 4, a non-C2 vertebral body is approximately quadrilateral and clearly imaged, so the labeled contour is the complete contour of the vertebral body, and the salient corner points are its four corners lp, la, up, and ua, where lp and la are the two endpoints of the lower edge and up and ua are the two endpoints of the upper edge.
The contour of the vertebral body can then be divided into separate curves at the corner points. Taking the C2 vertebral body as an example, its contour can be divided into three curves (anterior edge, lower edge, and posterior edge) at the corner points. A spline is first fitted to each curve, and equidistant samples are then taken along the fitted spline; the number of samples is determined by visual inspection and accuracy requirements. In one embodiment, the numbers of samples for the anterior, posterior, and lower edges of the C2 vertebral body may be 8, 9, and 10 respectively, for a total of 27 keypoints. The number of samples for each of the anterior, posterior, upper, and lower edges of a non-C2 vertebral body may be 10, for a total of 40 keypoints.
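The equidistant sampling along a fitted curve can be sketched as follows. As a simplification, piecewise-linear interpolation stands in here for the spline fit described above, and all names are illustrative.

```python
import numpy as np

def resample_equidistant(points, n_samples):
    """Resample an ordered contour curve into n equidistant keypoints.

    `points` is an (N, 2) array of ordered contour points. Sampling is
    uniform in arc length along the piecewise-linear curve.
    """
    pts = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # segment lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])          # cumulative arc length
    targets = np.linspace(0.0, s[-1], n_samples)
    x = np.interp(targets, s, pts[:, 0])
    y = np.interp(targets, s, pts[:, 1])
    return np.stack([x, y], axis=1)

# e.g. 10 equidistant samples along a lower-edge polyline
lower_edge = [(0, 0), (4, 0), (4, 3)]
samples = resample_equidistant(lower_edge, 10)
```

A parametric spline fit (e.g. SciPy's `splprep`/`splev`) could replace the linear interpolation without changing the overall flow.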
After a sufficient number of C2 and non-C2 vertebral body images labeled with keypoints have been obtained, they can be used as training sets for the C2 and non-C2 keypoint detection artificial neural networks.
Each cropped vertebral body image is fed into the corresponding keypoint detection artificial neural network for keypoint detection, yielding the coordinates of each keypoint. Taking the C2 vertebral body image as an example, the corresponding keypoint detection artificial neural network detects 27 keypoints: based on the cropped C2 vertebral body image, it generates and outputs 27 heatmaps, one per keypoint.
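Decoding keypoint coordinates from such heatmaps is commonly done by taking each heatmap's peak; a minimal sketch under that assumption:

```python
import numpy as np

def heatmaps_to_keypoints(heatmaps):
    """Decode keypoint coordinates from per-keypoint heatmaps.

    `heatmaps` is assumed to have shape (K, H, W), one channel per
    keypoint (e.g. K = 27 for the C2 vertebral body). Each keypoint is
    taken as the (x, y) location of its heatmap's maximum response.
    """
    k, h, w = heatmaps.shape
    flat_idx = heatmaps.reshape(k, -1).argmax(axis=1)
    ys, xs = np.unravel_index(flat_idx, (h, w))
    return np.stack([xs, ys], axis=1)  # (K, 2) array of (x, y) coordinates

# toy example: 2 heatmaps with known peaks
hm = np.zeros((2, 8, 8))
hm[0, 3, 5] = 1.0   # keypoint 0 at x=5, y=3
hm[1, 6, 2] = 1.0   # keypoint 1 at x=2, y=6
coords = heatmaps_to_keypoints(hm)
```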
After dense keypoint detection is completed for each vertebral body image, a mask map of each vertebral body can be drawn from the detected keypoints, which represent the contour of the corresponding vertebral body.
Conventional keypoint detection locates a small number of individually meaningful points; the dense keypoint detection in this application instead samples many points along the contour and connects them to obtain the contour line.
As noted above, the upper edge of the C2 vertebral body is often difficult to label, so the mask map of the C2 vertebral body may cover only its lower half.
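Drawing a mask map from the sampled contour points amounts to filling the polygon they form. The sketch below uses a plain-numpy ray-casting test as an illustrative stand-in; in practice an image library's polygon fill would normally be used.

```python
import numpy as np

def contour_to_mask(keypoints, height, width):
    """Rasterize a closed contour of detected keypoints into a binary mask.

    Applies an even-odd (ray-casting) inside test per pixel: a pixel is
    inside if a ray to the left crosses the polygon an odd number of times.
    """
    pts = np.asarray(keypoints, dtype=float)
    ys, xs = np.mgrid[0:height, 0:width]
    mask = np.zeros((height, width), dtype=bool)
    n = len(pts)
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        # only edges straddling the pixel's row can contribute a crossing
        cond = (y1 > ys) != (y2 > ys)
        with np.errstate(divide="ignore", invalid="ignore"):
            x_cross = x1 + (ys - y1) * (x2 - x1) / (y2 - y1)
        mask ^= cond & (xs < x_cross)
    return mask.astype(np.uint8)

# toy example: a square contour inside an 8x8 mask
square = [(1, 1), (5, 1), (5, 5), (1, 5)]
m = contour_to_mask(square, 8, 8)
```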
In a preferred embodiment, HRNet (High-Resolution Net) can be used as the keypoint detection artificial neural network. It will be appreciated that any other suitable network may be used for keypoint detection, such as an Hourglass Net.
In 109, a developmental stage is determined based on the mask maps and the overall image of the vertebral bodies using a trained developmental stage artificial neural network.
After the position information of each vertebral body in the X-ray cephalogram image has been obtained, the overall image of the vertebral bodies can be cropped out. In one embodiment, the mask maps and the overall image of the vertebral bodies may be superimposed, and the superimposed image is used as the input of the developmental stage artificial neural network. In one embodiment, the mask maps and the overall image are superimposed channel by channel; that is, the superimposed image comprises a plurality of channels, each holding one of the images. Superimposing the mask maps effectively supplies the contour information of each vertebral body, allowing the developmental stage artificial neural network to determine the morphology of each vertebral body, and hence the developmental stage, more accurately. FIGS. 5A to 5D show, respectively, the overall image of the vertebral bodies to be superimposed and the mask maps of vertebral bodies C2 to C4. In this example, the superimposed image includes 4 channels.
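The channel-wise superposition can be sketched as follows; the array shapes are illustrative, not from the source.

```python
import numpy as np

# Stack the overall vertebral body image with the C2-C4 mask maps so
# that each image occupies its own channel, giving a 4-channel input
# for the developmental stage network.
overall = np.random.rand(128, 96).astype(np.float32)                 # grayscale crop
masks = [np.zeros((128, 96), dtype=np.float32) for _ in range(3)]    # C2, C3, C4 mask maps
stacked = np.stack([overall, *masks], axis=0)                        # channels-first layout
```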
Given that there are transitional stages between developmental stages, in one embodiment the developmental stage artificial neural network can output non-integer results to indicate a transitional stage. For example, predicted values of 1.2, 1.5, and 1.7 indicate a developmental stage between CVS1 and CVS2, and predicted values of 2.3 and 2.6 indicate a developmental stage between CVS2 and CVS3.
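Mapping such a non-integer prediction to a stage or transitional-stage label might look like the sketch below; the 0.15 tolerance is an illustrative choice, not from the source.

```python
def describe_stage(pred):
    """Map a developmental-stage regression output to a readable label.

    Near-integer predictions are reported as a single CVS stage; other
    values are reported as transitional between the two adjacent stages.
    The 0.15 tolerance is a hypothetical threshold for illustration.
    """
    nearest = round(pred)
    if abs(pred - nearest) < 0.15 and 1 <= nearest <= 6:
        return f"CVS{nearest}"
    lower = int(pred)
    return f"transitional between CVS{lower} and CVS{lower + 1}"

# e.g. 1.5 falls between CVS1 and CVS2, while 3.05 is effectively CVS3
label = describe_stage(1.5)
```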
In a preferred embodiment, the developmental stage artificial neural network can be an EfficientNet network, which has the advantages of fast computation and a small memory footprint. It will be appreciated that any other suitable network may be used for the developmental stage artificial neural network, such as a ResNet network.
Compared with the prior art, the method of the present application for determining a developmental stage based on an X-ray cephalogram image greatly improves efficiency and the consistency of results, and can accurately predict transitional stages.
While various aspects and embodiments of the disclosure are disclosed herein, other aspects and embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification. The various aspects and embodiments disclosed herein are for purposes of illustration only and are not intended to be limiting. The scope and spirit of the application are to be determined only by the claims appended hereto.
Likewise, the various diagrams may illustrate an exemplary architecture or other configuration of the disclosed methods and systems that is useful for understanding the features and functionality that may be included in the disclosed methods and systems. The claimed subject matter is not limited to the exemplary architectures or configurations shown, but rather, the desired features can be implemented using a variety of alternative architectures and configurations. In addition, the block sequences presented herein with respect to flow diagrams, functional descriptions, and method claims should not be limited to the various embodiments that are implemented in the same order to perform the recited functions, unless the context clearly dictates otherwise.
Unless otherwise expressly stated, the terms and phrases used herein, and variations thereof, are to be construed as open-ended rather than limiting. In some instances, the absence of expansive terms or phrases such as "one or more," "at least," or "but not limited to" should not be construed to mean that the narrower case is intended or required.

Claims (13)

1. A computer-implemented method for determining a stage of development based on an X-ray cephalogram image, comprising:
acquiring an X-ray cephalogram image;
finding a plurality of vertebral bodies in the X-ray cephalogram image using a trained target detection artificial neural network;
performing dense keypoint detection on the images of the plurality of vertebral bodies using a trained keypoint detection artificial neural network, and generating a mask map for each of the plurality of vertebral bodies based on the corresponding detected keypoints; and
determining a developmental stage based on the mask maps and the overall image of the plurality of vertebral bodies using a trained developmental stage artificial neural network.
2. The computer-implemented method of claim 1 for determining a stage of development based on an X-ray cephalogram image, further comprising:
finding at least one auxiliary object while finding the plurality of vertebral bodies in the X-ray cephalogram image using the trained target detection artificial neural network;
correcting the orientation of the X-ray cephalogram image based on the auxiliary object; and
determining the classification of the plurality of vertebral bodies based on their positional relationship in the corrected X-ray cephalogram image.
3. The computer-implemented method of claim 2, wherein the orientation correction of the X-ray cephalogram image is based on the physiological morphology and/or relative positional relationship of the auxiliary object.
4. The computer-implemented method of claim 2, wherein the auxiliary objects include a nose and teeth.
5. The computer-implemented method of claim 1, wherein the plurality of vertebral bodies comprises C2-C4 vertebral bodies.
6. The computer-implemented method of claim 5, wherein the keypoint detection artificial neural network comprises a C2 vertebral body keypoint detection artificial neural network for detecting keypoints in C2 vertebral body images and a non-C2 vertebral body keypoint detection artificial neural network for detecting keypoints in other vertebral body images.
7. The computer-implemented method of claim 1, wherein the target detection artificial neural network is one of: a YOLOv5 network, a Single Shot Detector (SSD) network, and a Faster R-CNN network.
8. The computer-implemented method of claim 1, wherein the key point detection artificial neural network is one of: high-Resolution Net and Hourglass Net.
9. The computer-implemented method of claim 1, wherein the developmental stage artificial neural network is one of: an EfficientNet network and a ResNet network.
10. The computer-implemented method of claim 1, wherein the developmental stage is divided into six stages, CVS1-CVS6.
11. The computer-implemented method of claim 1, wherein the developmental stage artificial neural network is capable of outputting a result representing a transitional developmental stage between two adjacent developmental stages.
12. The computer-implemented method of claim 1 for determining a developmental stage based on an X-ray cephalogram image, further comprising: superimposing the overall image of the plurality of vertebral bodies with the mask maps of the plurality of vertebral bodies, and using the superimposed image as the input of the developmental stage artificial neural network.
13. The computer-implemented method of claim 12, wherein the superimposed image comprises a plurality of channels, and wherein the overall image of the plurality of vertebral bodies and each of the mask maps of the plurality of vertebral bodies occupies a separate channel.
CN202111123218.2A 2021-09-24 2021-09-24 Method for determining development stage based on X-ray head shadow image Pending CN115861154A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111123218.2A CN115861154A (en) 2021-09-24 2021-09-24 Method for determining development stage based on X-ray head shadow image
PCT/CN2022/116718 WO2023045734A1 (en) 2021-09-24 2022-09-02 Method for determining development stage on the basis of x-ray cephalometric image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111123218.2A CN115861154A (en) 2021-09-24 2021-09-24 Method for determining development stage based on X-ray head shadow image

Publications (1)

Publication Number Publication Date
CN115861154A 2023-03-28

Family

ID=85653172

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111123218.2A Pending CN115861154A (en) 2021-09-24 2021-09-24 Method for determining development stage based on X-ray head shadow image

Country Status (2)

Country Link
CN (1) CN115861154A (en)
WO (1) WO2023045734A1 (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107895367B (en) * 2017-11-14 2021-11-30 中国科学院深圳先进技术研究院 Bone age identification method and system and electronic equipment
CN112754458A (en) * 2019-11-01 2021-05-07 上海联影医疗科技股份有限公司 Magnetic resonance imaging method, system and storage medium
CN113205535B (en) * 2021-05-27 2022-05-06 青岛大学 X-ray film spine automatic segmentation and identification method

Also Published As

Publication number Publication date
WO2023045734A1 (en) 2023-03-30


Legal Events

Date Code Title Description
PB01 Publication