WO2023153564A1 - Medical image processing method and apparatus
- Publication number: WO2023153564A1 (PCT/KR2022/010816)
- Authority: WIPO (PCT)
Classifications
- A61B6/00: Apparatus or devices for radiation diagnosis; apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/03: Computed tomography [CT]
- A61B6/51: Apparatus specially adapted for dentistry
- G06N3/02: Neural networks; G06N3/08: Learning methods
- G16H30/40: ICT specially adapted for processing medical images, e.g. editing
- G16H50/20: ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
Definitions
- The present disclosure relates to a method and apparatus for processing a medical image, and more specifically, to a method and apparatus for processing a Cone-Beam CT (CBCT) image.
- Bone Mineral Density (BMD) measurement estimates bone mass in order to diagnose osteoporosis and predict future fracture risk. In dental implant treatment as well, accurate in vivo measurement of alveolar bone quality is critical in determining the primary stability of the implant. It is therefore necessary to confirm that alveolar BMD is sufficient before implant placement.
- CBCT images are widely used for dental diagnosis, treatment, and surgical planning.
- CBCT images offer advantages over conventional CT images such as a low radiation dose to the patient, a short acquisition time, and high resolution.
- However, because the voxel values of a CBCT image are arbitrary and do not accurately reflect Hounsfield Units (HU), the bone quality of the alveolar bone cannot be measured accurately from CBCT alone.
- The problem to be solved is to provide a method for quantitatively and immediately measuring bone density from a CBCT image, whose voxel values, unlike those of general CT images, are in a non-linear relationship with bone density values.
- The disclosed method may also be used to achieve other tasks not specifically mentioned.
- According to some embodiments, a method of processing a medical image performed by a computing device includes: training a first model to learn mapping information between a Cone-Beam CT (CBCT) image for training and a corresponding bone density-related image for training, and to infer the bone density-related image for training from the CBCT image for training; and training a second model to infer the bone density-related image for training from the CBCT image for training and the initial bone density-related image output from the first model.
- The bone density-related image for training may be either a quantitative CT (QCT) image or a CT image.
- The first model may include a first generator that learns the mapping information to infer a bone density-related image from a CBCT image, and a first discriminator that determines whether an input image is a real bone density-related image or a synthesized bone density-related image. The first model may be trained in a direction that minimizes the result of a first loss function including an adversarial loss function calculated based on the discrimination success probability of the first discriminator.
- The first model may further include a second generator that learns the mapping information to infer the CBCT image from the bone density-related image, and a second discriminator that determines whether an input image is a real CBCT image or a synthesized CBCT image. In this case, the first model may be optimized in a direction that minimizes the result of the first loss function, which includes a combination of an adversarial loss function calculated based on the discrimination success probabilities of the first and second discriminators and a cycle consistency loss function of the first model.
- The first model may be implemented to include a Cycle-GAN (Generative Adversarial Network) structure.
- The initial bone density-related image may be a QCT-like or CT-like image that includes the mapping information on the voxel intensity relationship between the CBCT image for training and the bone density-related image for training, and in which bone contrast is increased relative to the CBCT image for training.
- The second model may output a final bone density-related image including bone mineral density (BMD) data corresponding to the CBCT image for training.
- The second model may be optimized in a direction that minimizes the result of a second loss function including a function related to the difference between the bone density-related image for training and the final bone density-related image.
- The function related to the difference may include at least one of the mean absolute difference (MAD) of voxel intensities between the bone density-related image for training and the final bone density-related image, and the structural similarity (SSIM) between the bone density-related image for training and the final bone density-related image.
- The second model may be implemented to include a multi-channel U-Net structure that receives the CBCT image for training and the initial bone density-related image through multiple channels.
- The final bone density-related image may be a QCT-like image or a CT-like image that includes the mapping information contained in the initial bone density-related image and the anatomical structure information of the CBCT image for training, with artifacts suppressed.
- The method may further include: obtaining a raw CBCT image of a subject and a corresponding raw CT image; calibrating the bone density of the raw CT image using a BMD calibration phantom CT image taken under conditions corresponding to the raw CT image, and acquiring the calibrated raw CT image as a raw QCT image; removing non-anatomical regions from the raw CBCT image and the raw QCT image; registering the raw CBCT image and the raw QCT image; and acquiring the raw CBCT image as the CBCT image for training and the raw QCT image as the bone density-related image for training.
- Alternatively, the method may include: obtaining a raw CBCT image of a subject and a corresponding raw CT image; removing non-anatomical regions from the raw CBCT image and the raw CT image; registering the raw CBCT image and the raw CT image; and acquiring the raw CBCT image as the CBCT image for training and the raw CT image as the bone density-related image for training.
- According to some embodiments, a medical image processing method performed by a computing device includes: acquiring a Cone-Beam CT (CBCT) image; and inputting the CBCT image to a pre-trained deep learning model to obtain a final bone density-related image including Bone Mineral Density (BMD) data corresponding to the CBCT image.
- The final bone density-related image may be a QCT-like image or a CT-like image.
- The method may further include acquiring the bone density data corresponding to the CBCT image based on the final bone density-related image. Acquiring the bone density data includes: if the final bone density-related image is the QCT-like image, immediately acquiring the bone density data from the final bone density-related image; or, if the final bone density-related image is the CT-like image, correcting the final bone density-related image to generate a corresponding QCT image and acquiring the bone density data from the generated QCT image.
- According to the disclosed method, bone density can be measured quantitatively and immediately from a CBCT image whose voxel values are in a non-linear relationship with bone density values.
- Also, a synthesized QCT image with improved bone contrast and in-image uniformity is obtained by processing a CBCT image through a deep learning model that combines a Cycle-GAN structure and a multi-channel U-Net structure.
- FIG. 1 is a block diagram illustrating a computing device providing a medical image processing method according to some embodiments of the present disclosure.
- FIG. 2 is a schematic diagram of a medical image processing method according to some embodiments of the present disclosure.
- FIG. 3 is a flowchart of a medical image processing method according to some embodiments of the present disclosure.
- FIG. 4 is a flowchart of a deep learning model learning method for providing a medical image processing method according to some embodiments of the present disclosure.
- FIG. 5 is a schematic diagram of a deep learning model learning method for providing a medical image processing method according to some embodiments of the present disclosure.
- FIG. 6 is a diagram exemplarily illustrating a deep learning model training method for providing a medical image processing method according to some embodiments of the present disclosure.
- FIG. 7 is a diagram illustrating bone density data measurement performance according to the medical image processing method of the present disclosure.
- FIG. 8 is a diagram illustrating bone density data measurement performance according to the medical image processing method of the present disclosure.
- FIG. 9 is a diagram illustrating bone density data measurement performance according to the medical image processing method of the present disclosure.
- FIG. 10 is a flowchart of a method of obtaining training data in a medical image processing method according to some embodiments of the present disclosure.
- In the present disclosure, when a part "includes" a certain component, this means that it may further include other components, rather than excluding them, unless otherwise stated.
- Devices constituting the network may be implemented as hardware, software, or a combination of hardware and software.
- In the present disclosure, the term "image" may refer to a medical image of a subject, particularly a medical image that is a target of input/output and processing for a deep learning model.
- the image may be a CT image, a CBCT image, and/or a QCT image captured of the maxilla region of the subject.
- the image may be a synthesized QCT image that is output by inputting the CBCT image captured of the subject to a deep learning model.
- the output image may be an image including bone density data corresponding to the input CBCT image.
- More broadly, images in the present disclosure include CT images, CBCT images, QCT images, magnetic resonance imaging (MRI) images, ultrasound images, endoscopic examination images, thermography images, and nuclear medicine images of any body part of the subject, and/or images output as intermediate or final products when these are input to a deep learning model.
- The term "bone density-related image" may refer to an image including bone density data of a subject, that is, an image from which bone density data of the subject can be measured and/or an image usable for measuring bone density data.
- For example, a bone density-related image may be a general CT image or a quantitative CT (QCT) image obtained by correcting a general CT image based on a calibration phantom.
- model may be used to mean a deep learning model including one or more neural network structures.
- a model may mean a deep learning model for obtaining data by processing a medical image.
- the model may be a deep learning model trained to receive a CBCT image captured of a subject and output an image including bone density data for the input CBCT image.
- Alternatively, the model may be a deep learning model trained to receive various medical images other than CBCT images and to output arbitrary data, such as quantification data, corrected image data, or segmentation data, for the input medical images.
- a model may be implemented as a combination of one or more neural network structures, for example, a combination of a Cycle-GAN (Generative Adversarial Network) structure and a U-Net structure.
- Alternatively, the model may be implemented with only one of a Cycle-GAN structure and a U-Net structure, or using any neural network structure usable for medical image processing, such as a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), an Autoencoder, or a Deep Residual Network (DRN).
- the term “raw image” may be used as a term referring to a medical image captured of an object under examination, and in particular, a medical image in a state in which image preprocessing is not performed after being captured.
- a raw image may be classified into a raw CBCT image, a raw CT image, a raw QCT image, etc. according to the type of a photographed medical image, but is not limited thereto.
- the following learning data may be obtained through pre-processing of raw images.
- learning data may be used as a term meaning data used for learning the above-described model.
- Data for training may consist of one or more image pairs, each including a CBCT image and a CT or QCT image corresponding to the CBCT image, that is, obtained from the same subject or under corresponding imaging conditions.
- data for learning may be composed of an image for learning obtained through pre-processing of the aforementioned raw image.
- For example, a raw CBCT image may be registered with a corresponding raw QCT image, and the registered raw CBCT image may be obtained as a CBCT image for training. Likewise, the raw QCT image may be obtained as a QCT image for training through noise removal and/or registration with the corresponding raw CBCT image. However, the preprocessing is not limited thereto.
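- As an illustration of the registration step mentioned above, the following is a minimal sketch assuming the SimpleITK library; the rigid-registration settings (mutual information metric, gradient-descent optimizer, iteration counts) are assumptions for the example, not the patent's protocol.

```python
import SimpleITK as sitk

def register_qct_to_cbct(cbct_path: str, qct_path: str) -> sitk.Image:
    """Rigidly register a raw QCT/CT volume onto the corresponding raw CBCT volume."""
    fixed = sitk.ReadImage(cbct_path, sitk.sitkFloat32)
    moving = sitk.ReadImage(qct_path, sitk.sitkFloat32)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInitialTransform(sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))
    reg.SetInterpolator(sitk.sitkLinear)

    transform = reg.Execute(fixed, moving)
    # Resample the moving (QCT) volume into the CBCT grid
    return sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```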
- The term "bone density data" refers to data related to the bone density of a subject; in particular, in the present disclosure it may refer to data to be acquired by inputting a medical image to a deep learning model, in a form that allows bone density to be measured quantitatively.
- For example, bone density data in the present disclosure may be a synthesized QCT image obtained by inputting a CBCT image of a subject to a deep learning model, or bone density-related data obtained based on the voxel intensities of the synthesized QCT image.
- the bone density data may be obtained in any form of data, such as a graph, a heat map, or a table, in addition to an image.
- FIG. 1 is a block diagram illustrating a computing device providing a medical image processing method according to some embodiments of the present disclosure.
- A computing device 10 includes one or more processors 11, a memory 12 for loading programs executed by the processors 11, a storage 13 for storing programs and various data, and a communication interface 14.
- the computing device 10 may have more or fewer components than the components listed above.
- the computing device 10 may further include an output unit and/or an input unit (not shown), or the storage 13 may be omitted.
- a program consists of a series of computer readable instructions grouped together on a functional basis and is executed by a processor.
- the program may include instructions that, when loaded into memory 12, cause processor 11 to perform methods/operations in accordance with various embodiments of the present disclosure. That is, the processor 11 may perform methods/operations according to various embodiments of the present disclosure by executing instructions.
- the processor 11 controls the overall operation of each component of the computing device 10 .
- The processor 11 may be configured to include at least one of a Central Processing Unit (CPU), a Micro Processor Unit (MPU), a Micro Controller Unit (MCU), a Graphics Processing Unit (GPU), or any type of processor well known in the art of the present disclosure. The processor 11 may also perform operations for at least one application or program that executes methods/operations according to various embodiments of the present disclosure.
- The processor 11 may train a first model to learn mapping information between a Cone-Beam CT (CBCT) image for training and a bone density-related image for training and to infer the bone density-related image for training from the CBCT image for training, and may train a second model to infer the bone density-related image for training from the CBCT image for training and the initial bone density-related image output from the first model.
- The processor 11 may control the communication interface 14, described later, to receive a CBCT image for training and a bone density-related image for training, or may generate training data by processing a raw CBCT image and a raw CT image or raw QCT image received through the communication interface 14.
- The processor 11 may also control the communication interface 14 to receive a CBCT image of a subject, input the CBCT image to a pre-trained deep learning model stored in the memory and/or the storage to obtain a final bone density-related image, and control an output unit (not shown) to output the final bone density-related image to a user terminal.
- In other words, the processor 11 controls the overall operation of each component of the computing device 10 to provide the medical image processing method according to the present disclosure; the examples above do not limit the present disclosure.
- Memory 12 stores various data, commands and/or information. Memory 12 may load one or more programs from storage 13 to execute methods/operations according to various embodiments of the present disclosure.
- the memory 12 may be implemented with volatile memory such as RAM, but the technical scope of the present disclosure is not limited thereto.
- the storage 13 may store programs non-temporarily.
- The storage 13 may be configured to include non-volatile memory such as read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory, a hard disk, a removable disk, or any type of computer-readable recording medium well known in the art.
- the storage 13 may store, for example, a deep learning model according to the present disclosure learned by the processor 11 described above.
- the storage 13 may store a program including instructions for training a deep learning model according to the present disclosure.
- it is not limited thereto.
- the communication interface 14 may be a wired/wireless communication module.
- The communication interface 14 according to the present disclosure may receive a CBCT image of an arbitrary subject from a server (not shown), transmit a final bone density-related image obtained by inputting the received CBCT image to the deep learning model to a server and/or a user terminal, or receive training data for training the above-described deep learning model. However, it is not limited thereto.
- FIG. 2 is a schematic diagram of a medical image processing method according to some embodiments of the present disclosure.
- The computing device 10 inputs a CBCT image 200 to a pre-trained deep learning model 100 and may obtain a final bone density-related image 500 including bone density data corresponding to the CBCT image.
- In this way, bone mineral density can be measured immediately and quantitatively from CBCT images. This contrasts with conventional CBCT images, which, despite advantages such as a low radiation dose, a short acquisition time, and high resolution, were difficult to use for bone density measurement because their voxel intensity values are arbitrary and do not correctly provide the Hounsfield Unit (HU) values needed for bone density measurement.
- FIG. 3 is a flowchart of a medical image processing method according to some embodiments of the present disclosure.
- FIG. 3 shows the medical image processing method of the present disclosure described in FIG. 2 in detail according to the type of the final bone density related image.
- the computing device 10 may obtain a CBCT image (S110).
- Specifically, the processor 11 may control a camera unit (not shown) to capture a CBCT image of the subject, or control the communication interface 14 to receive a CBCT image captured of the subject.
- the computing device 10 may acquire a final bone density-related image including bone density data corresponding to the CBCT image by inputting the CBCT image to the pre-learned deep learning model (S120).
- the final bone density related image may be a QCT-like image or a CT-like image corresponding to the previously input CBCT image.
- the type of final bone density-related image obtained by inputting a CBCT image to a pre-learned deep learning model may be different according to the type of training data used to learn the deep learning model.
- For example, if the deep learning model was trained with QCT images, it learns mapping information between CBCT images and QCT images; accordingly, when a CBCT image is input, a QCT-like image including bone density data corresponding to the CBCT image may be output as the final bone density-related image.
- Likewise, if the deep learning model was trained with CT images, it learns mapping information between CBCT images and CT images; accordingly, when a CBCT image is input, a CT-like image including bone density data corresponding to the CBCT image may be output as the final bone density-related image.
- A method for training the deep learning model 100 will be described later in detail with reference to FIGS. 4 to 6, and a method for obtaining the training data used to train the deep learning model 100 with reference to FIG. 10.
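- For illustration only, the inference flow of steps S110 and S120 might look like the following minimal sketch, assuming PyTorch; the model file, the TorchScript export, and the function name infer_bmd_image are hypothetical, not part of the patent.

```python
import torch

def infer_bmd_image(model_path: str, cbct_slice: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch of S110/S120: CBCT slice in, QCT-like slice out.

    cbct_slice: a (1, 1, H, W) tensor with intensities normalized to [0, 1].
    """
    model = torch.jit.load(model_path)  # assumed: deep learning model 100 exported as TorchScript
    model.eval()
    with torch.no_grad():
        final_image = model(cbct_slice)  # final bone density-related image (QCT-like)
    return final_image
```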
- If the final bone density-related image is a QCT-like image, the computing device 10 may immediately acquire bone density data from the final bone density-related image (S140); that is, bone density data may be measured directly from the voxel values of the QCT-like image.
- If the final bone density-related image is a CT-like image, the computing device 10 corrects the final bone density-related image to generate a corresponding QCT image, and bone density data can be obtained from the generated QCT image (S150). In other words, the CT-like image is corrected using a phantom for bone density calibration to obtain a QCT image, and bone density data can then be measured from that QCT image.
- Since correcting a CT image using a bone density calibration phantom is a method known in the art of the present disclosure, a detailed description thereof is omitted here.
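- As an illustration of that known phantom-based calibration step (not the patent's own algorithm), the sketch below fits a linear HU-to-BMD mapping from phantom inserts of known density; the insert densities and ROI values are made-up example numbers that depend in practice on the phantom and scan protocol.

```python
import numpy as np

# Hypothetical example: mean HU measured in ROIs over phantom inserts of known BMD.
known_bmd = np.array([0.0, 100.0, 200.0, 400.0, 800.0])   # mg/cm^3
measured_hu = np.array([2.0, 85.0, 170.0, 335.0, 660.0])  # mean HU per insert ROI

# Linear regression: BMD ~ slope * HU + intercept
slope, intercept = np.polyfit(measured_hu, known_bmd, deg=1)

def hu_to_bmd(ct_volume_hu: np.ndarray) -> np.ndarray:
    """Convert a CT volume in HU to a QCT-style BMD map (mg/cm^3)."""
    return slope * ct_volume_hu + intercept
```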
- a method for learning the deep learning model 100 according to the present disclosure will be described in detail with reference to FIGS. 4 to 6 below.
- FIG. 4 is a flowchart of a deep learning model learning method for providing a medical image processing method according to some embodiments of the present disclosure.
- The computing device 10 may train a first model to learn mapping information between a CBCT image for training and a corresponding bone density-related image for training, and to infer the bone density-related image for training from the CBCT image for training (S210).
- the first model 110 is trained to infer a bone density-related image for learning corresponding to the CBCT image for learning from the CBCT image for learning, and outputs an initial bone density-related image 400 according to the input of the CBCT image.
- the initial bone density related image 400 may include voxel intensity mapping information between an input CBCT image for learning and a corresponding bone density related image for learning.
- the bone density-related image for learning corresponding to the CBCT image for learning may be, for example, a QCT image or a CT image that can be obtained from the same subject and/or under the same imaging conditions as the CBCT image for learning. That is, the initial bone density-related image 400 may include information about a voxel intensity mapping (or correspondence) relationship between a CBCT image and a QCT image or CT image that can be obtained from the same subject and/or under the same imaging conditions.
- Because the bone density distribution information of the bone density-related image corresponding to the CBCT image is reflected in the initial bone density-related image 400, bone contrast (the luminance difference between bone and surrounding tissue in a medical image) tends to increase relative to the CBCT image. That is, the initial bone density-related image 400 may be a QCT-like or CT-like image with increased bone contrast relative to the input CBCT image: a QCT-like image if the bone density-related image for training is a QCT image, or a CT-like image if it is a CT image.
- the computing device 10 may train the second model to infer a bone density related image for learning from the CBCT image for training and the initial bone density related image output from the first model (S220).
- the second model 120 is trained to infer a bone density related image for learning corresponding to the CBCT image for learning from the CBCT image for learning and the initial bone density related image 400, and outputs the final bone density related image 500.
- the second model 120 may be connected in series with the first model 110 to form the deep learning model 100 according to the present disclosure as an integral body. However, it is not limited thereto.
- The final bone density-related image 500 output from the second model 120 may be a QCT-like or CT-like image that includes the voxel intensity mapping information contained in the initial bone density-related image 400 and the anatomical structure information of the input CBCT image, with artifacts suppressed.
- Specifically, if the second model 120 is trained to infer a training bone density-related image from the training CBCT image and a QCT-like initial bone density-related image 400, the final bone density-related image 500 may be a QCT-like image; if it is trained with a CT-like initial bone density-related image 400, the final bone density-related image 500 may be a CT-like image.
- Here, an image with artifact suppression means an image from which obstructive elements such as artifacts and scattering noise have been removed, and an image with improved uniformity, where uniformity means the degree to which the HU values of the pixels imaging a homogeneous object are uniform.
- the final bone density-related image 500 may be an image including bone density data corresponding to the input CBCT image. Accordingly, bone density data corresponding to the CBCT image 200 may be obtained from the final bone density related image 500 obtained by inputting the CBCT image 200 to the deep learning model 100 . For example, information on the bone density of a subject whose CBCT image has been taken may be obtained immediately and quantitatively from the final bone density related image 500, which is a QCT-like image.
- FIG. 5 is a schematic diagram of a deep learning model training method for providing a medical image processing method according to some embodiments of the present disclosure. Specifically, FIG. 5 schematically shows the training process of the first model 110 and the second model 120 of the present disclosure described above with reference to FIG. 4.
- The first model 110 receives an image pair consisting of a CBCT image 300 for training and a corresponding bone density-related image 310 for training, learns the mapping information between them, and based on this may be trained to infer the bone density-related image 310 for training from the CBCT image 300 for training. Accordingly, the first model 110 may output an initial bone density-related image 400 given the CBCT image 300 for training as input.
- The first model 110 may be optimized based at least on the probability of successfully discriminating the actual bone density-related image between the output initial bone density-related image 400 and the training bone density-related image 310.
- For example, if the bone density-related image 310 for training is a QCT image corresponding to the CBCT image 300 for training and the initial bone density-related image 400 is a QCT-like image, the first model 110 may be optimized based at least on the probability of successfully discriminating the actual QCT image between the initial bone density-related image 400 and the bone density-related image 310 for training. Likewise, if the bone density-related image 310 for training is a CT image, the first model 110 may be optimized based at least on the probability of successfully discriminating the actual CT image between the two.
- a first loss function may be used to optimize the first model 110 .
- The value of the first loss function may be calculated from a plurality of factors, including the probability of success in discriminating between the initial bone density-related image 400 and the bone density-related image 310 for training. The calculation of the first loss function is described later with reference to FIG. 6.
- The second model 120 receives the CBCT image 300 for training and the initial bone density-related image 400 output from the first model 110, and may be trained to infer the bone density-related image 310 for training from them. Accordingly, the second model 120 may output a final bone density-related image 500 including bone density data corresponding to the CBCT image 300 for training.
- the second model 120 may be optimized based on the difference (or loss) between the bone density related image 310 for training and the output final bone density related image 500 .
- a second loss function may be used to optimize the second model 120 .
- Specifically, the second loss function may include a function related to the difference between the bone density-related image 310 for training and the output final bone density-related image 500, for example the mean absolute difference (MAD) of the voxel intensities between the two images and/or the structural similarity (SSIM) between the two images. The calculation of the second loss function is described later with reference to FIG. 6.
- FIG. 6 is a diagram exemplarily illustrating a deep learning model training method for providing a medical image processing method according to some embodiments of the present disclosure.
- The first model 110 may be implemented to include a Cycle-GAN (Generative Adversarial Network) structure, and the second model 120 may be implemented to include a multi-channel U-Net structure that receives the CBCT image 300 for training and the initial bone density-related image 400 through multiple channels.
- A general GAN structure consists of a generator and a discriminator trained simultaneously: the generator aims to create a real-like image that tricks the discriminator into classifying it as a real image, while the discriminator aims to distinguish real images from the generator's real-like images.
- Cycle-GAN adds the constraint that the loss between an image restored from a generated image and the original real image must be minimized. To this end, the Cycle-GAN structure further includes a generator and a discriminator that restore the generated image back toward the original image, that is, perform an inverse transformation; it thus consists of two generators and two discriminators. By applying this inverse transformation, Cycle-GAN doubles the process of a general GAN, constraining the model in both directions and increasing the accuracy of the output image.
- Specifically, the first model 110 may include a first generator 111 trained to learn the mapping information and infer the bone density-related image for training from the CBCT image 300 for training, a second generator 112 trained to infer the CBCT image 300 from the bone density-related image, a first discriminator 115 trained to determine whether an input image is an actual bone density-related image or a synthesized bone density-related image, and a second discriminator 116 trained to determine whether an input image is an actual CBCT image or a synthesized CBCT image.
- the first model 110 may be composed of only the first generator 111 and the first discriminator 115 among the above-described configurations.
- the first model 110 may be implemented to include a GAN structure instead of a Cycle-GAN structure.
- The first generator 111 may receive the CBCT image 300 for training and output an initial bone density-related image 400, and the second generator 113 may receive the initial bone density-related image 400 output from the first generator 111 and output a restored CBCT image 301.
- The restored CBCT image 301 may be an image obtained by reconstructing the initial bone density-related image 400 to be close to the CBCT image 300 for training that was input to the first generator 111.
- Similarly, the second generator 112 may receive the bone density-related image 310 for training and output a synthesized CBCT image 311, and the first generator 114 may receive the synthesized CBCT image 311 output from the second generator 112 and output a restored bone density-related image 312.
- The restored bone density-related image 312 may be an image obtained by reconstructing the synthesized CBCT image 311 to be close to the bone density-related image 310 for training that was input to the second generator 112.
- The first discriminator 115 receives the initial bone density-related image 400 and/or the bone density-related image 310 for training and determines whether the input image is an actual bone density-related image or a synthesized one. For example, if the bone density-related image 310 for training is a QCT image and the initial bone density-related image 400 is a QCT-like image, the first discriminator 115 determines whether the input image is an actual QCT image or a synthesized QCT image; if they are a CT image and a CT-like image, it determines whether the input image is an actual CT image or a synthesized CT image.
- Specifically, the first discriminator 115 may output 1 when it determines that the input image is an actual bone density-related image, and 0 when it determines that the input image is a synthesized bone density-related image. Accordingly, discrimination is considered successful if the first discriminator 115 outputs 0 when the initial bone density-related image 400 is input and 1 when the bone density-related image 310 for training is input.
- The second discriminator 116 may receive the synthesized CBCT image 311 and/or the CBCT image 300 for training, and determine whether the input image is an actual CBCT image or a synthesized CBCT image. Specifically, the second discriminator 116 may output 0 when it determines that the input image is an actual CBCT image, and 1 when it determines that the input image is a synthesized CBCT image. Accordingly, discrimination is considered successful if the second discriminator 116 outputs 1 when the synthesized CBCT image 311 is input and 0 when the CBCT image 300 for training is input.
- To optimize the first model 110, an adversarial loss function calculated based on the discrimination success probabilities of the first discriminator 115 and the second discriminator 116 may be used. This adversarial loss function, calculated from the discriminator outputs, together with the cycle consistency loss function described below, constitutes the first loss function for optimizing the first model 110. The adversarial loss function may be defined for each of the first discriminator 115 and the second discriminator 116 as Equations 1 and 2, with the following notation:
- D_QCT: the first discriminator 115; D_CBCT: the second discriminator 116; I_QCT: the bone density-related image 310 for training; I_CBCT: the CBCT image 300 for training; G_CBCT→QCT: the first generator 111; G_QCT→CBCT: the second generator 112.
- In the ideal case in which generation and discrimination both succeed, the value of the adversarial loss function (Equation 1) for the first discriminator 115 may approach zero; likewise, the value of the adversarial loss function (Equation 2) for the second discriminator 116 may approach zero.
- The first loss function for optimizing the first model 110 may include a cycle consistency loss function in addition to the adversarial loss function described above.
- The cycle consistency loss function may be a function that calculates a loss by comparing an image obtained by reconstructing a synthesized image back toward the real image with the originally input real image. In other words, it indicates how closely the synthesized image, produced by feeding the real image to a generator, is restored to the real image.
- The cycle consistency loss function of the first model 110 can be calculated based on the loss (or difference) between the restored CBCT image 301 and the training CBCT image 300, and the loss between the restored bone density-related image 312 and the training bone density-related image 310.
- The cycle consistency loss function constituting the first loss function may be defined as (Equation 3):

L_CYC = ||G_QCT→CBCT(G_CBCT→QCT(I_CBCT)) - I_CBCT|| + ||G_CBCT→QCT(G_QCT→CBCT(I_QCT)) - I_QCT||
- In Equation 3, the first term to the right of the equal sign is the loss between the restored CBCT image 301 and the training CBCT image 300, and the second term is the loss between the restored bone density-related image 312 and the training bone density-related image 310.
- As the second generator 113 reconstructs the initial bone density-related image 400 closer to the CBCT image 300 for training input to the first generator 111, the value of the first term of Equation 3 approaches 0; similarly, as the first generator 114 reconstructs the synthesized CBCT image 311 closer to the bone density-related image 310 for training input to the second generator 112, the value of the second term of Equation 3 also approaches 0.
- The first loss function of the first model 110, as a combination of the adversarial loss function and the cycle consistency loss function, can be defined as follows:

L_GAN = L_ADV(G_CBCT→QCT) + L_ADV(G_QCT→CBCT) + λ·L_CYC

where λ is a weight that controls the relative importance between the adversarial loss functions and the cycle consistency loss function of the first model 110. For example, λ may be 10, and any suitable λ value can be selected in silico or experimentally.
- The first model 110 may be optimized based on the result of the first loss function comprising this combination of the adversarial loss functions and the cycle consistency loss function. As described above, the higher the discrimination success probabilities of the first discriminator 115 and the second discriminator 116, and the higher the generation and restoration performance of the first generators 111 and 114 and the second generators 112 and 113, the smaller the resulting value of the first loss function of the first model 110 becomes. In other words, the first model 110 may be optimized in the direction in which the result of the first loss function is minimized.
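- For illustration, the sketch below wires up the two generators and two discriminators and computes a cycle consistency term plus adversarial terms. This is a minimal generator-side sketch assuming PyTorch and a least-squares GAN formulation; the patent's exact Equations 1 and 2 are not reproduced here, and all network objects are placeholders.

```python
import torch
import torch.nn as nn

def first_loss(G_cbct2qct, G_qct2cbct, D_qct, D_cbct, I_cbct, I_qct, lam=10.0):
    """Generator-side first loss: LSGAN-style adversarial terms plus cycle consistency."""
    l1, mse = nn.L1Loss(), nn.MSELoss()

    fake_qct = G_cbct2qct(I_cbct)   # initial bone density-related image (QCT-like)
    fake_cbct = G_qct2cbct(I_qct)   # synthesized CBCT image

    # Cycle consistency (Equation 3): restore each synthesized image to its source
    loss_cyc = l1(G_qct2cbct(fake_qct), I_cbct) + l1(G_cbct2qct(fake_cbct), I_qct)

    # Adversarial terms: each generator tries to make its discriminator output "real" (1);
    # the discriminators' own training step is omitted in this sketch
    pred_qct, pred_cbct = D_qct(fake_qct), D_cbct(fake_cbct)
    loss_adv = (mse(pred_qct, torch.ones_like(pred_qct))
                + mse(pred_cbct, torch.ones_like(pred_cbct)))

    return loss_adv + lam * loss_cyc
```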
- Each of the generators 111, 112, 113, and 114 constituting the first model 110 may be composed of convolution blocks of 7x7 and 3x3 convolution layers to which batch normalization and ReLU activation are applied, with residual blocks inserted between the down-sampling and up-sampling layers.
- For example, each generator may be implemented as a ResNet structure including 9 residual blocks. Through the residual blocks, the network learns the difference between the source and the target, which allows it to generate an initial bone density-related image 400 containing more accurate voxel intensity mapping information between the CBCT image 300 for training and the bone density-related image 310 for training. However, it is not limited thereto.
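- Under the stated assumptions (7x7 and 3x3 convolutions, batch normalization, ReLU, 9 residual blocks), a generator might look like the following minimal PyTorch sketch; the channel widths and the number of down-sampling levels are assumptions.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )

    def forward(self, x):
        return x + self.body(x)  # learn the source-to-target difference

def make_generator(in_ch=1, out_ch=1, base=64, n_blocks=9):
    layers = [nn.Conv2d(in_ch, base, 7, padding=3), nn.BatchNorm2d(base), nn.ReLU(inplace=True)]
    # down-sampling (one stride-2 level assumed for brevity)
    layers += [nn.Conv2d(base, base * 2, 3, stride=2, padding=1),
               nn.BatchNorm2d(base * 2), nn.ReLU(inplace=True)]
    # residual blocks between down- and up-sampling layers
    layers += [ResidualBlock(base * 2) for _ in range(n_blocks)]
    # up-sampling back to the input resolution
    layers += [nn.ConvTranspose2d(base * 2, base, 3, stride=2, padding=1, output_padding=1),
               nn.BatchNorm2d(base), nn.ReLU(inplace=True)]
    layers += [nn.Conv2d(base, out_ch, 7, padding=3)]
    return nn.Sequential(*layers)
```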
- Each of the discriminators 115 and 116 may be composed of convolution blocks of 4x4 convolution layers to which batch normalization and Leaky ReLU activation are applied, followed by down-sampling layers. Each discriminator may be implemented as a PatchGAN, which discriminates whether the image generated by a generator is genuine in units of patches of a specific size, for example a PatchGAN of 70x70 patches. However, it is not limited thereto.
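- A minimal sketch of a PatchGAN-style discriminator consistent with the description above (4x4 convolutions, batch normalization, Leaky ReLU) follows; the layer count and channel widths are assumptions, and the output is a grid of per-patch real/fake scores rather than a single scalar.

```python
import torch.nn as nn

def make_patchgan(in_ch=1, base=64):
    # Each output activation scores one receptive-field patch of the input image.
    return nn.Sequential(
        nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
        nn.BatchNorm2d(base * 2), nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(base * 2, base * 4, 4, stride=2, padding=1),
        nn.BatchNorm2d(base * 4), nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(base * 4, 1, 4, stride=1, padding=1),  # per-patch real/fake map
    )
```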
- In summary, because the first model 110 is trained on data including a CBCT image 300 for training and a corresponding bone density-related image 310 for training, and is implemented using residual blocks, it can learn voxel intensity mapping information between the CBCT image 300 for training and the bone density-related image 310 for training. Accordingly, the bone density distribution information of the training bone density-related image 310 corresponding to the training CBCT image is reflected in the initial bone density-related image 400, so that bone contrast increases relative to the training CBCT image 300.
- The first model 110 works complementarily with the second model 120 described below; ultimately, a final bone density-related image 500 with improved bone contrast and uniformity and highly accurate bone density data can be obtained from the CBCT image.
- A general U-Net structure includes a U-shaped encoder-decoder structure. Specifically, U-Net extracts the overall context of the input image along the contracting path of the encoder; along the expanding path of the decoder, the context extracted at the same level of the contracting path is combined with pixel location information through skip connections. That is, localization accuracy is increased by combining part of the output of the encoding region with the decoding region.
- The second model 120 may include a multi-channel input that receives the CBCT image 300 for training and the initial bone density-related image 400 output from the first model 110. The second model 120 can be trained to infer the training bone density-related image 310 from the training CBCT image 300 and the initial bone density-related image 400, and may output a final bone density-related image 500 that includes bone density data corresponding to the training CBCT image 300.
- Through the multi-channel input, the second model 120 can simultaneously learn the spatial information of the CBCT image 300 for training and the corresponding initial bone density-related image 400. Specifically, the second model 120 can learn the voxel intensity mapping information between the CBCT image and the bone density-related image contained in the initial bone density-related image 400 while maintaining the anatomical structure of the CBCT image 300. Accordingly, it can output a final bone density-related image 500 in which artifacts and scattering noise are suppressed and uniformity is improved.
- The second model 120 may be optimized based on the result of a second loss function that includes a function of the difference between the bone density-related image 310 for training and the final bone density-related image 500. This function may include at least one of the Mean Absolute Difference (MAD) and the Structural Similarity (SSIM) between the two images.
- The MAD is defined as the average of the absolute intensity differences between the training bone density-related image 310 and the final bone density-related image 500. With I_QCBCT denoting the final bone density-related image 500, it can be expressed as:

L_MAD = (1/N) · Σ_i |I_QCT(i) - I_QCBCT(i)|

where N is the number of voxels and i indexes the voxels.
- The second loss function of the second model 120 may be defined as a weighted combination of the MAD and SSIM terms, for example:

L_2 = γ·L_MAD + (1 - γ)·(1 - SSIM(I_QCT, I_QCBCT))

where γ is a weight; for example, γ may be 0.6, and any suitable value of γ can be selected in silico or experimentally.
- By training the second model 120 with a second loss function that combines the Mean Absolute Difference (MAD) and the Structural Similarity (SSIM), both pixel-wise errors and structural similarity are taken into account, yielding faster convergence and higher accuracy.
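- The following is a minimal sketch, assuming PyTorch and images normalized to [0, 1], of a MAD-plus-SSIM loss of the kind described above; the uniform-window SSIM used here is a simplification of the usual Gaussian-window formulation, and the exact weighting is an assumption.

```python
import torch
import torch.nn.functional as F

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2, win=11):
    """Uniform-window SSIM over (N, C, H, W) tensors scaled to [0, 1]."""
    pad = win // 2
    mu_x = F.avg_pool2d(x, win, 1, pad)
    mu_y = F.avg_pool2d(y, win, 1, pad)
    var_x = F.avg_pool2d(x * x, win, 1, pad) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, 1, pad) - mu_y ** 2
    cov_xy = F.avg_pool2d(x * y, win, 1, pad) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()

def second_loss(pred, target, gamma=0.6):
    mad = (pred - target).abs().mean()  # mean absolute difference (MAD)
    return gamma * mad + (1 - gamma) * (1 - ssim(pred, target))
```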
- In addition to the multi-channel input described above, the multi-channel U-Net structure of the second model 120 may include an encoder and a decoder composed of 3x3 convolution layers to which batch normalization and ReLU activation are applied, and a skip connection at each layer level. FIG. 6 shows an example of a multi-channel U-Net structure implemented with four skip connections, one per level of the encoder and decoder. However, it is not limited thereto.
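- Under those assumptions (two input channels, 3x3 convolutions with batch normalization and ReLU, four skip connections), a minimal PyTorch sketch of such a multi-channel U-Net follows; the channel widths are assumptions.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
    )

class MultiChannelUNet(nn.Module):
    """U-Net taking the CBCT image and the initial bone density-related image as 2 channels."""

    def __init__(self, in_ch=2, out_ch=1, base=64):
        super().__init__()
        chs = [base, base * 2, base * 4, base * 8]
        self.encoders = nn.ModuleList()
        prev = in_ch
        for c in chs:
            self.encoders.append(conv_block(prev, c))
            prev = c
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(chs[-1], chs[-1] * 2)
        self.ups, self.decoders = nn.ModuleList(), nn.ModuleList()
        prev = chs[-1] * 2
        for c in reversed(chs):
            self.ups.append(nn.ConvTranspose2d(prev, c, 2, stride=2))
            self.decoders.append(conv_block(c * 2, c))  # c * 2: skip-connection concat
            prev = c
        self.head = nn.Conv2d(chs[0], out_ch, 1)

    def forward(self, cbct, initial_bmd):
        x = torch.cat([cbct, initial_bmd], dim=1)  # multi-channel input
        skips = []
        for enc in self.encoders:
            x = enc(x)
            skips.append(x)
            x = self.pool(x)
        x = self.bottleneck(x)
        for up, dec, skip in zip(self.ups, self.decoders, reversed(skips)):
            x = up(x)
            x = dec(torch.cat([x, skip], dim=1))  # one skip connection per level
        return self.head(x)
```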
- Because the second model 120 is implemented to include a multi-channel U-Net structure, it can extract both global and local features over the spatial region of the image, and can output a final bone density-related image 500 that includes the mapping information contained in the initial bone density-related image 400 and the anatomical structure information of the CBCT image 300 for training, with artifacts suppressed.
- In sum, the deep learning model 100 according to the present disclosure, comprising the combination of the first model 110 and the second model 120, can output a final bone density-related image 500 that includes bone density data for the input CBCT image while improving bone contrast and uniformity relative to that image. Accordingly, the medical image processing method according to the present disclosure can ultimately provide a way to measure bone mineral density quantitatively and immediately from a CBCT image.
- FIGS. 7 to 9 are diagrams illustrating the bone density data measurement performance of the medical image processing method of the present disclosure; in particular, they show results for embodiments in which the final bone density-related image is a QCT-like image.
- Here, bone density data measurement performance means the bone density data measurement performance obtainable from each type of medical image. For reference, the bone density measurement performance of a conventional CBCT image can be regarded as lower than that of a general CT image.
- FIG. 7 quantitatively illustrates this performance. Specifically, it compares the final bone density-related image 500 according to the present disclosure (hereinafter, the "QCBCT image") against an image produced by a Cycle-GAN model alone (the "CYC_CBCT image"), an image produced by a U-Net model alone (the "U_CBCT image"), and a CBCT image corrected using a phantom for BMD correction (the "CAL_CBCT image"). The metrics include the peak signal-to-noise ratio (PSNR), computed from the maximum possible intensity (MAX) and the mean squared error (MSE).
- Spatial non-uniformity (SNU) is defined as the absolute value of the difference between the maximum and minimum intensity values in rectangular ROIs (Regions Of Interest) set in an image.
- The slope, an index for evaluating the linearity of bone density, is obtained by analyzing the voxel intensity relationship between an actual QCT image and the image under comparison through linear regression of the intensity values within the images.
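- For illustration, the following numpy sketch computes several of the metrics named here (MAD, PSNR, NCC, SNU, and the regression slope); the ROI handling and the use of ROI mean intensities for SNU are assumptions for the example.

```python
import numpy as np

def mad(a, b):
    return np.abs(a - b).mean()

def psnr(a, b, max_intensity):
    mse = ((a - b) ** 2).mean()
    return 10 * np.log10(max_intensity ** 2 / mse)

def ncc(a, b):
    a0, b0 = a - a.mean(), b - b.mean()
    return (a0 * b0).sum() / np.sqrt((a0 ** 2).sum() * (b0 ** 2).sum())

def snu(img, rois):
    """rois: iterable of (r0, r1, c0, c1) rectangles.
    SNU here is |max - min| over ROI mean intensities (an assumption)."""
    means = [img[r0:r1, c0:c1].mean() for r0, r1, c0, c1 in rois]
    return abs(max(means) - min(means))

def linearity_slope(qct, compared):
    """Slope of the compared image's intensities against the actual QCT intensities."""
    slope, _intercept = np.polyfit(qct.ravel(), compared.ravel(), deg=1)
    return slope
```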
- As shown in FIG. 7 (rows 3 and 9), when compared with the actual QCT image, the final bone density-related image according to the present disclosure (the QCBCT image) greatly outperforms the CYC_CBCT, U_CBCT, and CAL_CBCT images in mean absolute deviation (MAD), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), normalized cross-correlation (NCC), and slope, regardless of the location of the imaged bone (e.g., maxilla or mandible).
- In other words, the bone density measurement performance of the medical image processing method according to the present disclosure differs significantly from that of conventional approaches that correct a CBCT image using a model implemented with only one of Cycle-GAN and U-Net and/or a phantom for BMD correction.
- FIG. 8 is a diagram qualitatively showing the performance of measuring bone density data according to the present disclosure, and specifically, an actual QCT image for each of the maxilla and mandible, and a final bone density related image (or, in FIG. QCBCT image), CYC_CBCT image, U_CBCT image and CAL_CBCT image (see rows 1 and 3 of FIG. 8 below), and images obtained by subtracting each image from the actual QCT image (subtraction images, see rows 2 and 4 of FIG. 8 ) is shown.
- Compared with the actual QCT image (see column 1), the bone density image quality of the final bone density related image shows a significant improvement over the CYC_CBCT image (see column 3), the U_CBCT image (see column 4), and the CAL_CBCT image (see column 5).
- In particular, it can be seen that, in the final BMD-related image according to the present disclosure, the large bone density (voxel intensity) differences in the tooth region and the dense band of high BMD values found in the CAL_CBCT image are greatly reduced.
- FIG. 9 shows the linear relationship between an actual QCT image and each of the final bone density related image according to the present disclosure (see column 1), a CYC_CBCT image (see column 2), a U_CBCT image (see column 3), and a CAL_CBCT image (see column 4).
- The first row (a to d) of FIG. 9 shows images obtained for the upper jaw region under the condition of 80 kVp - 8 mA, and the second row (e to h) shows images obtained for the lower jaw region under the same condition. The third row (i to l) shows images obtained for the upper jaw region under the condition of 90 kVp - 10 mA, and the fourth row (m to p) shows images obtained for the lower jaw region under that condition.
- Referring to FIG. 9, the linear relationship between the actual QCT image and the final bone density-related image has a higher slope and goodness of fit than the linear relationships between the actual QCT image and the other images, indicating superior contrast and correlation. Moreover, this tendency appears consistently regardless of the location of the imaged bone (e.g., upper jaw or lower jaw) or the imaging conditions (e.g., 80 kVp - 8 mA or 90 kVp - 10 mA).
- In other words, the medical image processing method can provide a way of quantitatively and immediately measuring bone density from CBCT images through a deep learning model including a combination of Cycle-GAN and multi-channel U-Net.
- Specifically, the first model according to the present disclosure (i.e., the Cycle-GAN structure) learns mapping information between a CBCT image and a bone density-related image, which is a QCT image or a CT image, and may output, for the CBCT image input to it, an initial bone density related image, which is a QCT-like image or a CT-like image with increased bone contrast.
- The second model (i.e., the multi-channel U-Net structure) may then output a final bone density related image, which is a QCT-like image or a CT-like image that includes structural information, with suppressed artifacts and increased uniformity.
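- For illustration, a minimal sketch of this two-stage inference pipeline, assuming PyTorch and hypothetical pre-trained modules `generator` (the Cycle-GAN generator) and `unet` (the multi-channel U-Net); the names and tensor shapes are assumptions, not the patent's reference implementation:

```python
import torch


@torch.no_grad()
def synthesize_bmd_image(cbct: torch.Tensor,
                         generator: torch.nn.Module,
                         unet: torch.nn.Module) -> torch.Tensor:
    """cbct: (N, 1, H, W) normalized slices -> (N, 1, H, W) final image."""
    initial = generator(cbct)                    # QCT-like image, increased bone contrast
    stacked = torch.cat([cbct, initial], dim=1)  # 2-channel input for the multi-channel U-Net
    return unet(stacked)                         # artifact-suppressed, more uniform output
```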
- Accordingly, the medical image processing method can obtain, from a CBCT image, a final bone density-related image with significantly improved anatomical accuracy and with quantitative accuracy in the uniformity and contrast of the bone image. Ultimately, it can provide a way of quantitatively and immediately measuring bone mineral density from CBCT images while retaining the advantages of CBCT imaging, such as reduced radiation dose, short acquisition time, and high resolution.
- FIG. 10 is a flowchart of a method of obtaining training data in a medical image processing method according to some embodiments of the present disclosure. Specifically, FIG. 10 shows a method of acquiring the CBCT image 300 for learning and the bone density related image 310 for learning used to train the deep learning model 100 according to the present disclosure, as described above with reference to FIGS. 4 to 6.
- the computing device may first acquire a raw CBCT image of a subject under examination and a raw CT image corresponding to the raw CBCT image (S310).
- The subject may be, for example, a human skull phantom, and in this case the skull phantoms may include at least one of a phantom with a metal restoration, which causes artifacts, and a phantom without one.
- However, the present invention is not limited thereto, and the object for training data acquisition may be one phantom, or three or more phantoms, of an arbitrary body part.
- For the subject, a raw CBCT image and a raw CT image may each be captured.
- Here, the raw CBCT image and the raw CT image can be captured under the same conditions, such as voltage and current.
- However, the present invention is not limited thereto, and the raw CBCT image and the raw CT image may be captured under different conditions for the subject and then aligned through a step such as registration.
- For example, a raw CT image may be captured of the subject under the conditions of a voxel size of 0.469 x 0.469 x 0.5 mm³, dimensions of 512 x 512 pixels, a depth of 16 bits, a voltage of 120 kVp, and a current of 130 mA.
- Likewise, a raw CBCT image may be captured of the subject under the conditions of a voxel size of 0.3 x 0.3 x 0.3 mm³, dimensions of 559 x 559 pixels, a depth of 16 bits, a voltage of 80 or 90 kVp, and a current of 8 or 10 mA.
- the computing device 10 may next obtain a raw QCT image by correcting the raw CT image (S320).
- Here, the QCT image is an image obtained by correcting the HU values of a general CT image, and a CT image of a phantom for bone density correction may be used for the correction.
- Specifically, the raw CT image may be corrected using a phantom CT image for bone density correction captured under conditions corresponding to those of the raw CT image, and the corrected raw CT image may be obtained as a raw QCT image. That is, referring to the above example, the raw CT image captured under the conditions of a voltage of 120 kVp and a current of 130 mA may be corrected using a CT image of the bone density correction phantom captured under those same conditions, and the result obtained as a raw QCT image. However, it is not limited thereto.
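- For illustration, a minimal sketch of phantom-based HU calibration, a common way to derive a QCT image from a CT image; the patent does not spell out the calibration math, so the linear form and the example insert values below are assumptions:

```python
import numpy as np


def calibrate_to_qct(raw_ct_hu: np.ndarray,
                     insert_mean_hu: np.ndarray,
                     insert_density_mg_cc: np.ndarray) -> np.ndarray:
    """Map HU values to BMD (mg/cm^3) via a line fitted to the phantom inserts.

    insert_mean_hu: mean HU measured in each calibration-phantom insert.
    insert_density_mg_cc: known mineral density of each insert."""
    a, b = np.polyfit(insert_mean_hu, insert_density_mg_cc, deg=1)
    return a * raw_ct_hu + b


# Hypothetical usage with three inserts of known density:
# qct = calibrate_to_qct(ct_volume,
#                        np.array([-50.0, 80.0, 210.0]),
#                        np.array([0.0, 100.0, 200.0]))
```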
- Next, the computing device 10 may remove non-anatomical elements from the raw CBCT image and the raw QCT image (S330). By removing non-anatomical regions from the training images, it is possible to prevent adverse effects during training, such as a decrease in accuracy caused by those regions.
- Specifically, the non-anatomical regions may be removed from the raw CBCT image and the raw QCT image by applying a binary mask to each of them.
- Here, the binary mask image can be created using thresholding and morphological operations. Specifically, the edges of anatomical regions may be extracted by applying a local range filter to each of the training CBCT image and the corresponding training QCT image forming a training image pair. Next, the morphological operations of opening and flood fill can be applied to the binarized edges obtained through thresholding, to remove small blobs and to fill the inner regions.
- Then, the raw CBCT image and the raw QCT image may each be multiplied by the intersection of the two binary masks generated from them through the above-described operations. Meanwhile, voxel values outside the masked area may be replaced with -1000 HU.
- However, the above is only one example of a method for removing non-anatomical regions from training images, and the present disclosure is not limited thereto.
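- For illustration, a minimal sketch of the mask-generation procedure described above (local range filter, thresholding, opening, flood fill, intersection), assuming NumPy/SciPy volumes; the filter sizes and threshold are illustrative, not the patent's values:

```python
import numpy as np
from scipy import ndimage


def anatomy_mask(volume: np.ndarray, threshold: float = 100.0) -> np.ndarray:
    """Binary mask of anatomical regions in one volume."""
    # Local range filter: max - min in a small neighborhood highlights edges.
    local_range = (ndimage.maximum_filter(volume, size=3)
                   - ndimage.minimum_filter(volume, size=3))
    edges = local_range > threshold                       # binarized edges
    opened = ndimage.binary_opening(edges, iterations=2)  # remove small blobs
    return ndimage.binary_fill_holes(opened)              # flood-fill inner regions


def apply_masks(cbct: np.ndarray, qct: np.ndarray):
    """Multiply both volumes by the mask intersection; set the outside to -1000 HU."""
    mask = anatomy_mask(cbct) & anatomy_mask(qct)
    return np.where(mask, cbct, -1000.0), np.where(mask, qct, -1000.0)
```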
- the computing device 10 may match the raw CBCT image and the raw QCT image (S340).
- a raw QCT image may be matched to a raw CBCT image by paired-point registration.
- Here, a plurality of landmarks may be set for the registration, for example six landmarks including the vertices of the lateral incisors, the buccal cusps of the first premolars, and the distobuccal cusps of the first molars. However, it is not limited thereto.
- Meanwhile, the computing device may additionally crop the matched raw CBCT image and raw QCT image to an image of an arbitrary size and then resize them to a preset size. For example, from the matched raw CBCT image and raw QCT image, a 559 x 559 x 200 pixel image centered on the maxillary region can be cropped and then resized to a 256 x 256 x 200 pixel image.
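- For illustration, a minimal sketch of paired-point rigid registration from corresponding landmarks, using the standard SVD-based (Kabsch) solution; the patent states only that paired-point registration with landmarks is used, so this particular solver is an assumption:

```python
import numpy as np


def paired_point_rigid(src_pts: np.ndarray, dst_pts: np.ndarray):
    """Rigid transform (R, t) minimizing ||R @ src + t - dst|| over landmark pairs.

    src_pts, dst_pts: (N, 3) arrays of corresponding landmark coordinates."""
    src_c, dst_c = src_pts.mean(axis=0), dst_pts.mean(axis=0)
    h = (src_pts - src_c).T @ (dst_pts - dst_c)  # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))       # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst_c - r @ src_c
    return r, t
```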
- the computing device 10 may acquire the matched raw CBCT image as a CBCT image for learning, and acquire the matched raw QCT image as a bone density-related image for learning (S350).
- In this case, the cropped and/or resized raw CBCT image and raw QCT image may be acquired as the CBCT image for learning and the bone density-related image for learning, respectively.
- Alternatively, for example when the bone density-related image for learning is to be a CT image, the computing device 10 may remove the non-anatomical regions from the acquired raw CBCT image and raw CT image (S360), match the raw CBCT image from which the non-anatomical regions have been removed with the raw CT image (S370), and acquire the registered raw CBCT image as a CBCT image for learning and the registered raw CT image as a bone density-related image for learning (S380). Since the specific methods of removing the non-anatomical regions from each image and of matching the images with each other have been described above, detailed descriptions are omitted here to avoid redundancy.
- The acquired CBCT images for learning and the corresponding BMD-related images for learning may form training image pairs, and a set of one or more training image pairs may constitute the training data according to the present disclosure.
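- For illustration, a minimal sketch of such a paired training dataset, assuming PyTorch and pre-processed volumes stored as NumPy files; the file layout and class name are assumptions:

```python
import numpy as np
import torch
from torch.utils.data import Dataset


class CbctBmdPairs(Dataset):
    """Yields (CBCT image for learning, BMD-related image for learning) pairs."""

    def __init__(self, cbct_paths: list, bmd_paths: list):
        assert len(cbct_paths) == len(bmd_paths)
        self.cbct_paths, self.bmd_paths = cbct_paths, bmd_paths

    def __len__(self):
        return len(self.cbct_paths)

    def __getitem__(self, i):
        cbct = torch.from_numpy(np.load(self.cbct_paths[i])).unsqueeze(0).float()
        bmd = torch.from_numpy(np.load(self.bmd_paths[i])).unsqueeze(0).float()
        return cbct, bmd
```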
- The training data obtained in this way may be used to train the deep learning model 100 according to the present disclosure, described above with reference to FIGS. 4 to 6.
- The embodiments of the present disclosure described above may be implemented not only through devices and methods, but also through a program that realizes functions corresponding to the configurations of the embodiments, or through a recording medium on which such a program is recorded.
Abstract
According to an embodiment disclosed herein, a medical image processing method performed by a computing device may include the steps of: training a first model on mapping information between a cone-beam CT (CBCT) training image and a bone mineral density-related training image corresponding to the CBCT training image, thereby learning to infer the bone mineral density-related training image from the CBCT training image; and training a second model to infer the bone mineral density-related training image from the CBCT training image and an initial bone mineral density-related image output from the first model.
Description
The present disclosure relates to a medical image processing method and apparatus, and more specifically, to a method and apparatus for processing a Cone-Beam CT (CBCT) image.
Bone Mineral Density (BMD) measurement is a method of estimating bone mass in order to diagnose osteoporosis and predict future fracture risk. In dental implant treatment as well, accurate in vivo measurement of alveolar bone quality is very important in determining the primary stability of the implant. It is therefore necessary to diagnose, before surgery, whether the alveolar bone mineral density (BMD) is sufficient for implant placement. In general CT images, bone mineral density is measured quantitatively through correction of HU (Hounsfield Units) values, and this method is called QCT (Quantitative CT).
Meanwhile, CBCT images have recently been widely used for dental diagnosis, treatment, and surgical planning. CBCT images offer advantages over general CT images, such as a lower radiation dose to the patient, a shorter acquisition time, and a higher resolution. However, since the voxel values of a CBCT image are arbitrary and do not correctly provide HU values, accurate measurement of alveolar bone quality using CBCT has not been possible.
Several studies have been conducted to resolve this HU discrepancy between general CT and CBCT data. Representatively, a study analyzed the relationship between the voxel values of CBCT images and the HU values of general CT images using a BMD calibration phantom containing material inserts of various CT attenuation coefficients. However, due to the non-uniform nature of CBCT voxel values and the nonlinear relationship between CBCT voxel values and HU, that study only concluded that it is impossible to correlate the voxel values of a CBCT image with HU values.
Accordingly, there is a need in the art for a technique for quantitatively and immediately measuring bone mineral density from CBCT images.
The problem to be solved is to provide a method for quantitatively and immediately measuring bone density from a CBCT image, which, unlike a general CT image, has voxel values in a nonlinear relationship with bone density values. In addition to the above, the disclosure may be used to achieve other objects not specifically mentioned.
A medical image processing method performed by a computing device according to some embodiments of the present invention may include: training a first model to learn mapping information between a Cone-Beam CT (CBCT) image for learning and a bone density-related image for learning corresponding to the CBCT image for learning, so as to infer the bone density-related image for learning from the CBCT image for learning; and training a second model to infer the bone density-related image for learning from the CBCT image for learning and an initial bone density-related image output from the first model.
The bone density-related image for learning may be either a quantitative CT (QCT) image or a CT image.
The first model may include a first generator, which learns the mapping information to infer a bone density-related image from a CBCT image, and a first discriminator, which determines whether an input image is a real bone density-related image or a synthesized one, and may be trained in a direction that minimizes the result of a first loss function including an adversarial loss function of the first model, calculated based on the discrimination success probability of the first discriminator.
The first model may further include a second generator, which learns the mapping information to infer the CBCT image from the bone density-related image, and a second discriminator, which determines whether an input image is a real CBCT image or a synthesized one, and may be optimized in a direction that minimizes the result of the first loss function including a combination of the adversarial loss function of the first model, calculated based on the discrimination success probabilities of the first and second discriminators, and a cycle consistency loss function of the first model.
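For reference, a standard Cycle-GAN formulation of such a first loss function can be written as below; the explicit formula and the weight λ are assumptions, since the patent names the loss terms without giving equations. Here X is the CBCT domain, Y the bone density-related image domain, G: X → Y the first generator, F: Y → X the second generator, and D_X, D_Y the corresponding discriminators:

```latex
% Adversarial terms for both generators plus cycle consistency (assumed weights).
\mathcal{L}_{1}(G, F, D_X, D_Y)
  = \mathcal{L}_{\mathrm{adv}}(G, D_Y, X, Y)
  + \mathcal{L}_{\mathrm{adv}}(F, D_X, Y, X)
  + \lambda\,\mathcal{L}_{\mathrm{cyc}}(G, F),
\quad
\mathcal{L}_{\mathrm{cyc}}(G, F)
  = \mathbb{E}_{x}\!\left[\lVert F(G(x)) - x \rVert_1\right]
  + \mathbb{E}_{y}\!\left[\lVert G(F(y)) - y \rVert_1\right]
```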
The first model may be implemented to include a Cycle-GAN (Generative Adversarial Network) structure.
The initial bone density-related image may include the mapping information on the voxel intensity relationship between the CBCT image for learning and the bone density-related image for learning, and may be a QCT-like or CT-like image with increased bone contrast relative to the CBCT image for learning.
The second model may output a final bone density-related image including Bone Mineral Density (BMD) data corresponding to the CBCT image for learning.
The second model may be optimized in a direction that minimizes the result of a second loss function including a function of the difference between the bone density-related image for learning and the final bone density-related image.
The function of the difference may include at least one of the mean absolute difference (MAD) of the voxel intensity differences between the bone density-related image for learning and the final bone density-related image, and the structural similarity (SSIM) between those two images.
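One plausible concrete form of such a second loss function is given below; the weights and the use of 1 − SSIM are assumptions, since the patent only names MAD and SSIM as components:

```latex
% y: bone density-related image for learning; \hat{y}: final output of the
% second model; \alpha, \beta: assumed weighting hyperparameters.
\mathcal{L}_{2}(y, \hat{y})
  = \alpha\,\mathrm{MAD}(y, \hat{y})
  + \beta\,\bigl(1 - \mathrm{SSIM}(y, \hat{y})\bigr),
\qquad
\mathrm{MAD}(y, \hat{y}) = \frac{1}{N}\sum_{i=1}^{N}\lvert y_i - \hat{y}_i\rvert
```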
The second model may be implemented to include a multi-channel U-Net structure that receives the CBCT image for learning and the initial bone density-related image through multiple channels.
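For illustration, the multi-channel input can be realized simply by letting the network's first convolution accept two input channels, as in the PyTorch sketch below; the layer width is an illustrative assumption, and the patent does not specify the U-Net's internals:

```python
import torch
import torch.nn as nn

# First convolution of a 2-channel U-Net: one channel for the CBCT image for
# learning, one for the initial bone density-related image.
stem = nn.Conv2d(in_channels=2, out_channels=64, kernel_size=3, padding=1)

cbct = torch.randn(1, 1, 256, 256)     # CBCT image for learning
initial = torch.randn(1, 1, 256, 256)  # initial bone density-related image
features = stem(torch.cat([cbct, initial], dim=1))  # -> (1, 64, 256, 256)
```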
The final bone density-related image may be a QCT-like image or a CT-like image that includes the mapping information contained in the initial bone density-related image and the anatomical structure information of the CBCT image for learning, with suppressed artifacts.
The method may further include: acquiring a raw CBCT image captured of a subject and a raw CT image corresponding to the raw CBCT image; correcting the raw CT image using a BMD calibration phantom CT image captured under conditions corresponding to those of the raw CT image, and acquiring the corrected raw CT image as a raw QCT image; removing non-anatomical regions from the raw CBCT image and the raw QCT image; registering the raw CBCT image and the raw QCT image; and acquiring the raw CBCT image as the CBCT image for learning and the raw QCT image as the bone density-related image for learning.
The method may alternatively further include: acquiring a raw CBCT image captured of a subject and a raw CT image corresponding to the raw CBCT image; removing non-anatomical regions from the raw CBCT image and the raw CT image; registering the raw CBCT image and the raw CT image; and acquiring the raw CBCT image as the CBCT image for learning and the raw CT image as the bone density-related image for learning.
A medical image processing method performed by a computing device according to some embodiments of the present invention may include: acquiring a Cone-Beam CT (CBCT) image; and inputting the CBCT image to a pre-trained deep learning model to obtain a final bone density-related image including Bone Mineral Density (BMD) data corresponding to the CBCT image.
The final bone density-related image may be a QCT-like image or a CT-like image.
The method may further include acquiring the bone density data corresponding to the CBCT image based on the final bone density-related image, wherein acquiring the bone density data includes: when the final bone density-related image is the QCT-like image, immediately acquiring the bone density data from the final bone density-related image; or, when the final bone density-related image is the CT-like image, correcting the final bone density-related image to generate a QCT image corresponding to the CT-like image and acquiring the bone density data from the generated QCT image.
According to some embodiments of the present disclosure, bone density can be measured quantitatively and immediately from a CBCT image, which, unlike a general CT image, has voxel values in a nonlinear relationship with bone density values.
According to some embodiments of the present disclosure, by processing a CBCT image through a deep learning model including a combination of a Cycle-GAN structure and a multi-channel U-Net structure, a synthesized QCT image with improved bone contrast and uniformity can be obtained.
FIG. 1 is a block diagram illustrating a computing device providing a medical image processing method according to some embodiments of the present disclosure.
FIG. 2 is a schematic diagram of a medical image processing method according to some embodiments of the present disclosure.
FIG. 3 is a flowchart of a medical image processing method according to some embodiments of the present disclosure.
FIG. 4 is a flowchart of a deep learning model training method for providing a medical image processing method according to some embodiments of the present disclosure.
FIG. 5 is a schematic diagram of a deep learning model training method for providing a medical image processing method according to some embodiments of the present disclosure.
FIG. 6 is a diagram exemplarily illustrating a deep learning model training method for providing a medical image processing method according to some embodiments of the present disclosure.
FIG. 7 is a diagram illustrating bone density data measurement performance according to the medical image processing method of the present disclosure.
FIG. 8 is a diagram illustrating bone density data measurement performance according to the medical image processing method of the present disclosure.
FIG. 9 is a diagram illustrating bone density data measurement performance according to the medical image processing method of the present disclosure.
FIG. 10 is a flowchart of a method of obtaining training data in a medical image processing method according to some embodiments of the present disclosure.
Hereinafter, embodiments of the present disclosure are described in detail, with reference to the accompanying drawings, so that those of ordinary skill in the art to which the present disclosure pertains can easily practice them. The present disclosure may, however, be embodied in many different forms and is not limited to the embodiments described herein. In the drawings, parts irrelevant to the description are omitted for clarity, and similar reference numerals denote similar parts throughout the specification.
In the present disclosure, when a part is said to "include" a component, this means that it may further include other components rather than excluding them, unless stated otherwise. The devices constituting a network may be implemented as hardware, software, or a combination of hardware and software.
In the present disclosure, the term "image" may be used to mean a medical image of a subject, in particular a medical image that is the target of input, output, and processing for a deep learning model. For example, an image may be a CT image, a CBCT image, and/or a QCT image captured of the maxilla region of a subject. Alternatively, an image may be a synthesized QCT image output by inputting a CBCT image captured of a subject to a deep learning model. The output image may be an image including bone density data corresponding to the input CBCT image.
However, the term is not limited thereto; in the present disclosure, an image may be a CT image, a CBCT image, a QCT image, a magnetic resonance image (MRI), an ultrasound image, an endoscopy image, a thermography image, or a nuclear medicine image of an arbitrary body part of a subject, and/or an image output as an intermediate or final product by inputting any of these to a deep learning model.
In the present disclosure, the term "bone density-related image" may mean an image including bone density data of a subject, for example an image from which bone density data of the subject can be measured and/or an image usable for measuring bone density data. In particular, in the present disclosure, a bone density-related image may be a general CT image or a quantitative CT (QCT) image obtained by correcting a general CT image based on a calibration phantom.
In the present disclosure, the term "model" may be used to mean a deep learning model including one or more neural network structures, in particular a deep learning model for processing a medical image to obtain data. For example, a model may be a deep learning model trained to receive a CBCT image captured of a subject and output an image including bone density data for the input CBCT image. However, it is not limited thereto; a model may be a deep learning model trained to receive various medical images other than CBCT images and output arbitrary data, such as quantification data, corrected image data, or segmentation data, for the input medical image.
Also, in the present disclosure, a model may be implemented as a combination of one or more neural network structures, for example a combination of a Cycle-GAN (Generative Adversarial Network) structure and a U-Net structure. However, it is not limited thereto; in the present disclosure, a model may be implemented with only one of a Cycle-GAN structure and a U-Net structure, or using any neural network structure usable for medical image processing, such as a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), an autoencoder, or a Deep Residual Network (DRN).
In the present disclosure, the term "raw image" may be used to mean a medical image captured of a subject, in particular a medical image that has not undergone image preprocessing, such as noise removal, registration, cropping, or resizing, after being captured. In the present disclosure, a raw image may be classified into a raw CBCT image, a raw CT image, a raw QCT image, and so on, according to the type of the captured medical image, but is not limited thereto. Meanwhile, in the present disclosure, the following training data may be obtained through preprocessing of raw images.
In the present disclosure, the term "training data" may be used to mean the data used to train the above-described model. In the present disclosure, training data may consist of one or more image pairs, each including a CBCT image and a corresponding CT image or QCT image, that is, one acquired of the subject corresponding to the CBCT image or under corresponding imaging conditions.
Also, in the present disclosure, training data may consist of training images obtained through preprocessing of the aforementioned raw images. For example, a raw CBCT image may be registered with the corresponding raw QCT image, and the registered raw CBCT image acquired as a CBCT image. Alternatively, after noise is removed from a raw CBCT image, the raw CBCT image may be registered with the corresponding raw QCT image, and the registered raw CBCT image acquired as a CBCT image. Similarly, for a raw QCT image obtained from a raw CT image and a corresponding phantom CT image for bone density correction, the registered raw QCT image may be acquired as a QCT image after noise removal and/or registration with the corresponding raw CBCT image. However, it is not limited thereto.
In the present disclosure, the term "bone density data" means data about the density of a subject's bone, and in particular one of the kinds of data to be obtained by inputting a medical image to a deep learning model. Specifically, the term bone density data may mean data in a form from which quantitative bone density can be measured. For example, in the present disclosure, bone density data may be a synthesized QCT image obtained by inputting a CBCT image of a subject to a deep learning model, or bone density-related data obtained based on the voxel intensities of the synthesized QCT image. However, it is not limited thereto, and bone density data may be obtained as data in any form, such as a graph, a heat map, or a table, in addition to an image.
FIG. 1 is a block diagram illustrating a computing device providing a medical image processing method according to some embodiments of the present disclosure.
Referring to FIG. 1, a computing device 10 according to the present disclosure may include one or more processors 11, a memory 12 for loading programs executed by the processors 11, a storage 13 for storing programs and various data, and a communication interface 14. However, the above-described components are not essential to implementing the computing device 10 according to the present disclosure, so the computing device 10 may have more or fewer components than those listed above. For example, the computing device 10 may further include an output unit and/or an input unit (not shown), or the storage 13 may be omitted.
A program consists of a series of computer-readable instructions grouped by function and is executed by a processor. The program may include instructions that, when loaded into the memory 12, cause the processor 11 to perform methods/operations according to various embodiments of the present disclosure. That is, the processor 11 may perform methods/operations according to various embodiments of the present disclosure by executing the instructions.
The processor 11 controls the overall operation of each component of the computing device 10. The processor 11 may include at least one of a Central Processing Unit (CPU), a Micro Processor Unit (MPU), a Micro Controller Unit (MCU), a Graphics Processing Unit (GPU), or any type of processor well known in the technical field of the present disclosure. Also, the processor 11 may perform operations for at least one application or program for executing methods/operations according to various embodiments of the present disclosure.
For example, the processor 11 according to the present disclosure may train a first model to learn mapping information between a Cone-Beam CT (CBCT) image for learning and a bone density-related image for learning and to infer the bone density-related image for learning from the CBCT image for learning, and may train a second model to infer the bone density-related image for learning from the CBCT image for learning and the initial bone density-related image output from the first model.
Alternatively, the processor 11 according to the present disclosure may control the communication interface 14, described below, to receive a CBCT image for learning and a bone density-related image for learning, or may process a raw CBCT image and a raw CT image or raw QCT image received through the communication interface 14 to generate training data.
Alternatively, the processor 11 according to the present disclosure may control the communication interface 14 to receive a CBCT image of an arbitrary subject, input it to a pre-trained deep learning model stored in the memory and/or the storage to obtain a final bone density-related image, and control an output unit (not shown) to output the final bone density-related image to a user terminal. The above are only examples of the processor 11 controlling the overall operation of each component of the computing device 10 to provide the medical image processing method according to the present disclosure, and do not limit the present disclosure.
The memory 12 stores various data, commands, and/or information. The memory 12 may load one or more programs from the storage 13 to execute methods/operations according to various embodiments of the present disclosure. The memory 12 may be implemented as a volatile memory such as RAM, but the technical scope of the present disclosure is not limited thereto.
The storage 13 may store programs non-temporarily. The storage 13 may include a non-volatile memory such as a Read Only Memory (ROM), an Erasable Programmable ROM (EPROM), an Electrically Erasable Programmable ROM (EEPROM), or a flash memory, a hard disk, a removable disk, or any form of computer-readable recording medium well known in the technical field to which the present disclosure belongs.
The storage 13 according to the present disclosure may store, for example, a deep learning model according to the present disclosure trained by the above-described processor 11. Alternatively, the storage 13 may store a program including instructions for training a deep learning model according to the present disclosure. However, it is not limited thereto.
The communication interface 14 according to the present disclosure may be a wired/wireless communication module. The communication interface 14 may, for example, receive a CBCT image of an arbitrary subject from a server (not shown), transmit a final bone density-related image, obtained by inputting the received CBCT image to the above-described deep learning model, to a server and/or a user terminal, or receive training data for training the above-described deep learning model. However, it is not limited thereto.
FIG. 2 is a schematic diagram of a medical image processing method according to some embodiments of the present disclosure.
Referring to FIG. 2, according to the medical image processing method of the present disclosure, the computing device 10 may input a CBCT image 200 to a pre-trained deep learning model 100 and obtain a final bone density-related image 500 including bone density data corresponding to the CBCT image. Accordingly, bone density can ultimately be measured immediately and quantitatively from a CBCT image. This contrasts with conventional CBCT images, which, despite advantages such as a low radiation dose, a short acquisition time, and a high resolution, have been difficult to use for bone density measurement because their voxel intensity values are arbitrary and do not correctly provide the HU (Hounsfield Units) values needed for bone density measurement.
FIG. 3 is a flowchart of a medical image processing method according to some embodiments of the present disclosure. FIG. 3 shows the medical image processing method of the present disclosure, described above with reference to FIG. 2, in detail according to the type of the final bone density-related image.
Referring to FIG. 3, the computing device 10 may first acquire a CBCT image (S110). For example, the processor 11 may control a camera unit (not shown) to receive a CBCT image captured of a subject, or control the communication interface 14 to receive a CBCT image captured of a subject.
Next, the computing device 10 may input the CBCT image to the pre-trained deep learning model and obtain a final bone density-related image including bone density data corresponding to the CBCT image (S120).
Here, the final bone density-related image may be a QCT-like image or a CT-like image corresponding to the input CBCT image. Specifically, the type of the final bone density-related image obtained by inputting a CBCT image to the pre-trained deep learning model may differ according to the type of training data used to train the deep learning model.
For example, when a CBCT image and a corresponding QCT image are used as training data, the deep learning model learns the mapping information between the CBCT image and the QCT image, and accordingly, when a CBCT image is input, may output a QCT-like image including bone density data corresponding to the CBCT image as the final bone density-related image.
Similarly, when a CBCT image and a corresponding CT image are used as training data, the deep learning model learns the mapping information between the CBCT image and the CT image, and accordingly, when a CBCT image is input, may output a CT-like image including bone density data corresponding to the CBCT image as the final bone density-related image.
The method of training the deep learning model 100 is described in more detail below with reference to FIGS. 4 to 6, and the method of obtaining the training data used to train the deep learning model 100 with reference to FIG. 10.
Next, when the final bone density-related image is a QCT-like image (S130, Y), the computing device 10 may immediately acquire bone density data from the final bone density-related image (S140). For example, bone density data may be immediately measured and acquired from the voxel values of the final bone density-related image, which is a QCT-like image.
Alternatively, when the final bone density-related image is not a QCT-like image but a CT-like image (S130, N), the computing device 10 may correct the final bone density-related image to generate a corresponding QCT image, and acquire bone density data from the generated QCT image (S150).
Specifically, when the final bone density-related image is a CT-like image, instead of immediately measuring bone density data from it, a QCT image may be obtained by correcting the CT-like image using a phantom for bone density correction, and the bone density data measured from that QCT image. Since the method of correcting a CT image using a phantom for bone density correction is known in the technical field of the present disclosure, a detailed description is omitted.
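For illustration, a minimal sketch of the branching flow of FIG. 3 (S110 to S150); the helpers `measure_bmd` and `calibrate_with_phantom` are hypothetical stand-ins for the measurement and phantom-based calibration steps described in the text:

```python
import numpy as np


def measure_bmd(qct_like: np.ndarray, roi) -> float:
    """Immediate measurement: mean voxel value of the ROI (an assumption)."""
    return float(qct_like[roi].mean())


def calibrate_with_phantom(ct_like: np.ndarray, a: float, b: float) -> np.ndarray:
    """Phantom-derived linear calibration of a CT-like image to a QCT image."""
    return a * ct_like + b


def bone_density_from_cbct(cbct, model, is_qct_like: bool, roi,
                           a: float = 1.0, b: float = 0.0) -> float:
    final_image = model(cbct)                    # S120: final bone density-related image
    if is_qct_like:                              # S130, Y
        return measure_bmd(final_image, roi)     # S140: immediate measurement
    qct = calibrate_with_phantom(final_image, a, b)  # S150: correct, then measure
    return measure_bmd(qct, roi)
```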
The method of training the deep learning model 100 according to the present disclosure is described in detail below with reference to FIGS. 4 to 6.
FIG. 4 is a flowchart of a deep learning model training method for providing a medical image processing method according to some embodiments of the present disclosure.
Referring to FIG. 4, the computing device 10 may first train a first model to learn the mapping information between a CBCT image for learning and a bone density-related image for learning corresponding to the CBCT image for learning, and to infer the bone density-related image for learning from the CBCT image for learning (S210).
Here, the first model 110 is trained to infer, from a CBCT image for learning, the bone density-related image for learning corresponding to it, and may output an initial bone density-related image 400 in response to an input CBCT image.
The initial bone density-related image 400 may include voxel intensity mapping information between the input CBCT image for learning and the corresponding bone density-related image for learning. The bone density-related image for learning corresponding to the CBCT image for learning may be, for example, a QCT image or a CT image obtainable from the same subject and/or under the same imaging conditions as the CBCT image for learning. That is, the initial bone density-related image 400 may include information about the voxel intensity mapping (or correspondence) relationship between a CBCT image and a QCT image or CT image that can be obtained from the same subject and/or under the same imaging conditions.
Since the bone density distribution information of the bone density-related image corresponding to the CBCT image is reflected in the initial bone density-related image 400, the initial image tends to have increased bone contrast (the luminance difference between bone and the surrounding tissue in a medical image) relative to the CBCT image. That is, the initial bone density-related image 400 may be a QCT-like or CT-like image with increased bone contrast relative to the input CBCT image.
For example, when the first model 110 is trained, using a CBCT image for learning and a bone density-related image for learning that is a corresponding QCT image as training data, to infer the bone density-related image for learning from the CBCT image for learning, the initial bone density-related image 400 may be a QCT-like image. Similarly, when the bone density-related image for learning is a corresponding CT image, the initial bone density-related image 400 may be a CT-like image.
Next, the computing device 10 may train a second model to infer the bone density-related image for learning from the CBCT image for learning and the initial bone density-related image output from the first model (S220).
Here, the second model 120 is trained to infer, from the CBCT image for learning and the initial bone density-related image 400, the bone density-related image for learning corresponding to the CBCT image for learning, and may output a final bone density-related image 500. The second model 120 may be connected in series with the first model 110 to constitute, as a whole, the deep learning model 100 according to the present disclosure. However, it is not limited thereto.
The final bone density-related image 500 output from the second model 120 may be a QCT-like image or a CT-like image that includes the voxel intensity mapping information contained in the initial bone density-related image 400 and the anatomical structure information of the input CBCT image, with suppressed artifacts.
For example, when the second model 120 is trained to infer, from a CBCT image for learning and an initial bone density-related image 400 that is a QCT-like image, the bone density-related image for learning corresponding to the CBCT image for learning, the final bone density-related image 500 may be a QCT-like image. Similarly, when the initial bone density-related image 400 is a CT-like image, the final bone density-related image 500 may be a CT-like image.
Meanwhile, an image with suppressed artifacts here means an image from which obstructive elements, such as artifacts and scattering noise, have been removed, that is, an image with improved uniformity, where uniformity means the degree to which the HU value of each pixel is uniform in an image of a uniform object.
Also, as described above, the final bone density-related image 500 may be an image including bone density data corresponding to the input CBCT image. Accordingly, bone density data corresponding to a CBCT image 200 may be obtained from the final bone density-related image 500 obtained by inputting the CBCT image 200 to the deep learning model 100. For example, information about the bone density of the subject whose CBCT image was captured may be obtained immediately and quantitatively from the final bone density-related image 500, which is a QCT-like image.
FIG. 5 is a schematic diagram of a deep learning model training method for providing a medical image processing method according to some embodiments of the present disclosure. Specifically, FIG. 5 schematically shows the training process of the first model 110 and the second model 120 of the present disclosure described above with reference to FIG. 3.
First, the first model 110 receives image pairs of a training CBCT image 300 and a corresponding training bone density related image 310, learns the mapping information between the training CBCT image 300 and the training bone density related image 310, and on that basis may be trained to infer the training bone density related image 310 from the training CBCT image 300. Accordingly, the first model 110 may output an initial bone density related image 400 in response to the input of the training CBCT image 300.
Here, the first model 110 may be optimized based at least on the probability of successfully discriminating the actual bone density related image between the output initial bone density related image 400 and the training bone density related image 310. For instance, when the training bone density related image 310 is a QCT image corresponding to the training CBCT image 300 and the initial bone density related image 400 is a QCT-like image, the first model 110 may be optimized based at least on the probability of successfully discriminating the actual QCT image between the initial bone density related image 400 and the training bone density related image 310. Likewise, when the training bone density related image 310 is a CT image corresponding to the training CBCT image 300 and the initial bone density related image 400 is a CT-like image, the first model 110 may be optimized based at least on the probability of successfully discriminating the actual CT image between the two.
In this case, a first loss function may be used to optimize the first model 110. The value of the first loss function may be calculated from a plurality of components, including the above probability of successfully discriminating between the initial bone density related image 400 and the training bone density related image 310. The calculation of the first loss function is described below with reference to FIG. 6.
Next, the second model 120 receives the training CBCT image 300 and the initial bone density related image 400 output from the first model 110, and may be trained to infer the training bone density related image 310 from them. Accordingly, the second model 120 may output a final bone density related image 500 containing bone density data corresponding to the training CBCT image 300.
Here, the second model 120 may be optimized based on the difference (or loss) between the training bone density related image 310 and the output final bone density related image 500. A second loss function may be used to optimize the second model 120, and its value may include a function of the difference between the training bone density related image 310 and the output final bone density related image 500.
Specifically, the value of the second loss function may be calculated from at least one of the mean absolute difference (MAD) of the voxel intensity differences between the training bone density related image 310 and the output final bone density related image 500, and the structural similarity (SSIM) between the two images. The calculation of the second loss function is described below with reference to FIG. 6.
도 6은 본 개시의 몇몇 실시예에 따른 의료 영상 처리 방법을 제공하기 위한 딥 러닝 모델 학습 방법을 예시적으로 도시한 도면이다.6 is a diagram exemplarily illustrating a deep learning model training method for providing a medical image processing method according to some embodiments of the present disclosure.
도 6을 참조하면, 제 1 모델(110)은 Cycle-GAN(Generative Adversarial Network) 구조를 포함하도록 구현될 수 있고, 제 2 모델(120)은 다중 채널(multi-channel)을 통해 학습용 CBCT 영상(300) 및 초기 골밀도 연관 영상(400)을 입력 받는 다중 채널 U-Net 구조를 포함하도록 구현될 수 있다. Referring to FIG. 6 , the first model 110 may be implemented to include a Cycle-GAN (Generative Adversarial Network) structure, and the second model 120 is a CBCT image for learning (multi-channel) 300) and an initial bone density related image 400 may be implemented to include a multi-channel U-Net structure.
먼저 제 1 모델(110)과 관련하여, 일반적인 GAN 구조는, 동시에 훈련된 생성기 및 판별기로 구성된다. 생성기는 실제와 유사한 이미지를 생성함으로써 판별기를 속여 그 유사한 이미지를 실제 이미지로 분류하게 하는 것을 목표로 가지는 반면, 판별기는 실제 이미지와 유사한 이미지를 서로 분류해내는 것을 목표로 가진다. Regarding the first model 110 first, a general GAN structure consists of a generator and discriminator trained simultaneously. The generator aims to trick the discriminator into classifying the similar image as a real image by creating a real-like image, while the discriminator aims to classify real-like images from each other.
이와 비교하여 Cycle-GAN은, 생성된 이미지를 복원한 이미지와 실제 이미지 사이의 손실이 최소화되어야 한다는 제약 조건이 추가된다. 이에 따라 Cycle-GAN 구조는 생성된 이미지를 실제 이미지와 가깝게 복원하는, 즉 역변환(inverse transformation) 과정을 수행하는 생성기와 판별기를 더 포함하고, 따라서 총 두 개의 생성기 및 두 개의 판별기로 구성된다. 이처럼 Cycle-GAN은 역변환을 적용함으로써, 일반적인 GAN의 프로세스를 두 배로 놀려 모델을 두 배로 제한하고 출력 이미지의 정확도를 높일 수 있다. In comparison, Cycle-GAN adds a constraint condition that the loss between the generated image and the actual image must be minimized. Accordingly, the Cycle-GAN structure further includes a generator and a discriminator that restores the generated image closer to the actual image, that is, performs an inverse transformation process, and thus consists of two generators and two discriminators. In this way, Cycle-GAN can limit the model to double by doubling the process of a general GAN by applying inverse transformation and increase the accuracy of the output image.
Accordingly, the first model 110 may include a first generator 111 that learns the mapping information and is trained to infer the training bone density related image 310 from the training CBCT image 300, a second generator 112 trained to infer the training CBCT image 300 from the training bone density related image 310, a first discriminator 115 trained to determine whether an input image is an actual or a synthesized bone density related image, and a second discriminator 116 trained to determine whether an input image is an actual or a synthesized CBCT image. However, the present disclosure is not limited thereto, and the first model 110 may also consist of only the first generator 111 and the first discriminator 115 among the above components; in that case, the first model 110 may be implemented to include a GAN structure instead of a Cycle-GAN structure.
Specifically, referring to the upper-left part of FIG. 6, the first generator 111 may receive the training CBCT image 300 and output the initial bone density related image 400, and the second generator 113 may receive the initial bone density related image 400 output from the first generator 111 and output a reconstructed CBCT image 301. Here, the reconstructed CBCT image 301 may be an image in which the initial bone density related image 400 has been reconstructed to be close to the training CBCT image 300 that was input to the first generator 111.
Next, referring to the lower-left part of FIG. 6, the second generator 112 may receive the training bone density related image 310 and output a synthesized CBCT image 311, and the first generator 114 may receive the synthesized CBCT image 311 output from the second generator 112 and output a reconstructed bone density related image 312. Here, the reconstructed bone density related image 312 may be an image in which the synthesized CBCT image 311 has been reconstructed to be close to the training bone density related image 310 that was input to the second generator 112.
Next, referring to the center-left part of FIG. 6, the first discriminator 115 receives the initial bone density related image 400 and/or the training bone density related image 310 and may determine whether the input image is an actual bone density related image or a synthesized one.
For instance, when the training bone density related image 310 is a QCT image corresponding to the training CBCT image 300 and the initial bone density related image 400 is a QCT-like image, the first model 110 receives the initial bone density related image 400 and the training bone density related image 310 and may determine whether the input image is an actual QCT image or a synthesized QCT image.
Likewise, when the training bone density related image 310 is a CT image corresponding to the training CBCT image 300 and the initial bone density related image 400 is a CT-like image, the first model 110 receives the initial bone density related image 400 and the training bone density related image 310 and may determine whether the input image is an actual CT image or a synthesized CT image.
Meanwhile, the first discriminator 115 may output 1 when it determines that the input image is an actual bone density related image and 0 when it determines that the input image is a synthesized bone density related image. Accordingly, when the first discriminator 115 outputs 0 for an input initial bone density related image 400 and 1 for an input training bone density related image 310, the discrimination may be judged successful.
Meanwhile, the second discriminator 116 receives the synthesized CBCT image 311 and/or the training CBCT image 300 and may determine whether the input image is an actual CBCT image or a synthesized one. Specifically, the second discriminator 116 may output 0 when it determines that the input image is an actual CBCT image and 1 when it determines that the input image is a synthesized CBCT image. Accordingly, when the second discriminator 116 outputs 1 for an input synthesized CBCT image 311 and 0 for an input training CBCT image 300, the discrimination may be judged successful.
Meanwhile, to optimize the first model 110, an adversarial loss function calculated based on the discrimination success probabilities of the first discriminator 115 and the second discriminator 116 may be used. That is, the adversarial loss function calculated from the discriminator outputs may, together with the cycle consistency loss function described below, constitute the first loss function for optimizing the first model 110.
The adversarial loss functions constituting the first loss function may be defined, for each of the first discriminator 115 and the second discriminator 116, as follows, where $D_{QCT}$ denotes the first discriminator 115, $D_{CBCT}$ the second discriminator 116, $I_{QCT}$ the training bone density related image 310, $I_{CBCT}$ the training CBCT image 300, $G_{CBCT \rightarrow QCT}$ the first generator 111, and $G_{QCT \rightarrow CBCT}$ the second generator 112:
[Equation 1]

$$L_{ADV}(G_{CBCT \rightarrow QCT}) = D_{QCT}(I_{QCT})^{2} + \left( D_{QCT}\left( G_{CBCT \rightarrow QCT}(I_{CBCT}) \right) - 1 \right)^{2}$$

[Equation 2]

$$L_{ADV}(G_{QCT \rightarrow CBCT}) = D_{CBCT}(I_{CBCT})^{2} + \left( D_{CBCT}\left( G_{QCT \rightarrow CBCT}(I_{QCT}) \right) - 1 \right)^{2}$$
When the discrimination success probability of the first discriminator 115 is highest, the value of its adversarial loss function (Equation 1) may be 0; likewise, when the discrimination success probability of the second discriminator 116 is highest, the value of its adversarial loss function (Equation 2) may be 0.
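By way of illustration only, the least-squares adversarial loss of Equations 1 and 2 could be computed as in the following sketch. It assumes PyTorch, and the module names (G_cbct2qct, D_qct, and so on) are hypothetical and not part of the present disclosure:

```python
import torch

def adversarial_loss(D, real, fake):
    # Least-squares form of Equations 1 and 2:
    # L_ADV = D(real)^2 + (D(fake) - 1)^2, averaged over the batch
    return (D(real) ** 2).mean() + ((D(fake) - 1) ** 2).mean()

# usage sketch (all modules are assumed nn.Module instances):
# fake_qct  = G_cbct2qct(cbct)                           # CBCT -> QCT-like
# loss_adv1 = adversarial_loss(D_qct, qct, fake_qct)     # Equation 1
# fake_cbct = G_qct2cbct(qct)                            # QCT -> CBCT-like
# loss_adv2 = adversarial_loss(D_cbct, cbct, fake_cbct)  # Equation 2
```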
Meanwhile, the first loss function for optimizing the first model 110 may include a cycle consistency loss function in addition to the adversarial loss functions described above.
The cycle consistency loss function may be a function that computes a loss by comparing the image reconstructed from a synthesized image so as to be close to the real image with the originally input real image. In other words, it may indicate how closely the synthesized image, produced by feeding the real image into a generator, has been reconstructed back to the real image.
The cycle consistency loss function of the first model 110 may be calculated based on the loss (or difference) between the reconstructed CBCT image 301 and the training CBCT image 300, and the loss between the reconstructed bone density related image 312 and the training bone density related image 310.
The cycle consistency loss function constituting the first loss function may be defined as follows:
[Equation 3]

$$L_{CYC} = \left\| G_{QCT \rightarrow CBCT}\left( G_{CBCT \rightarrow QCT}(I_{CBCT}) \right) - I_{CBCT} \right\| + \left\| G_{CBCT \rightarrow QCT}\left( G_{QCT \rightarrow CBCT}(I_{QCT}) \right) - I_{QCT} \right\|$$
In Equation 3, the first term to the right of the equals sign represents the loss between the reconstructed CBCT image 301 and the training CBCT image 300, and the second term the loss between the reconstructed bone density related image 312 and the training bone density related image 310.
The more closely the second generator 113 reconstructs the initial bone density related image 400 toward the training CBCT image 300 input to the first generator 111, the closer the value of the first term of Equation 3 approaches 0; likewise, the more closely the first generator 114 reconstructs the synthesized CBCT image 311 toward the training bone density related image 310 input to the second generator 112, the closer the value of the second term of Equation 3 approaches 0.
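A corresponding sketch for the cycle consistency loss of Equation 3 follows; the L1 norm used here is an assumption, since the norm is not specified above, and the function names mirror the sketch for Equations 1 and 2:

```python
def cycle_consistency_loss(G_ab, G_ba, a, b):
    # || G_ba(G_ab(a)) - a || + || G_ab(G_ba(b)) - b ||   (Equation 3),
    # with an L1 norm assumed for the image difference
    rec_a = G_ba(G_ab(a))  # e.g. CBCT -> QCT-like -> reconstructed CBCT
    rec_b = G_ab(G_ba(b))  # e.g. QCT -> CBCT-like -> reconstructed QCT
    return (rec_a - a).abs().mean() + (rec_b - b).abs().mean()
```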
Taken together, the first loss function of the first model 110 is a combination of the adversarial loss functions and the cycle consistency loss function and may be defined as follows, where λ is a weight that controls the relative importance of the adversarial loss functions of the first model 110. For example, λ may be 10. However, the present disclosure is not limited thereto, and any suitable value of λ may be selected in silico or experimentally:
[Equation 4]

$$L_{GAN} = L_{ADV}(G_{CBCT \rightarrow QCT}) + L_{ADV}(G_{QCT \rightarrow CBCT}) + \lambda L_{CYC}$$
The first model 110 may be optimized based on the result of the first loss function, which combines the adversarial loss functions and the cycle consistency loss function. As described above, the higher the discrimination success probabilities of the first discriminator 115 and the second discriminator 116, and the higher the performance of the first generators 111 and 114 and the second generators 112 and 113 (here, in particular, the reconstruction performance), the smaller the resulting value of the first loss function. In other words, the first model 110 may be optimized in the direction that minimizes the result of the first loss function.
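Combining the two sketches above, one optimization step over the total loss of Equation 4 might look as follows. This is a simplified single-optimizer step; actual Cycle-GAN training typically alternates generator and discriminator updates, and all names here are illustrative:

```python
import itertools
import torch

def cyclegan_step(G_ab, G_ba, D_a, D_b, a, b, opt, lam=10.0):
    # one gradient step over L_GAN of Equation 4
    # (a: CBCT batch, b: QCT batch; opt covers both generators)
    fake_b, fake_a = G_ab(a), G_ba(b)
    loss = (adversarial_loss(D_b, b, fake_b)             # Equation 1
            + adversarial_loss(D_a, a, fake_a)           # Equation 2
            + lam * cycle_consistency_loss(G_ab, G_ba, a, b))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# opt = torch.optim.Adam(
#     itertools.chain(G_ab.parameters(), G_ba.parameters()), lr=2e-4)
```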
Meanwhile, in each of the generators 111, 112, 113, and 114 constituting the first model 110, the convolution blocks consist of 7x7 and 3x3 convolution layers with batch normalization and ReLU activation, and residual blocks may be inserted between the down-sampling layers and the up-sampling layers.
Each of the generators 111, 112, 113, and 114 constituting the first model 110 may be implemented, for example, as a ResNet structure containing nine residual blocks. Through the residual blocks in each generator, the network learns the difference between source and target, which lets it generate an initial bone density related image 400 containing more accurate voxel intensity mapping information between the training CBCT image 300 and the training bone density related image 310. However, the present disclosure is not limited thereto.
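For illustration, one plausible realization of such a generator is sketched below. The channel widths, strides, and use of 2D (rather than 3D) convolutions are assumptions; only the 7x7/3x3 kernels, batch normalization with ReLU, and nine residual blocks follow the description above:

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    # 3x3 conv -> BN -> ReLU -> 3x3 conv -> BN, plus identity skip
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))

    def forward(self, x):
        return x + self.body(x)

class ResNetGenerator(nn.Module):
    # 7x7 stem, two 3x3 down-sampling convs, nine residual blocks,
    # two up-sampling layers, and a 7x7 output conv
    def __init__(self, in_ch=1, out_ch=1, base=64, n_blocks=9):
        super().__init__()
        def cbr(i, o, **kw):
            return [nn.Conv2d(i, o, **kw), nn.BatchNorm2d(o), nn.ReLU(inplace=True)]
        layers = cbr(in_ch, base, kernel_size=7, padding=3)
        layers += cbr(base, base * 2, kernel_size=3, stride=2, padding=1)
        layers += cbr(base * 2, base * 4, kernel_size=3, stride=2, padding=1)
        layers += [ResidualBlock(base * 4) for _ in range(n_blocks)]
        layers += [nn.ConvTranspose2d(base * 4, base * 2, 3, 2, 1, output_padding=1),
                   nn.BatchNorm2d(base * 2), nn.ReLU(inplace=True),
                   nn.ConvTranspose2d(base * 2, base, 3, 2, 1, output_padding=1),
                   nn.BatchNorm2d(base), nn.ReLU(inplace=True),
                   nn.Conv2d(base, out_ch, 7, padding=3)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```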
Meanwhile, in each of the discriminators 115 and 116, the convolution blocks may consist of 4x4 convolution layers with batch normalization and Leaky ReLU activation, followed by down-sampling layers. Each discriminator may be implemented, for example, as a PatchGAN that judges the authenticity of the generator's output in patches of a specific size, for instance a PatchGAN with 70 x 70 patches. However, the present disclosure is not limited thereto.
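A sketch of a 70 x 70 PatchGAN discriminator consistent with this description follows; the channel widths are assumptions, and the final layer outputs one real/fake score per patch rather than per image:

```python
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    # 4x4 convs with LeakyReLU (and BatchNorm after the first layer);
    # with these strides the receptive field of each output score is
    # a 70 x 70 patch of the input
    def __init__(self, in_ch=1, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.BatchNorm2d(base * 2), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, base * 4, 4, stride=2, padding=1),
            nn.BatchNorm2d(base * 4), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 4, base * 8, 4, stride=1, padding=1),
            nn.BatchNorm2d(base * 8), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 8, 1, 4, stride=1, padding=1))

    def forward(self, x):
        return self.net(x)  # one score per 70x70 patch
```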
In summary, the first model 110 according to some embodiments of the present disclosure, i.e., the Cycle-GAN structure, is implemented using residual blocks and training data comprising training CBCT images 300 and corresponding training bone density related images 310, and can thereby learn the voxel intensity mapping information between the training CBCT image 300 and the training bone density related image 310. The initial bone density related image 400 therefore reflects the bone density distribution information of the training bone density related image 310 corresponding to the training CBCT image, so that bone contrast is increased relative to the training CBCT image 300.
The first model 110 works complementarily with the second model 120, described below, so that ultimately a final bone density related image 500 with improved bone contrast and uniformity and highly accurate bone density data can be obtained from a CBCT image.
Next, regarding the second model 120: a standard U-Net has a U-shaped encoder-decoder structure. Specifically, U-Net extracts the overall context of the input image along the contracting path of the encoder, while along the expanding path of the decoder it combines the context extracted at the same level of the contracting path with pixel position information through skip connections. That is, by combining part of the output of the encoding region with the decoding region, localization accuracy can be increased.
The second model 120 according to the present disclosure may include a multi-channel input for receiving the training CBCT image 300 and the initial bone density related image 400 output from the first model 110. The second model 120 may be trained to infer the training bone density related image 310 from the training CBCT image 300 and the initial bone density related image 400, and may output a final bone density related image 500 containing bone density data corresponding to the training CBCT image 300.
Through the multi-channel input, the second model 120 according to the present disclosure can simultaneously learn the spatial information of the training CBCT image 300 and the corresponding initial bone density related image 400. Specifically, the second model 120 can learn the voxel intensity mapping information between the CBCT image and the bone density related image contained in the initial bone density related image 400 while preserving the anatomical structure of the CBCT image 300, and can accordingly output a final bone density related image 500 in which artifacts and scatter noise are suppressed and uniformity is improved.
The second model 120 according to the present disclosure may be optimized based on the result of a second loss function that includes a function of the difference between the training bone density related image 310 and the final bone density related image 500. Specifically, this difference function may include at least one of the mean absolute difference (MAD) of the voxel intensity differences between the training bone density related image 310 and the final bone density related image 500, and the structural similarity (SSIM) between the two images.
Here, the mean absolute difference (MAD) is defined as the average of the absolute intensity differences between the training bone density related image 310 and the final bone density related image 500 and can be expressed as follows, where $I_{QCBCT}$ denotes the final bone density related image 500:
[Equation 5]

$$L_{MAD} = \frac{1}{N} \sum_{i=1}^{N} \left| I_{QCT}(i) - I_{QCBCT}(i) \right|$$
Meanwhile, the structural similarity (SSIM) is a function used to measure the similarity between two images and can be expressed as follows, where μ denotes the mean, σ² the variance, and $C_1$ and $C_2$ are variables for stabilizing a weak denominator:
[Equation 6]

$$SSIM(I_{QCT}, I_{QCBCT}) = \frac{(2\mu_{QCT}\,\mu_{QCBCT} + C_1)(2\sigma_{QCT,QCBCT} + C_2)}{(\mu_{QCT}^{2} + \mu_{QCBCT}^{2} + C_1)(\sigma_{QCT}^{2} + \sigma_{QCBCT}^{2} + C_2)}$$
Accordingly, the second loss function of the second model 120 may be defined as follows, where α is a weight; for example, α may be 0.6. However, the present disclosure is not limited thereto, and any suitable value of α may be selected in silico or experimentally:
[Equation 7]

$$L_{U\text{-}Net} = \alpha \, L_{MAD} + (1 - \alpha)\left( 1 - SSIM(I_{QCT}, I_{QCBCT}) \right)$$
By training the second model 120 with a second loss function comprising this combination of mean absolute difference (MAD) and structural similarity (SSIM), both structural similarity and pixel-wise errors are taken into account, yielding faster convergence and higher accuracy.
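As an illustrative sketch, such a second loss could be implemented as below. The uniform-window SSIM, the [0, 1] intensity normalization implied by the C1/C2 constants, and the exact α-weighted combination (compare Equation 7) are all assumptions:

```python
import torch
import torch.nn.functional as F

def mad_loss(pred, target):
    # mean absolute intensity difference (Equation 5)
    return (pred - target).abs().mean()

def ssim(pred, target, C1=0.01 ** 2, C2=0.03 ** 2, win=11):
    # single-scale SSIM (Equation 6) with a uniform averaging window,
    # a simplification of the usual Gaussian-windowed formulation
    mu_p = F.avg_pool2d(pred, win, 1, win // 2)
    mu_t = F.avg_pool2d(target, win, 1, win // 2)
    var_p = F.avg_pool2d(pred * pred, win, 1, win // 2) - mu_p ** 2
    var_t = F.avg_pool2d(target * target, win, 1, win // 2) - mu_t ** 2
    cov = F.avg_pool2d(pred * target, win, 1, win // 2) - mu_p * mu_t
    s = ((2 * mu_p * mu_t + C1) * (2 * cov + C2)) / \
        ((mu_p ** 2 + mu_t ** 2 + C1) * (var_p + var_t + C2))
    return s.mean()

def second_loss(pred, target, alpha=0.6):
    # weighted combination of MAD and (1 - SSIM); the exact form of
    # Equation 7 is an assumption here
    return alpha * mad_loss(pred, target) + (1 - alpha) * (1 - ssim(pred, target))
```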
Meanwhile, in addition to the multi-channel input described above, the multi-channel U-Net structure of the second model 120 according to the present disclosure may further include an encoder and a decoder composed of 3 x 3 convolution layers with batch normalization and ReLU activation, and skip connections at each layer level. FIG. 6 shows an example of a multi-channel U-Net structure implemented with four skip connections, one for each level of the encoder and decoder. However, the present disclosure is not limited thereto.
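The following sketch illustrates one such multi-channel U-Net. The channel widths, max pooling, and 2D convolutions are assumptions; the two-channel input, the 3 x 3 convolutions with batch normalization and ReLU, and the four skip connections follow the description above:

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # two 3x3 convs with BatchNorm + ReLU, as described above
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

class MultiChannelUNet(nn.Module):
    # 2-channel input: the CBCT image and the initial bone density
    # related image are concatenated along the channel axis; four
    # skip connections link the encoder and decoder levels
    def __init__(self, base=64):
        super().__init__()
        self.enc = nn.ModuleList(
            [conv_block(2, base)] +
            [conv_block(base * 2 ** i, base * 2 ** (i + 1)) for i in range(4)])
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ModuleList(
            [nn.ConvTranspose2d(base * 2 ** (i + 1), base * 2 ** i, 2, stride=2)
             for i in reversed(range(4))])
        self.dec = nn.ModuleList(
            [conv_block(base * 2 ** (i + 1), base * 2 ** i) for i in reversed(range(4))])
        self.out = nn.Conv2d(base, 1, 1)

    def forward(self, cbct, initial):
        x = torch.cat([cbct, initial], dim=1)  # multi-channel input
        skips = []
        for enc in self.enc[:-1]:
            x = enc(x)
            skips.append(x)
            x = self.pool(x)
        x = self.enc[-1](x)  # bottleneck
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))  # skip connection
        return self.out(x)
```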
Implemented with the multi-channel U-Net structure, the second model 120 according to the present disclosure can extract both global and local features over the spatial domain of the images, and can therefore output a final bone density related image 500 that contains the mapping information included in the initial bone density related image 400 and the anatomical structure information of the training CBCT image 300, with artifacts suppressed.
Accordingly, the deep learning model 100 according to the present disclosure, comprising the combination of the first model 110 and the second model 120, can output a final bone density related image 500 with improved bone contrast and uniformity relative to the input CBCT image, while containing bone density data for that input CBCT image. The medical image processing method according to the present disclosure can thereby ultimately provide a method of measuring bone mineral density quantitatively and immediately from a CBCT image.
The experimental data described below with reference to FIGS. 7 to 9 show the difference in performance between the medical image processing method according to the present disclosure, provided using the deep learning model 100 that combines the first model 110 (Cycle-GAN) and the second model 120 (multi-channel U-Net), and conventional methods that correct CBCT images using a model implemented with only one of Cycle-GAN and U-Net and/or a BMD calibration phantom.
FIGS. 7 to 9 illustrate the bone density data measurement performance of the medical image processing method of the present disclosure, in particular for some embodiments in which the final bone density related image is a QCT-like image.
Here, bone density data measurement performance may mean the performance of measuring bone density data from each medical image. For example, as described at the outset of the present disclosure, whereas bone mineral density can be measured quantitatively from a general CT image through correction of HU values, it may be impossible to measure immediately meaningful bone density data from a conventional CBCT image because of the non-uniform characteristics of its voxel values and their non-linear relationship with HU values. In such a case, the bone density data measurement performance of a conventional CBCT image can be regarded as lower than that of a general CT image.
First, referring to FIG. 7, FIG. 7 quantitatively shows the bone density data measurement performance of the medical image processing method of the present disclosure. Specifically, it shows the quantitative performance differences of the final bone density related image 500 according to the present disclosure (hereinafter, the "QCBCT image") with respect to the actual QCT image (i.e., the ground truth), the image obtained by feeding a CBCT image into a Cycle-GAN model (hereinafter, "CYC_CBCT image"), the image obtained by feeding a CBCT image into a U-Net model (hereinafter, "U_CBCT image"), and the CBCT image calibrated using a BMD calibration phantom (hereinafter, "CAL_CBCT image").
For the quantitative performance comparison in FIG. 7, mean absolute difference (MAD), peak signal-to-noise ratio (PSNR), normalized cross correlation (NCC), structural similarity (SSIM), spatial non-uniformity (SNU), and slope (slope of linear regression) values were used.
Here, the peak signal-to-noise ratio (PSNR) is defined as the logarithm of the maximum possible intensity (MAX) relative to the root mean squared error (RMSE) between the actual QCT image and the compared image, and can be expressed as follows:
[Equation 8]

$$PSNR = 20 \log_{10} \left( \frac{MAX}{RMSE} \right)$$
Here, the normalized cross correlation (NCC) is defined as the product of the intensities of the actual QCT image and the compared image divided by their respective standard deviations, and can be expressed as follows:
[Equation 9]

$$NCC = \frac{1}{N} \sum_{i=1}^{N} \frac{\left( I_{QCT}(i) - \mu_{QCT} \right)\left( I(i) - \mu_{I} \right)}{\sigma_{QCT}\,\sigma_{I}}$$
Here, spatial non-uniformity (SNU) is defined as the absolute value of the difference between the maximum and minimum intensity values over rectangular ROIs (regions of interest) set within the image.
Here, the slope, an index for evaluating the linearity of bone density, is the value obtained by analyzing the relationship between the voxel intensities of the actual QCT image and the compared image through linear regression of the intensity values in the images.
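By way of example, these comparison metrics could be computed as in the following sketch; the choice of the reference image's maximum as MAX in the PSNR and the mean-subtracted NCC form are assumptions:

```python
import numpy as np

def psnr(ref, img):
    # Equation 8: 20 * log10(MAX / RMSE)
    rmse = np.sqrt(np.mean((ref - img) ** 2))
    return 20 * np.log10(ref.max() / rmse)

def ncc(ref, img):
    # Equation 9: normalized cross correlation
    a = (ref - ref.mean()) / ref.std()
    b = (img - img.mean()) / img.std()
    return np.mean(a * b)

def snu(img, rois):
    # spatial non-uniformity: |max - min| over mean intensities of
    # rectangular ROIs given as (row0, row1, col0, col1) index tuples
    means = [img[r0:r1, c0:c1].mean() for r0, r1, c0, c1 in rois]
    return abs(max(means) - min(means))
```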
Higher values of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), normalized cross correlation (NCC), and slope, and lower values of mean absolute difference (MAD) and spatial non-uniformity (SNU), can be interpreted as better bone density measurement performance.
Referring again to FIG. 7, the final bone density related image according to the present disclosure (QCBCT image in FIG. 7; see rows 3 and 9), when compared against the actual QCT image, greatly outperforms the CYC_CBCT, U_CBCT, and CAL_CBCT images in mean absolute difference (MAD), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), normalized cross correlation (NCC), and slope, regardless of the location of the imaged bone (e.g., maxilla or mandible).
That is, the bone density measurement performance of the medical image processing method according to the present disclosure differs significantly from that of conventional methods that correct CBCT images using a model implemented with only one of Cycle-GAN and U-Net and/or a BMD calibration phantom.
Next, referring to FIG. 8, FIG. 8 qualitatively shows the bone density data measurement performance according to the present disclosure. Specifically, it shows, for each of the maxilla and the mandible, the actual QCT image, the final bone density related image (QCBCT image in FIG. 8), the CYC_CBCT image, the U_CBCT image, and the CAL_CBCT image (see rows 1 and 3 of FIG. 8), as well as subtraction images obtained by subtracting each image from the actual QCT image (see rows 2 and 4 of FIG. 8).
As the subtraction images in FIG. 8 show, the bone density image quality of the final bone density related image according to the present disclosure (see column 2), compared against the actual QCT image (see column 1), exhibits a substantial improvement over the CYC_CBCT image (see column 3), the U_CBCT image (see column 4), and the CAL_CBCT image (see column 5). In particular, compared with the CAL_CBCT image, the final bone density related image according to the present disclosure greatly reduces the large bone density (voxel intensity) differences in the tooth region and the dense bond of high bone density values found in the CAL_CBCT image.
Next, referring to FIG. 9, FIG. 9 shows the linear relationship between the actual QCT image and, respectively, the final bone density related image according to the present disclosure (see column 1), the CYC_CBCT image (see column 2), the U_CBCT image (see column 3), and the CAL_CBCT image (see column 4). Specifically, row 1 of FIG. 9 (a-d) shows images acquired of the maxillary region at 80 kVp and 8 mA, row 2 (e-h) images of the mandibular region at 80 kVp and 8 mA, row 3 (i-l) images of the maxillary region at 90 kVp and 10 mA, and row 4 (m-p) images of the mandibular region at 90 kVp and 10 mA.
As FIG. 9 shows, the linear relationship between the actual QCT image and the final bone density related image according to the present disclosure exhibits a higher slope and goodness of fit, and higher contrast and correlation, than the linear relationships between the actual QCT image and the other images. Moreover, this linear trend appears consistently regardless of the location of the imaged bone (e.g., maxilla or mandible) or the imaging conditions (e.g., 80 kVp - 8 mA or 90 kVp - 10 mA).
Summarizing the description given above with reference to FIGS. 2 to 9, the medical image processing method according to the present disclosure can, through a deep learning model combining Cycle-GAN and a multi-channel U-Net, provide a method of measuring bone mineral density quantitatively and immediately from a CBCT image.
Specifically, the first model according to the present disclosure (i.e., the Cycle-GAN structure) contains the mapping information between CBCT images and bone density related images, which are QCT or CT images, and can output an initial bone density related image, a QCT-like or CT-like image with increased bone contrast relative to the CBCT image input to the first model. The second model of the present disclosure (i.e., the multi-channel U-Net structure), combined with the first model, receives the initial bone density related image and the CBCT image and can output a final bone density related image, a QCT-like or CT-like image that contains the mapping information of the initial bone density related image and the anatomical structure information of the CBCT image, with artifacts suppressed and uniformity increased.
Accordingly, the medical image processing method according to the present disclosure can obtain from a CBCT image a final bone density related image in which the uniformity and contrast of the bone image, as well as the anatomical and quantitative accuracy, are greatly improved, and can ultimately provide a method of measuring bone mineral density quantitatively and immediately from CBCT images while retaining the advantages of CBCT imaging, such as low radiation dose, short acquisition time, and high resolution.
FIG. 10 is a flowchart of a method of acquiring training data in a medical image processing method according to some embodiments of the present disclosure. Specifically, FIG. 10 shows how the training CBCT images 300 and the training bone density related images 310 used to train the deep learning model 100 according to the present disclosure, as described above with reference to FIGS. 4 to 6, are each acquired.
Referring to FIG. 10, the computing device may first acquire a raw CBCT image taken of a subject and a raw CT image corresponding to the raw CBCT image (S310).
Here, the subject may be, for example, phantoms of human skulls, in which case the skull phantoms may include at least one of phantoms with and without artifact-inducing metal restorations. However, the present disclosure is not limited thereto, and the subject for acquiring training data may also be one, or three or more, phantoms of any body part.
A raw CBCT image and a raw CT image may each be taken of the subject. The raw CBCT image and the raw CT image may be taken under the same conditions, such as voltage and current. However, the present disclosure is not limited thereto; the raw CBCT image and the raw CT image may also be taken of the subject under mutually different conditions and then aligned through a step such as registration.
For example, the raw CT image may be taken of the subject at a voxel size of 0.469 x 0.469 x 0.5 mm3, dimensions of 512 x 512 pixels, a depth of 16 bits, a voltage of 120 kVp, and a current of 130 mA, while the raw CBCT image may be taken at a voxel size of 0.3 x 0.3 x 0.3 mm3, dimensions of 559 x 559 pixels, a depth of 16 bits, a voltage of 80 or 90 kVp, and a current of 8 or 10 mA.
In an embodiment that uses QCT images corresponding to the training CBCT images as the training bone density related images, the computing device 10 may next correct the raw CT image to obtain a raw QCT image (S320).
As described above, a QCT image is obtained from a general CT image through correction of the HU values, and a CT image of a bone density calibration phantom may be used for this correction.
More specifically, the raw CT image may be corrected using a phantom CT image for bone density calibration taken under conditions corresponding to those of the raw CT image, and the corrected raw CT image may be acquired as the raw QCT image. Referring to the example above, a raw CT image taken at 120 kVp and 130 mA can be corrected using a calibration phantom CT image likewise taken at 120 kVp and 130 mA, yielding the raw QCT image. However, the present disclosure is not limited thereto.
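For illustration, a phantom-based HU-to-BMD calibration could be realized as follows; the linear mapping and the numeric values in the usage note are assumptions, as the present disclosure does not specify the calibration model:

```python
import numpy as np

def calibrate_hu_to_bmd(phantom_hu, phantom_bmd, ct_hu):
    # Fit a linear HU -> BMD mapping from phantom inserts of known
    # density, then apply it voxel-wise to the raw CT volume; the
    # linear form is a common choice for QCT calibration, assumed here
    slope, intercept = np.polyfit(phantom_hu, phantom_bmd, 1)
    return slope * ct_hu + intercept

# usage sketch with hypothetical insert values:
# qct = calibrate_hu_to_bmd(np.array([-50.0, 120.0, 310.0]),   # measured HU
#                           np.array([0.0, 100.0, 200.0]),     # known mg/cm^3
#                           raw_ct_volume)
```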
Next, the computing device 10 may remove non-anatomical elements from the raw CBCT image and the raw QCT image (S330). Removing non-anatomical regions from the training images prevents adverse impacts on the training process, such as reduced accuracy caused by those regions.
More specifically, a binary mask may be applied to each of the raw CBCT image and the raw QCT image to remove the non-anatomical regions from both.
Here, the binary mask images may be generated using thresholding and morphological operations. Specifically, a local range filter may be applied to each of the training CBCT image and the corresponding training QCT image forming a training image pair to extract the edges of the anatomical regions. Next, the morphological operations of opening and flood fill may be applied to the binarized edges obtained through thresholding, in order to remove small blobs and fill the interior regions.
The intersection of the two binary masks generated from the raw CBCT image and the raw QCT image through the above operations may be multiplied into both the raw CBCT image and the raw QCT image, and the voxel values outside the masked region may be replaced with -1000 HU. The above is only one example of a method for removing non-anatomical regions from training images and does not limit the present disclosure.
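A minimal per-slice sketch of this masking pipeline is shown below; the window sizes, the threshold value, and the use of scikit-image rank filters are assumptions:

```python
import numpy as np
from scipy import ndimage
from skimage.filters import rank
from skimage.morphology import disk, binary_opening

def anatomy_mask(slice_img, range_thresh=30):
    # local range filter (max - min in a neighborhood) -> threshold ->
    # opening to remove small blobs -> hole filling of the interior
    img8 = ((slice_img - slice_img.min()) /
            (np.ptp(slice_img) + 1e-8) * 255).astype(np.uint8)
    edges = rank.gradient(img8, disk(2))        # local intensity range
    mask = binary_opening(edges > range_thresh, disk(3))
    return ndimage.binary_fill_holes(mask)

# the intersection of the CBCT and QCT masks is applied to both images,
# and voxels outside the mask are replaced with -1000 HU:
# mask = anatomy_mask(cbct_slice) & anatomy_mask(qct_slice)
# cbct_slice[~mask] = -1000
# qct_slice[~mask] = -1000
```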
Next, the computing device 10 may register the raw CBCT image and the raw QCT image (S340).
Specifically, the raw QCT image may be matched to the raw CBCT image by paired-point registration, for which a plurality of landmarks may be set. For example, six landmarks may be set, including the vertices of the lateral incisors, the buccal cusps of the first premolars, and the distobuccal cusps of the first molars. However, the present disclosure is not limited thereto.
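One common way to realize paired-point registration from such landmarks is a least-squares rigid fit (Kabsch/Procrustes), sketched below; this is illustrative only and not necessarily the registration method used here:

```python
import numpy as np

def paired_point_rigid_transform(src, dst):
    # Least-squares rigid transform (rotation R, translation t) mapping
    # src landmarks onto dst landmarks; src and dst are (N, 3) arrays
    # of corresponding points
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# e.g. six (x, y, z) landmarks per image, as described above:
# R, t = paired_point_rigid_transform(qct_landmarks, cbct_landmarks)
```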
Meanwhile, the computing device may additionally crop the registered raw CBCT image and raw QCT image to images of an arbitrary size and then resize them to images of a preset size. For instance, from the registered raw CBCT and raw QCT images, an image of 559 x 559 x 200 pixels centered on the maxillary region may be cropped and then resized to 256 x 256 x 200 pixels.
Next, the computing device 10 may acquire the registered raw CBCT image as a training CBCT image and the registered raw QCT image as a training bone density related image (S350).
Alternatively, when cropping and/or resizing are additionally performed on the registered raw CBCT and raw QCT images as described above, the cropped and/or resized raw CBCT and raw QCT images may be acquired as the training CBCT image and the training bone density related image, respectively.
By contrast, in an embodiment that uses CT images corresponding to the training CBCT images as the training bone density related images, the computing device 10 may remove the non-anatomical regions from the acquired raw CBCT image and raw CT image (S360), register the raw CBCT image and raw CT image from which the non-anatomical regions have been removed (S370), and acquire the registered CBCT image as a training CBCT image and the registered raw CT image as a training bone density related image (S380). The specific methods for removing the non-anatomical regions from each image and for registering the images have been described above, and detailed description is omitted here to avoid repetition.
A training CBCT image acquired in this way and its corresponding training bone density related image may together form a training image pair, and a set of one or more training image pairs may constitute the training data according to the present disclosure. The training data acquired in this way may be used to train the deep learning model 100 according to the present disclosure described above with reference to FIGS. 4 to 6.
The embodiments of the present disclosure described above are not implemented only through an apparatus and a method; they may also be implemented through a program that realizes the functions corresponding to the configurations of the embodiments, or through a recording medium on which such a program is recorded.
Although the embodiments of the present disclosure have been described in detail above, the scope of the present disclosure is not limited thereto, and various modifications and improvements by those skilled in the art using the basic concepts of the present disclosure defined in the following claims also fall within the scope of the present disclosure.
Claims (16)
- 1. A medical image processing method performed by a computing device, the method comprising: training a first model to infer a training bone density-related image from a training cone-beam CT (CBCT) image by learning mapping information between the training CBCT image and the training bone density-related image corresponding to it; and training a second model to infer the training bone density-related image from the training CBCT image and an initial bone density-related image output by the first model.
- 2. The method of claim 1, wherein the training bone density-related image is either a quantitative CT (QCT) image or a CT image.
- 3. The method of claim 1, wherein the first model comprises a first generator that learns the mapping information to infer a bone density-related image from a CBCT image, and a first discriminator that determines whether an input image is a real bone density-related image or a synthesized bone density-related image, and wherein the first model is trained to minimize a first loss function that includes an adversarial loss function of the first model computed from the discrimination success probability of the first discriminator.
- 4. The method of claim 3, wherein the first model further comprises a second generator that learns the mapping information to infer the CBCT image from the bone density-related image, and a second discriminator that determines whether an input image is a real CBCT image or a synthesized CBCT image, and wherein the first model is optimized to minimize the first loss function, which combines the adversarial loss function of the first model, computed from the discrimination success probabilities of the first and second discriminators, with a cycle consistency loss function of the first model.
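For reference, the adversarial and cycle consistency terms described in claims 3 and 4 are conventionally written as follows; this is the generic Cycle-GAN formulation, with $x$ a training CBCT image, $y$ a training bone density-related image, and the weight $\lambda$ not fixed by the claims:

$$
\mathcal{L}_{\text{adv}}(G_1, D_1) = \mathbb{E}_{y}\big[\log D_1(y)\big] + \mathbb{E}_{x}\big[\log\big(1 - D_1(G_1(x))\big)\big]
$$

$$
\mathcal{L}_{\text{cyc}}(G_1, G_2) = \mathbb{E}_{x}\big[\lVert G_2(G_1(x)) - x \rVert_1\big] + \mathbb{E}_{y}\big[\lVert G_1(G_2(y)) - y \rVert_1\big]
$$

$$
\mathcal{L}_{1} = \mathcal{L}_{\text{adv}}(G_1, D_1) + \mathcal{L}_{\text{adv}}(G_2, D_2) + \lambda\,\mathcal{L}_{\text{cyc}}(G_1, G_2)
$$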
- 5. The method of claim 1, wherein the first model is implemented to include a Cycle-GAN (generative adversarial network) structure.
- 6. The method of claim 1, wherein the bone density-related image contains the mapping information on the voxel intensity relationship between the training CBCT image and the training bone density-related image, and is a QCT-like or CT-like image whose bone contrast is increased relative to the training CBCT image.
- 7. The method of claim 1, wherein the second model outputs a final bone density-related image that includes bone mineral density (BMD) data corresponding to the training CBCT image.
- 8. The method of claim 7, wherein the second model is optimized to minimize a second loss function that includes a function of the difference between the training bone density-related image and the final bone density-related image.
- 9. The method of claim 8, wherein the function of the difference includes at least one of a mean absolute difference (MAD) of voxel intensities between the training bone density-related image and the final bone density-related image, and a structural similarity (SSIM) between the training bone density-related image and the final bone density-related image.
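A generic form of such a second loss function, with $y$ the training bone density-related image, $\hat{y}$ the final bone density-related image, and illustrative weights $\alpha, \beta$ (the $1 - \mathrm{SSIM}$ convention turns a similarity measure into a loss):

$$
\mathcal{L}_{2} = \alpha\,\mathrm{MAD}(y, \hat{y}) + \beta\,\big(1 - \mathrm{SSIM}(y, \hat{y})\big),
\qquad
\mathrm{MAD}(y, \hat{y}) = \frac{1}{N}\sum_{i=1}^{N} \lvert y_i - \hat{y}_i \rvert
$$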
- 10. The method of claim 1, wherein the second model is implemented to include a multi-channel U-Net structure that receives the training CBCT image and the initial bone density-related image through multiple channels.
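The multi-channel input convention of claim 10 amounts to stacking the two volumes along the channel axis before the first convolution; a minimal PyTorch stub under that assumption, with the U-Net body itself omitted:

```python
import torch
import torch.nn as nn

class TwoChannelUNetStub(nn.Module):
    """Illustrates only the multi-channel input convention: the CBCT
    volume and the initial bone-density-related volume are stacked
    along the channel axis before entering a U-Net (body not shown)."""
    def __init__(self, unet: nn.Module):
        super().__init__()
        self.unet = unet  # any U-Net whose first conv expects 2 channels

    def forward(self, cbct: torch.Tensor, initial_bmd: torch.Tensor):
        x = torch.cat([cbct, initial_bmd], dim=1)  # (B, 2, D, H, W)
        return self.unet(x)
```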
- 11. The method of claim 7, wherein the final bone density-related image is a QCT-like or CT-like image that contains the mapping information included in the initial bone density-related image and the anatomical structure information of the training CBCT image, with artifacts suppressed.
- 12. The method of claim 1, further comprising: acquiring a raw CBCT image of a subject and a raw CT image corresponding to the raw CBCT image; calibrating the raw CT image using a BMD calibration phantom CT image acquired under conditions corresponding to those of the raw CT image, and obtaining the calibrated raw CT image as a raw QCT image; removing non-anatomical regions from the raw CBCT image and the raw QCT image; registering the raw CBCT image and the raw QCT image; and obtaining the raw CBCT image as the training CBCT image and the raw QCT image as the training bone density-related image.
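The phantom-based calibration in this claim is conventionally a linear fit from HU to mineral density; a minimal sketch of that conventional approach, assuming mean HU values have already been sampled over phantom inserts of known density (the linear model is the usual choice, not one the claim prescribes):

```python
import numpy as np

def calibrate_hu_to_bmd(phantom_hu: np.ndarray,
                        phantom_bmd: np.ndarray,
                        raw_ct: np.ndarray) -> np.ndarray:
    """Fit HU -> BMD (mg/cm^3) linearly from phantom inserts of known
    mineral density, then map the raw CT volume to a raw QCT volume."""
    slope, intercept = np.polyfit(phantom_hu, phantom_bmd, deg=1)
    return slope * raw_ct + intercept
```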
- 13. The method of claim 1, further comprising: acquiring a raw CBCT image of a subject and a raw CT image corresponding to the raw CBCT image; removing non-anatomical regions from the raw CBCT image and the raw CT image; registering the raw CBCT image and the raw CT image; and obtaining the raw CBCT image as the training CBCT image and the raw CT image as the training bone density-related image.
- 14. A medical image processing method performed by a computing device, the method comprising: acquiring a cone-beam CT (CBCT) image; and inputting the CBCT image into a pre-trained deep learning model to obtain a final bone density-related image that includes bone mineral density (BMD) data corresponding to the CBCT image.
- 15. The method of claim 14, wherein the final bone density-related image is a QCT-like image or a CT-like image.
- 16. The method of claim 15, further comprising obtaining the bone density data corresponding to the CBCT image based on the final bone density-related image, wherein obtaining the bone density data comprises: when the final bone density-related image is the QCT-like image, obtaining the bone density data directly from the final bone density-related image; or, when the final bone density-related image is the CT-like image, correcting the final bone density-related image to generate a QCT image corresponding to the CT-like image and obtaining the bone density data from the generated QCT image.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020220018256A KR102477991B1 (en) | 2022-02-11 | 2022-02-11 | Medical image processing method and apparatus |
KR10-2022-0018256 | 2022-02-11 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023153564A1 (en) | 2023-08-17 |
Family
ID=84439613
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2022/010816 (WO2023153564A1) | Medical image processing method and apparatus | 2022-02-11 | 2022-07-22 |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR102477991B1 (en) |
WO (1) | WO2023153564A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024186053A1 (en) * | 2023-03-03 | 2024-09-12 | 주식회사 메디컬에이아이 | Method, program, and device for interpreting artificial intelligence model |
CN117115046B (en) * | 2023-10-24 | 2024-02-09 | 中日友好医院(中日友好临床医学研究所) | Method, system and device for enhancing sparse sampling image of radiotherapy CBCT |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009183468A (en) * | 2008-02-06 | 2009-08-20 | Toshiba Corp | Radiotherapeutic dose distribution measuring device and radiotherapeutic dose distribution measurement program |
JP2015019788A (en) * | 2013-07-18 | 2015-02-02 | 日立アロカメディカル株式会社 | X-ray measurement apparatus |
KR20180059327A * | 2016-11-25 | 2018-06-04 | 서울대학교산학협력단 | Quantitative system and method for alveolar bone density using dual energy computed tomography |
US20190333219A1 (en) * | 2018-04-26 | 2019-10-31 | Elekta, Inc. | Cone-beam ct image enhancement using generative adversarial networks |
KR20200062538A (en) * | 2018-11-27 | 2020-06-04 | 주식회사 레이 | Estimation method and estimation apparatus of alveolar bone density using cone-beam computerized tomography system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102382602B1 (en) * | 2020-05-07 | 2022-04-05 | 연세대학교 산학협력단 | 3D convolutional neural network based cone-beam artifact correction system and method |
- 2022-02-11: KR application KR1020220018256A filed; granted as KR102477991B1 (active IP Right Grant)
- 2022-07-22: PCT application PCT/KR2022/010816 filed (WO2023153564A1; status unknown)
Also Published As
Publication number | Publication date |
---|---|
KR102477991B1 (en) | 2022-12-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2023153564A1 (en) | Medical image processing method and apparatus | |
CA1288176C (en) | Method and apparatus for improving the alignment of radiographic images | |
WO2019168298A1 (en) | Method and apparatus for correcting computed tomography image | |
JP6483273B2 (en) | Automatic selection and locking of intraoral images | |
WO2015122698A1 (en) | Computed tomography apparatus and method of reconstructing a computed tomography image by the computed tomography apparatus | |
JP6830082B2 (en) | Dental analysis system and dental analysis X-ray system | |
WO2016117906A1 (en) | Tomography imaging apparatus and method | |
US11250580B2 (en) | Method, system and computer readable storage media for registering intraoral measurements | |
BR112020021508A2 (en) | AUTOMATED CORRECTION OF VOXEL REPRESENTATIONS AFFECTED BY METAL OF X-RAY DATA USING DEEP LEARNING TECHNIQUES | |
WO2015076607A1 (en) | Apparatus and method for processing a medical image of a body lumen | |
WO2019098780A1 (en) | Diagnostic image conversion apparatus, diagnostic image conversion module generating apparatus, diagnostic image recording apparatus, diagnostic image conversion method, diagnostic image conversion module generating method, diagnostic image recording method, and computer readable recording medium | |
KR100684301B1 (en) | Image processing apparatus and method | |
WO2020076133A1 (en) | Validity evaluation device for cancer region detection | |
JP3662283B2 (en) | Visualization of diagnostically unrelated areas in the display of radiographic images | |
KR101911327B1 (en) | Oral scanning method and oral scanner for reducing cumulative registration error, and recording medium recorded program for implement thereof | |
WO2018117360A1 (en) | Medical imaging device and medical image processing method | |
US11576638B2 (en) | X-ray imaging apparatus and X-ray image processing method | |
WO2024111915A1 (en) | Method for converting medical images by means of artificial intelligence by using image quality conversion and device therefor | |
WO2019221586A1 (en) | Medical image management system, method, and computer-readable recording medium | |
WO2016148350A1 (en) | Device and method for reconstructing medical image | |
JP4210464B2 (en) | X-ray diagnostic imaging equipment | |
CN117197270A (en) | Metal artifact correction method and device for nonlinear projection decomposition and imaging equipment | |
JP2001000427A (en) | Apparatus, system, and method of image processing, and storage media | |
WO2020159008A1 (en) | Ultrasound imaging device and ultrasound image generation method | |
WO2021242053A1 (en) | Three-dimensional data acquisition method and device, and computer-readable storage medium storing program for performing same method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22926162; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |