WO2023101203A1 - Method for estimating lesion volume using x-ray image, and analysis device - Google Patents

Method for estimating lesion volume using x-ray image, and analysis device

Info

Publication number
WO2023101203A1
WO2023101203A1 (application PCT/KR2022/015555)
Authority
WO
WIPO (PCT)
Prior art keywords
lesion
ray image
area
volume information
detection probability
Prior art date
Application number
PCT/KR2022/015555
Other languages
French (fr)
Korean (ko)
Inventor
정명진
차윤기
임채영
Original Assignee
사회복지법인 삼성생명공익재단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 사회복지법인 삼성생명공익재단 filed Critical 사회복지법인 삼성생명공익재단
Publication of WO2023101203A1 publication Critical patent/WO2023101203A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10116 X-ray image

Definitions

  • The technology described below relates to calculating information on the 3D volume of a specific lesion using a 2D X-ray image.
  • An X-ray image is a basic medical image for examination of various diseases.
  • For example, medical staff may check a patient's pulmonary nodule using a chest X-ray. When a lesion such as a pulmonary nodule is found, the medical staff monitors the size of the lesion through periodic X-ray imaging.
  • In general, medical staff determine the size of a lesion in an X-ray image by eye or with a ruler. Furthermore, they may measure its area by tracing the outline of the nodule using computer-aided diagnosis (CAD).
  • However, since a lesion is an object in 3D space, it is difficult to accurately observe changes in its size with only a 2D X-ray image.
  • Furthermore, X-ray images have lower sharpness than computed tomography (CT), so it is also difficult to accurately determine the boundary of the nodule shadow.
  • The technology described below provides a method for estimating lesion volume information from X-ray images, based on the research finding that the lesion detection probability produced by a learning model on a 2D X-ray image and the area of the lesion both have a consistent correlation with the volume of the lesion.
  • The method for estimating lesion volume information using an X-ray image includes: receiving, by an analysis device, a 2D X-ray image of a patient; determining, by the analysis device, a detection probability for a specific lesion by inputting the X-ray image to a machine learning model; and estimating, by the analysis device, 3D volume information of the lesion based on the area of the lesion detected in the X-ray image and the detection probability.
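The claimed steps can be sketched end to end in code. Everything below is an illustrative assumption for exposition: the threshold-based area measurement, the intensity-based stand-in for a trained classifier, and the area-times-probability combination (the simplest combination mentioned later in this disclosure) are not the actual implementation.

```python
# Illustrative sketch of the claimed pipeline. The stand-in "model" and
# all thresholds below are assumptions, not the disclosed implementation.

def detect_probability(xray_image):
    # Stand-in for a pretrained classifier: a toy score derived from
    # the mean pixel intensity of a grayscale image (0-255).
    pixels = [p for row in xray_image for p in row]
    return min(1.0, 0.5 + sum(pixels) / (255.0 * len(pixels)))

def lesion_area(xray_image, threshold=128):
    # Stand-in for CAD/segmentation: count pixels above a threshold.
    return sum(1 for row in xray_image for p in row if p > threshold)

def estimate_volume_info(xray_image):
    # Combine the area and the detection probability; the product is
    # the simplest combination the disclosure mentions.
    return lesion_area(xray_image) * detect_probability(xray_image)

image = [[200, 30], [180, 40]]  # tiny 2x2 stand-in for an X-ray image
volume_info = estimate_volume_info(image)
```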
  • An analysis device for estimating lesion volume information using an X-ray image includes: an input device that receives a patient's 2D X-ray image; a storage device that stores a machine learning model pretrained to classify the likelihood of a specific lesion based on a subject's X-ray image; and an arithmetic device that determines a detection probability for the specific lesion by inputting the patient's X-ray image to the machine learning model and estimates 3D volume information of the lesion based on the area of the lesion detected in the X-ray image and the detection probability.
  • the technology described below enables an accurate diagnosis of a patient by calculating volume information of a lesion based on a 2D X-ray image. Based on the diagnostic results, medical staff can determine appropriate treatment guidelines for the patient's condition.
  • FIG. 1 is an example of a system for estimating lesion volume information using an X-ray image.
  • FIG. 2 is a result showing the correlation between area, volume, and detection probability for thoracic pulmonary nodules.
  • FIG. 3 is a result of fitting the area and detection probability to the volume on CT for thoracic pulmonary nodules.
  • FIG. 4 is a result showing the correlation between area and volume for thoracic pulmonary nodules.
  • FIG. 5 is another example of a result of fitting the area and detection probability to the volume on CT.
  • FIG. 6 is an example of a process of estimating lesion volume information using an X-ray image.
  • FIG. 7 is an example of an analysis device for estimating lesion volume information using an X-ray image.
  • Terms such as first, second, A, and B may be used to describe various elements, but the elements are not limited by these terms; they are used only to distinguish one element from another. For example, without departing from the scope of the technology described below, a first element may be called a second element, and similarly, a second element may be called a first element.
  • The term "and/or" includes any combination of a plurality of related recited items, or any one of them.
  • each component to be described below may be combined into one component, or one component may be divided into two or more for each more subdivided function.
  • Each component described below may additionally perform some or all of the functions of other components in addition to its main function, and some of the main functions of each component may, of course, be performed exclusively by another component.
  • each process constituting the method may occur in a different order from the specified order unless a specific order is clearly described in context. That is, each process may occur in the same order as specified, may be performed substantially simultaneously, or may be performed in the reverse order.
  • a technique to be described below is a technique of estimating information on the size of a lesion of a subject using a medical image.
  • Medical images may be various, such as X-ray images, ultrasound images, CT (Computer Tomography) images, MRI (Magnetic Resonance Imaging) images, and the like.
  • the technique described below estimates the size of a lesion using a 2D medical image.
  • the following medical image may be any one of various medical images in a 2D format. For example, it may be an X-ray image or a 2D composite image extracted from a 3D medical image. However, for convenience of explanation, description will be made focusing on X-ray images.
  • X-ray images may be images of various body parts according to the type of disease.
  • The target lesion is one whose shadow increases as the disease progresses.
  • the target lesion will be described focusing on the pulmonary nodule. That is, the following X-ray image corresponds to a chest X-ray image.
  • The lesion detection probability is determined by a machine learning model that receives an X-ray image and outputs the probability of occurrence of a specific lesion.
  • a machine learning model can be one of many types, including decision trees, random forests, K-nearest neighbors (KNNs), Naive Bayes, support vector machines (SVMs), and artificial neural networks (ANNs).
  • the analysis device estimates volumetric information of a lesion from an X-ray image.
  • the analysis device may be implemented with various devices capable of processing data.
  • the analysis device may be implemented as a PC, a server on a network, a smart device, a chipset in which a dedicated program is embedded, and the like.
  • the volume information may be a lesion volume in 3D space or information having a certain correlation (proportional relationship) with the lesion volume. That is, the following volume information may be the volume of the lesion itself or may be information (value) capable of quantifying the size of the lesion even if it is not an exact volume.
  • FIG. 1 is an example of a system 100 for estimating lesion volume information using an X-ray image. FIG. 1 illustrates a case in which the analysis device is a computer terminal 130 or a server 140.
  • the X-ray equipment 110 generates a chest X-ray image of a subject.
  • a chest X-ray image of a subject may be stored in an Electronic Medical Record (EMR) 120.
  • the user A may use the computer terminal 130 to calculate the volume information of the lesion using an X-ray image of the subject's chest.
  • the computer terminal 130 receives an X-ray image of the chest of the subject.
  • the computer terminal 130 may receive a chest X-ray image from the X-ray equipment 110 or the EMR 120 through a wired or wireless network.
  • the computer terminal 130 may be a device physically connected to the X-ray equipment 110 .
  • the computer terminal 130 receives the 2D area of the lesion determined based on the subject's chest X-ray image. Alternatively, the computer terminal 130 may determine the 2D area of the lesion using a chest X-ray image of the subject.
  • the computer terminal 130 calculates a detection probability of a lesion by inputting the chest X-ray image to a pre-learned machine learning model.
  • the machine learning model is a model that calculates the probability of occurrence of a specific lesion (pulmonary nodule) in an X-ray image.
  • The machine learning model is trained in advance by supervised learning. If a pulmonary nodule is the target lesion, the machine learning model is a model that, given an input X-ray image, calculates a classification result (probability) as to whether a pulmonary nodule exists in the image.
  • the machine learning model can be pre-learned specifically for a specific lesion.
  • the computer terminal 130 may estimate the volume information of the lesion using the lesion area in the X-ray image and the detection probability described above. User A can check the analysis result.
  • the server 140 may receive an X-ray image of the subject's chest from the X-ray equipment 110 or the EMR 120 .
  • the server 140 may receive the 2D area of the lesion determined based on the subject's chest X-ray image. In this case, the server 140 may receive information on the area of the lesion from the terminal used by the medical staff. Alternatively, the server 140 may determine the 2D area of the lesion using a chest X-ray image of the subject.
  • the server 140 calculates a detection probability of a lesion by inputting the chest X-ray image to a pre-learned machine learning model.
  • the machine learning model is a model that calculates the probability of occurrence of a specific lesion (pulmonary nodule) in an X-ray image.
  • the server 140 may estimate the volume information of the lesion using the lesion area in the X-ray image and the detection probability described above.
  • the server 140 may transmit the analysis result to the terminal of user A. User A can check the analysis result.
  • the computer terminal 130 and/or the server 140 may store the analysis result in the EMR 120.
  • The researchers determined volume information using X-ray images, together with detection probability information calculated from the X-ray images by a machine learning model (AI model).
  • the researcher collected image information acquired at regular time intervals for patients with pulmonary nodules.
  • the researcher utilized the image information of a total of 315 patients at the affiliated medical institution. In this process, all patient information and image identification information were anonymized to protect personal information.
  • 72 subjects whose pulmonary nodules changed in size over a certain period and who had both X-ray and CT images were selected and studied. The results derived from the research process and the regression analyses are described below.
  • FIG. 2 is a result showing the correlation between area, volume, and detection probability for thoracic pulmonary nodules.
  • FIG. 2 shows the results of linear regression and nonlinear regression (quadratic).
  • Some figures also show Root Mean Square Error (RMSE) values for each regression analysis.
  • the researcher determined the area of the nodule in the X-ray image using a CAD program.
  • FIG. 2(A) shows the correlation between the area of the nodule on the X-ray image and the volume on CT. Referring to FIG. 2(A), the area and volume are moderately correlated, with a correlation coefficient of 0.58, and the difference between linear and nonlinear regression is small.
  • FIG. 2(B) shows the correlation between the average detection probability (Prob.Mean) of a nodule in an X-ray image and the volume on CT.
  • The detection probability is the value output by a pretrained machine learning model that receives the X-ray image as input.
  • The researchers used the average value of the detection probability. That is, the detection probability was computed several times for the same X-ray image and the average was used as the detection probability.
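The averaging step can be illustrated directly; the probability values below are made-up stand-ins for repeated model outputs on the same image.

```python
from statistics import mean

# Hypothetical detection probabilities from several inference runs on
# the same X-ray image (the values are invented for illustration).
runs = [0.81, 0.78, 0.84, 0.80]
prob_mean = mean(runs)
```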
  • the correlation coefficient between the detection probability and the volume on the CT showed a low correlation of 0.22.
  • FIG. 2(C) shows the correlation between the area of the nodule and the average detection probability of the nodule in the X-ray image.
  • the correlation coefficient between the area of the nodule and the average detection probability of the nodule was 0.73, showing a high correlation.
  • That is, the nodule area and the average detection probability of the nodule showed high linearity.
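The coefficients quoted above (0.58, 0.22, 0.73) are Pearson correlation coefficients; a minimal pure-Python computation, with toy data in place of the study's measurements, looks like this:

```python
from math import sqrt

def pearson_r(xs, ys):
    # Pearson correlation coefficient between two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])  # perfectly linear toy data
```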
  • FIG. 3 shows the result of fitting the area and detection probability to the volume on CT for thoracic pulmonary nodules.
  • FIG. 3 shows the result of fitting the nodule area and detection probability to a 2D plane with respect to the CT volume using linear regression.
  • FIGS. 3(A), 3(B), and 3(C) show the same result from different angles.
  • A few points with relatively large CT volume lie above the fitted plane (circled area in FIG. 3(C)).
  • Overall, however, the area and detection probability of the nodule lie approximately on a plane with respect to the volume on CT. Thus, the nodule area and detection probability from the X-ray image have a consistent correlation with the volume on CT.
  • FIG. 4 is a result showing the correlation between the area and volume of thoracic pulmonary nodules; it corresponds to a linear fit of the CT volume using the area alone.
  • FIG. 4 also adds a model-based nonlinear regression result (order 1.5) for the area-volume correlation shown in FIG. 2(A).
  • In the model-based nonlinear regression, the model expresses the relationship between volume and area as CT volume ≈ α·area^0.5 + β·area + γ·area^1.5 + δ.
  • Linear regression and nonlinear regression have similar RMSEs of 9445.95 and 9449.88, respectively, while model-based nonlinear regression has an RMSE of 9395.73, slightly lower than either.
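Evaluating the order-1.5 model above is straightforward once coefficients are known. The coefficients below are placeholders (the disclosure does not report fitted values), so only the functional form is meaningful here.

```python
def model_volume(area, alpha, beta, gamma, delta):
    # CT volume ~ alpha*area^0.5 + beta*area + gamma*area^1.5 + delta
    return alpha * area ** 0.5 + beta * area + gamma * area ** 1.5 + delta

# Placeholder coefficients; in practice they are obtained by regression
# against paired X-ray area and CT volume measurements.
v = model_volume(100.0, alpha=2.0, beta=1.5, gamma=0.1, delta=5.0)
```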
  • FIG. 5 is another example of fitting the area and detection probability to the volume on CT. Unlike FIG. 4, FIG. 5 shows the result of 2D-fitting both the nodule area and the nodule detection probability to the CT volume. FIG. 5(A) is the result of 2D fitting by linear regression, FIG. 5(B) by nonlinear regression, and FIG. 5(C) by model-based nonlinear regression.
  • For linear regression, the RMSE with area alone was 9445.95, but in FIG. 5(A) it improves to 8781.24.
  • For nonlinear regression, the RMSE was 9449.88, but in FIG. 5(B) it improves to 8351.0.
  • For model-based nonlinear regression, the RMSE was 9395.73, but in FIG. 5(C) it improves to 7975.55.
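The RMSE figures compared above follow the usual definition; a minimal helper, with invented numbers standing in for the study's volumes, is:

```python
from math import sqrt

def rmse(predicted, actual):
    # Root mean square error between predicted and observed values.
    n = len(predicted)
    return sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n)

error = rmse([10.0, 20.0, 30.0], [12.0, 18.0, 33.0])
```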
  • In other words, the volume on CT is matched better when the detection probability is used together with the lesion area than when only the lesion area of the X-ray image is used. That is, nodule volume information can be estimated more accurately by using both the area of the nodule and the probability of detecting the nodule.
  • FIG. 6 is an example of a process 200 for estimating lesion volume information using an X-ray image.
  • the analysis device receives an X-ray image of the patient captured by the X-ray equipment (210). It is assumed that the X-ray image includes a certain lesion area.
  • the analysis device determines the lesion area based on the input X-ray image (220).
  • the lesion area may be a value calculated by displaying the lesion area by a medical staff using CAD.
  • the analysis device may input or receive the lesion area calculated through the CAD program.
  • the analysis device may directly calculate the lesion area using an X-ray image.
  • The analysis device may segment the lesion region using an image processing technique and calculate the area of that region.
  • The analysis device may input the X-ray image into a pretrained segmentation model to segment the lesion region and calculate the area of the segmented region.
  • the segmentation model may be implemented with U-net, fully convolutional networks (FCN), and the like.
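One way the segmented region's area could be turned into a physical quantity is by counting mask pixels and scaling by the detector pixel spacing. This is a generic sketch; the helper name and the spacing value are assumptions, not details from the disclosure.

```python
def mask_area_mm2(mask, pixel_spacing_mm=(0.14, 0.14)):
    # Area of a binary segmentation mask in mm^2: number of lesion
    # pixels times the physical area of one pixel. The 0.14 mm spacing
    # is an assumed example value, not a value from the disclosure.
    n_pixels = sum(sum(row) for row in mask)
    return n_pixels * pixel_spacing_mm[0] * pixel_spacing_mm[1]

mask = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 0],
]
area = mask_area_mm2(mask)
```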
  • the analysis device classifies the lesion by inputting the patient's X-ray image to the machine learning model (230).
  • FIG. 6 illustrates an artificial neural network such as a CNN as the machine learning model.
  • An artificial neural network model trained in advance to classify a specific lesion in an X-ray image calculates a probability value (detection probability) that a lesion exists in the corresponding image when an X-ray image is input.
  • The analysis device may determine volume information of the lesion using the lesion area and the detection probability (240). For example, the analysis device may calculate the volume information by multiplying the lesion area by the detection probability. Alternatively, the analysis device may calculate the volume information using a predetermined function that takes the lesion area and the detection probability as variables.
  • the analysis device 300 corresponds to the above-described analysis devices (130 and 140 in FIG. 1).
  • the analysis device 300 may be physically implemented in various forms.
  • the analysis device 300 may have a form of a computer device such as a PC, a network server, and a chipset dedicated to data processing.
  • the analysis device 300 may include a storage device 310, a memory 320, an arithmetic device 330, an interface device 340, a communication device 350, and an output device 360.
  • the storage device 310 may store an X-ray image of the patient.
  • The analysis target is a 2D medical image of the patient. Accordingly, the analysis target may be an image type other than an X-ray image.
  • the storage device 310 may store a machine learning model trained for lesion detection.
  • The storage device 310 may store a program for pre-processing the X-ray image in a predetermined manner.
  • the storage device 310 may store a CAD program for calculating the size of a lesion in an X-ray image and a segmentation model for classifying a lesion area in an X-ray image.
  • the storage device 310 may store lesion volume information as an analysis result.
  • The memory 320 may store data and information generated while the analysis device 300 pre-processes the X-ray image, determines the area of the lesion, determines the detection probability of the lesion, and calculates the volume information of the lesion.
  • the interface device 340 is a device that receives certain commands and data from the outside.
  • the interface device 340 may receive an X-ray image of the patient from a physically connected input device or external storage device.
  • the interface device 340 may receive an input of the lesion area of the X-ray image from an external device.
  • the interface device 340 may transmit the analysis result to an external object.
  • the communication device 350 refers to a component that receives and transmits certain information through a wired or wireless network.
  • the communication device 350 may receive an X-ray image of the patient from an external object.
  • the communication device 350 may receive the lesion area of the X-ray image from an external object.
  • the communication device 350 may transmit the analysis result to an external object such as a user terminal.
  • Since the interface device 340 and the communication device 350 send and receive certain data to and from a user or another physical object, they may also be collectively referred to as input/output devices. Limited to the function of receiving an X-ray image, the interface device 340 and the communication device 350 may be referred to as input devices.
  • the output device 360 is a device that outputs certain information.
  • the output device 360 may output interfaces and analysis results necessary for data processing.
  • the arithmetic device 330 may receive an X-ray image and estimate lesion volume information using a machine learning model or program stored in the storage device 310 .
  • The arithmetic device 330 may pre-process the received X-ray image in a predetermined manner. For example, the arithmetic device 330 may perform tasks such as noise removal and brightness adjustment of the X-ray image.
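As an example of the brightness-adjustment step, a simple global mean shift could look like the sketch below. This is a generic illustration; the disclosure does not specify the preprocessing algorithm or the target value.

```python
def normalize_brightness(image, target_mean=128.0):
    # Shift all pixel values so the image mean matches target_mean,
    # clamping the result to the 8-bit range [0, 255].
    pixels = [p for row in image for p in row]
    shift = target_mean - sum(pixels) / len(pixels)
    return [[min(255, max(0, round(p + shift))) for p in row] for row in image]

img = [[100, 110], [120, 130]]
out = normalize_brightness(img)
```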
  • the arithmetic device 330 may calculate the lesion area from the X-ray image using a CAD program. During this process, the interface device 340 may receive a command to select a lesion area from the user.
  • the arithmetic device 330 may classify the lesion area by inputting the X-ray image to the segmentation model. The arithmetic device 330 may calculate the area of the divided lesion area.
  • the arithmetic device 330 may calculate a lesion detection probability by inputting the X-ray image to a pre-learned machine learning model.
  • the arithmetic device 330 may estimate the volume information using the lesion area and detection probability.
  • For example, the arithmetic device 330 may calculate the volume information by multiplying the lesion area by the detection probability.
  • Alternatively, the arithmetic device 330 may calculate the volume information through a mathematical operation that takes the lesion area and the detection probability as variables.
  • the arithmetic device 330 may be a device such as a processor, an AP, or a chip in which a program is embedded that processes data and performs certain arithmetic operations.
  • the method for estimating lesion volume information as described above may be implemented as a program (or application) including an executable algorithm that may be executed on a computer.
  • the program may be stored and provided in a temporary or non-transitory computer readable medium.
  • a non-transitory readable medium is not a medium that stores data for a short moment, such as a register, cache, or memory, but a medium that stores data semi-permanently and can be read by a device.
  • The various applications or programs described above may be stored and provided in a non-transitory readable medium such as a CD, DVD, hard disk, Blu-ray disc, USB drive, memory card, ROM (read-only memory), PROM (programmable ROM), EPROM (erasable PROM), EEPROM (electrically erasable PROM), or flash memory.
  • Temporary readable media include static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synclink DRAM (SLDRAM), and direct Rambus RAM (DRRAM).

Abstract

A method for estimating lesion volume using an X-ray image comprises the steps in which an analysis device: receives a 2D X-ray image of a patient as input; determines a detection probability for a specific lesion by inputting the X-ray image to a machine learning model; and estimates the 3D volume of the lesion on the basis of the area of the lesion detected from the X-ray image and the detection probability.

Description

Method and analysis device for estimating lesion volume information using X-ray images
The technology described below relates to calculating information on the 3D volume of a specific lesion using a 2D X-ray image.
An X-ray image is a basic medical image for the examination of various diseases. For example, medical staff may check a patient's pulmonary nodule using a chest X-ray. When a lesion such as a pulmonary nodule is found, the medical staff monitors the size of the lesion through periodic X-ray imaging.
In general, medical staff determine the size of a lesion in an X-ray image by eye or with a ruler. Furthermore, they may measure its area by tracing the outline of the nodule using computer-aided diagnosis (CAD). However, since a lesion is an object in 3D space, it is difficult to accurately observe changes in its size with only a 2D X-ray image. Furthermore, X-ray images have lower sharpness than computed tomography (CT), so it is also difficult to accurately determine the boundary of the nodule shadow.
The technology described below provides a method for estimating lesion volume information from X-ray images, based on the research finding that the lesion detection probability produced by a learning model on a 2D X-ray image and the area of the lesion both have a consistent correlation with the volume of the lesion.
The method for estimating lesion volume information using an X-ray image includes: receiving, by an analysis device, a 2D X-ray image of a patient; determining, by the analysis device, a detection probability for a specific lesion by inputting the X-ray image to a machine learning model; and estimating, by the analysis device, 3D volume information of the lesion based on the area of the lesion detected in the X-ray image and the detection probability.
An analysis device for estimating lesion volume information using an X-ray image includes: an input device that receives a patient's 2D X-ray image; a storage device that stores a machine learning model pretrained to classify the likelihood of a specific lesion based on a subject's X-ray image; and an arithmetic device that determines a detection probability for the specific lesion by inputting the patient's X-ray image to the machine learning model and estimates 3D volume information of the lesion based on the area of the lesion detected in the X-ray image and the detection probability.
The technology described below enables an accurate diagnosis of the patient by calculating volume information of a lesion from a 2D X-ray image. Based on the diagnostic result, medical staff can determine appropriate treatment guidelines for the patient's condition.
FIG. 1 is an example of a system for estimating lesion volume information using an X-ray image.
FIG. 2 is a result showing the correlation between area, volume, and detection probability for thoracic pulmonary nodules.
FIG. 3 is a result of fitting the area and detection probability to the volume on CT for thoracic pulmonary nodules.
FIG. 4 is a result showing the correlation between area and volume for thoracic pulmonary nodules.
FIG. 5 is another example of a result of fitting the area and detection probability to the volume on CT.
FIG. 6 is an example of a process of estimating lesion volume information using an X-ray image.
FIG. 7 is an example of an analysis device for estimating lesion volume information using an X-ray image.
Since the technology described below may be modified in various ways and may have several embodiments, specific embodiments are illustrated in the drawings and described in detail. However, this is not intended to limit the technology described below to any particular embodiment, and it should be understood to include all modifications, equivalents, and substitutes falling within the spirit and scope of the technology described below.
Terms such as first, second, A, and B may be used to describe various components, but the components are not limited by these terms; the terms are used only to distinguish one component from another. For example, without departing from the scope of the technology described below, a first component may be referred to as a second component, and similarly a second component may be referred to as a first component. The term "and/or" includes a combination of a plurality of related listed items or any one of a plurality of related listed items.
In the terms used in this specification, singular expressions should be understood to include plural expressions unless the context clearly indicates otherwise, and terms such as "comprises" mean that the described features, numbers, steps, operations, components, parts, or combinations thereof are present, and do not exclude the presence or possible addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof.
Before describing the drawings in detail, it should be made clear that the division of components in this specification is merely a division according to the main function each component performs. That is, two or more components described below may be combined into a single component, or one component may be divided into two or more components by more subdivided function. Each component described below may additionally perform some or all of the functions of other components in addition to its own main function, and some of the main functions of each component may be performed exclusively by another component.
In performing a method or method of operation, the steps constituting the method may occur in an order different from the stated order unless a specific order is clearly described in context. That is, the steps may occur in the stated order, may be performed substantially simultaneously, or may be performed in the reverse order.
The technology described below estimates information on the size of a subject's lesion using a medical image. Medical images may be of various kinds, such as X-ray images, ultrasound images, CT (computed tomography) images, and MRI (magnetic resonance imaging) images. However, the technology described below estimates the lesion size from a 2D medical image. Accordingly, the medical image below may be any of various medical images in 2D format, for example an X-ray image or a 2D composite image extracted from a 3D medical image. For convenience, the description below focuses on X-ray images.
X-ray images may be images of various body parts depending on the type of disease. The lesion here is one whose opacity increases as the disease progresses. For convenience, the target lesion is described with a focus on pulmonary nodules; that is, the X-ray image below corresponds to a chest X-ray image.
The technology described below uses, for the lesion detection probability, a machine learning model that receives an X-ray image and outputs the probability that a specific lesion is present. The machine learning model may be any of various types, such as a decision tree, random forest, KNN (K-nearest neighbor), naive Bayes, SVM (support vector machine), or ANN (artificial neural network).
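As a minimal sketch of the interface such a model provides — not the trained classifier of the study, whose architecture and weights are unspecified — a detection probability can be illustrated with a simple logistic scoring of image features:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def detection_probability(image, weights, bias):
    """Score a 2D image with a linear model and squash to a probability.

    A stand-in for the trained classifier (e.g. a CNN): the actual model
    used in the study is not specified here.
    """
    features = image.astype(float).ravel()
    return float(sigmoid(features @ weights + bias))

# Toy usage with a random "X-ray" patch and random weights.
rng = np.random.default_rng(0)
img = rng.random((8, 8))
w = rng.normal(size=img.size) * 0.01
p = detection_probability(img, w, bias=0.0)
print(p)  # a value strictly between 0 and 1
```

Whatever model type is chosen from the list above, the only property the rest of the pipeline relies on is this scalar probability output.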
Hereinafter, it is described that an analysis device estimates the volume information of a lesion from an X-ray image. The analysis device may be implemented with various devices capable of processing data, for example a PC, a server on a network, a smart device, or a chipset in which a dedicated program is embedded.
The volume information may be the lesion volume in 3D space, or information having a fixed correlation (a proportional relationship) with the lesion volume. That is, the volume information below may be the volume of the lesion itself, or a value that quantifies the size of the lesion even if it is not the exact volume.
FIG. 1 is an example of a system 100 for estimating lesion volume information using an X-ray image. FIG. 1 illustrates an example in which the analysis device is a computer terminal 130 and a server 140.
The X-ray equipment 110 generates a chest X-ray image of a subject. The subject's chest X-ray image may be stored in an EMR (electronic medical record) 120.
In FIG. 1, user A may calculate the volume information of a lesion from the subject's chest X-ray image using the computer terminal 130. The computer terminal 130 receives the subject's chest X-ray image. The computer terminal 130 may receive the chest X-ray image from the X-ray equipment 110 or the EMR 120 through a wired or wireless network. In some cases, the computer terminal 130 may be a device physically connected to the X-ray equipment 110.
The computer terminal 130 receives the 2D area of the lesion determined on the basis of the subject's chest X-ray image. Alternatively, the computer terminal 130 may itself determine the 2D area of the lesion using the subject's chest X-ray image.
The computer terminal 130 calculates the detection probability of the lesion by inputting the chest X-ray image to a pre-trained machine learning model. The machine learning model calculates the probability that a specific lesion (a pulmonary nodule) is present in an X-ray image. The machine learning model must be trained in advance by supervised learning. If the pulmonary nodule is the target lesion, the machine learning model corresponds to a model that, given an X-ray image, produces a classification result (a probability) as to whether a pulmonary nodule is present in that image. The machine learning model may be trained in advance specifically for a particular lesion.
The computer terminal 130 may estimate the volume information of the lesion using the lesion area in the X-ray image and the detection probability described above. User A can check the analysis result.
The server 140 may receive the subject's chest X-ray image from the X-ray equipment 110 or the EMR 120. The server 140 may receive the 2D area of the lesion determined on the basis of the subject's chest X-ray image; in this case, the server 140 may receive the lesion area information from a terminal used by medical staff. Alternatively, the server 140 may determine the 2D area of the lesion using the subject's chest X-ray image.
The server 140 calculates the detection probability of the lesion by inputting the chest X-ray image to a pre-trained machine learning model. The machine learning model calculates the probability that a specific lesion (a pulmonary nodule) is present in an X-ray image.
The server 140 may estimate the volume information of the lesion using the lesion area in the X-ray image and the detection probability described above. The server 140 may transmit the analysis result to user A's terminal, where user A can check it.
The computer terminal 130 and/or the server 140 may also store the analysis result in the EMR 120.
The researchers determined volume information using X-ray images, exploiting the detection probability that a machine learning model (an AI model) produces for an X-ray image. They hypothesized that if the machine learning model detects a pulmonary nodule in an X-ray image with high probability, the detection is a clear-cut one, and that a clearly detected nodule is, in the end, a dense or large nodule; they then verified this hypothesis.
The researchers collected image information acquired at regular time intervals from patients with pulmonary nodules, using the image information of a total of 315 patients at their medical institution. In this process, all patient information and image identifiers were anonymized to protect personal information. Of these, 72 subjects whose pulmonary nodules changed in size over time and for whom both X-ray and CT images were available were selected for the study. The results derived from the study and the regression analyses are described below.
FIG. 2 shows the correlations among area, volume, and detection probability for thoracic pulmonary nodules. FIG. 2 plots linear regression and non-linear regression (quadratic) results; some panels also show the RMSE (root mean square error) of each regression. The researchers determined the area of each nodule in the X-ray image using a CAD program.
FIG. 2(A) shows the correlation between the area of the nodule in the X-ray image and the volume on CT. Looking at FIG. 2(A), area and volume are correlated to some degree, with a correlation coefficient of 0.58, and the difference between linear and non-linear regression is small.
FIG. 2(B) shows the correlation between the average detection probability (Prob.Mean) of the nodule in the X-ray image and the volume on CT. Here, the detection probability is the value output by the pre-trained machine learning model for the X-ray image. The researchers used the average value of the detection probability: the detection probability was computed several times for the same X-ray image and the average of those values was used. In FIG. 2(B), the correlation coefficient between the detection probability and the volume on CT was 0.22, a low correlation.
FIG. 2(C) shows the correlation between the area of the nodule and the average detection probability of the nodule in the X-ray image. Looking at FIG. 2(C), the correlation coefficient between the nodule area and the average detection probability was 0.73, a high correlation. Moreover, the relationship between nodule area and average detection probability was highly linear.
FIG. 3 shows the result of fitting area and detection probability to the volume on CT for thoracic pulmonary nodules, using linear regression to fit a 2D plane. FIG. 3(A), FIG. 3(B), and FIG. 3(C) show the same result from different viewing angles. The 2D plane equation was set as "14886.637214 + (29.288656) * (area - (57206.109259)) * detection probability = volume". Some points with relatively large CT volumes lie above the plane (the circled region in FIG. 3(C)). Looking at FIG. 3 as a whole, however, the nodule area and detection probability lie on a plane with respect to the volume on CT. Therefore, the "nodule area and detection probability of the X-ray image" have a definite relationship with the volume on CT.
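As a sketch, the fitted plane above can be evaluated directly; the coefficients are the ones reported in the text, and the function is illustrative rather than a validated calibration:

```python
def plane_volume(area, detection_probability):
    """Evaluate the fitted 2D plane reported above:
    volume = 14886.637214 + 29.288656 * (area - 57206.109259) * probability
    """
    return 14886.637214 + 29.288656 * (area - 57206.109259) * detection_probability

# When the area equals the fitted offset, the probability term vanishes
# and the estimate reduces to the intercept.
print(plane_volume(57206.109259, 0.5))  # 14886.637214
```

Note that for a fixed area above the offset, a higher detection probability yields a larger estimated volume, which matches the hypothesis that clearly detected nodules are denser or larger.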
FIG. 4 shows the correlation between area and volume for thoracic pulmonary nodules, fitting the CT volume using area alone. FIG. 4 adds a model-based non-linear regression result (order 1.5) to the area-volume correlation for the thoracic pulmonary nodules shown in FIG. 2(A). In the model-based non-linear regression, the relationship between volume and area was set as "CT volume ∝ α * area^0.5 + β * area + γ * area^1.5 + δ". Looking at FIG. 4, linear regression and non-linear regression have similar RMSEs of 9445.95 and 9449.88, respectively, while model-based non-linear regression has an RMSE of 9395.73, slightly lower than either.
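A minimal sketch of such a model-based fit — with synthetic data standing in for the study's measurements — solves for α, β, γ, δ by ordinary least squares over the three area powers plus a constant:

```python
import numpy as np

def fit_power_model(area, volume):
    """Least-squares fit of volume ≈ a*area^0.5 + b*area + c*area^1.5 + d."""
    X = np.column_stack([area**0.5, area, area**1.5, np.ones_like(area)])
    coef, *_ = np.linalg.lstsq(X, volume, rcond=None)
    return coef

def predict_power_model(coef, area):
    a, b, c, d = coef
    return a * area**0.5 + b * area + c * area**1.5 + d

# Synthetic check: generate volumes from assumed coefficients and verify
# that the fit reproduces them.
rng = np.random.default_rng(1)
area = rng.uniform(100.0, 5000.0, size=50)
true = np.array([2.0, 0.5, 0.01, 100.0])
volume = predict_power_model(true, area)
coef = fit_power_model(area, volume)
print(np.allclose(predict_power_model(coef, area), volume))  # True on noise-free data
```

The model is linear in its coefficients even though it is non-linear in area, which is why a single least-squares solve suffices.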
FIG. 5 is another example of fitting area and detection probability to the volume on CT. Unlike FIG. 4, FIG. 5 is the result of a 2D fit of the nodule area together with the nodule detection probability to the CT volume. FIG. 5(A) is a 2D fit by linear regression, FIG. 5(B) a 2D fit by non-linear regression, and FIG. 5(C) a 2D fit by model-based non-linear regression.
The linear regression fit in FIG. 4 had an RMSE of 9445.95, while in FIG. 5(A) the RMSE improved to 8781.24. The non-linear regression fit in FIG. 4 had an RMSE of 9449.88, while in FIG. 5(B) the RMSE improved to 8351.0. The model-based non-linear regression fit in FIG. 4 had an RMSE of 9395.73, while in FIG. 5(C) the RMSE improved to 7975.55.
Referring to FIG. 4 and FIG. 5, using the detection probability together with the lesion area matches the volume on CT better than using the lesion area of the X-ray image alone. That is, using both the area of the nodule and the detection probability of the nodule makes it possible to estimate information on the volume of the nodule more accurately.
FIG. 6 is an example of a process 200 for estimating lesion volume information using an X-ray image.
First, the analysis device receives the patient's X-ray image captured by the X-ray equipment (210). The X-ray image is assumed to contain a lesion region.
The analysis device determines the lesion area on the basis of the input X-ray image (220). The lesion area may be a value calculated by medical staff marking the lesion region with a CAD program; in this case, the analysis device may receive the lesion area calculated by the CAD program.
Furthermore, the analysis device may calculate the lesion area directly from the X-ray image. For example, the analysis device may delineate the lesion region with image processing techniques and calculate the area of that region. Alternatively, the analysis device may input the X-ray image to a pre-trained segmentation model to delineate the lesion region and calculate the area of the delineated region. The segmentation model may be implemented with U-Net, FCN (fully convolutional networks), or the like.
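As a minimal sketch, once a segmentation model yields a binary lesion mask, the 2D area follows from the pixel count and the detector's pixel spacing; the spacing value used here is an assumed example, not one taken from the study:

```python
import numpy as np

def lesion_area_mm2(mask, pixel_spacing_mm=(0.14, 0.14)):
    """Area of a binary lesion mask in mm^2.

    `mask` is the segmentation output (nonzero = lesion); in practice the
    pixel spacing would come from the image metadata (e.g. a DICOM header).
    """
    pixel_area = pixel_spacing_mm[0] * pixel_spacing_mm[1]
    return float(np.count_nonzero(mask) * pixel_area)

# A 10x10 square lesion at 1 mm spacing covers 100 mm^2.
mask = np.zeros((64, 64), dtype=bool)
mask[20:30, 20:30] = True
print(lesion_area_mm2(mask, pixel_spacing_mm=(1.0, 1.0)))  # 100.0
```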
The analysis device classifies the lesion by inputting the patient's X-ray image to the machine learning model (230). FIG. 6 illustrates an artificial neural network such as a CNN as the machine learning model. An artificial neural network model trained in advance to classify a specific lesion in X-ray images calculates, given an X-ray image, the probability (detection probability) that the lesion is present in that image.
The analysis device may determine the volume information of the lesion using the lesion area and the detection probability (240). For example, the analysis device may calculate the volume information by multiplying the lesion area by the detection probability. Alternatively, the analysis device may calculate the volume information using a fixed function that takes the lesion area and the detection probability as variables.
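The two variants above can be sketched as follows; the plane coefficients reuse those reported for the FIG. 3 fit and are illustrative, not a validated calibration:

```python
def volume_by_product(area, prob):
    """Simplest variant: lesion area multiplied by detection probability."""
    return area * prob

def volume_by_plane(area, prob,
                    intercept=14886.637214,
                    slope=29.288656,
                    area_offset=57206.109259):
    """Variant using a fixed function of area and probability
    (coefficients as reported for the FIG. 3 plane fit)."""
    return intercept + slope * (area - area_offset) * prob

print(volume_by_product(200.0, 0.8))  # 160.0
```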
FIG. 7 is an example of an analysis device 300 that estimates lesion volume information using an X-ray image. The analysis device 300 corresponds to the analysis devices described above (130 and 140 in FIG. 1). The analysis device 300 may be physically implemented in various forms, for example a computer device such as a PC, a network server, or a chipset dedicated to data processing.
The analysis device 300 may include a storage device 310, a memory 320, an arithmetic device 330, an interface device 340, a communication device 350, and an output device 360.
The storage device 310 may store the patient's X-ray image. As described above, the analysis target is a 2D medical image of the patient; accordingly, the analysis target may also be a type of image other than an X-ray image.
The storage device 310 may store the machine learning model trained for lesion detection.
The storage device 310 may store a program for preprocessing the X-ray image in a fixed manner.
The storage device 310 may store a CAD program for calculating the lesion size in the X-ray image and a segmentation model for delineating the lesion region in the X-ray image.
The storage device 310 may store the lesion volume information that is the analysis result.
The memory 320 may store data and information generated while the analysis device 300 preprocesses the X-ray image, determines the lesion area, determines the detection probability of the lesion, and calculates the volume information of the lesion.
The interface device 340 is a device that receives certain commands and data from the outside. The interface device 340 may receive the patient's X-ray image from a physically connected input device or an external storage device. The interface device 340 may receive the lesion area of the X-ray image from an external device.
The interface device 340 may also transmit the analysis result to an external object.
The communication device 350 refers to a component that receives and transmits certain information through a wired or wireless network. The communication device 350 may receive the patient's X-ray image from an external object. The communication device 350 may receive the lesion area of the X-ray image from an external object. Alternatively, the communication device 350 may transmit the analysis result to an external object such as a user terminal.
Since the interface device 340 and the communication device 350 exchange certain data with a user or another physical object, they may also be collectively referred to as input/output devices. Limited to the function of receiving the X-ray image, the interface device 340 and the communication device 350 may be referred to as input devices.
The output device 360 is a device that outputs certain information. The output device 360 may output the interfaces, analysis results, and the like needed in the data processing process.
The arithmetic device 330 may receive the X-ray image and estimate the lesion volume information using the machine learning model or program stored in the storage device 310.
The arithmetic device 330 may preprocess the input X-ray image in a fixed manner, for example performing tasks such as noise removal and brightness adjustment of the X-ray image.
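A minimal preprocessing sketch — the filter and scaling below are illustrative choices, not the device's mandated pipeline — might combine a median filter for noise removal with min-max scaling for brightness adjustment:

```python
import numpy as np

def preprocess_xray(image):
    """Denoise with a 3x3 median filter, then rescale intensities to [0, 1].

    Both steps are illustrative stand-ins for whatever preprocessing
    the stored program actually performs.
    """
    padded = np.pad(image.astype(float), 1, mode="edge")
    # Stack the nine shifted views of the image; their per-pixel median
    # is a 3x3 median filter.
    windows = np.stack([padded[i:i + image.shape[0], j:j + image.shape[1]]
                        for i in range(3) for j in range(3)])
    denoised = np.median(windows, axis=0)
    lo, hi = denoised.min(), denoised.max()
    return (denoised - lo) / (hi - lo) if hi > lo else np.zeros_like(denoised)

out = preprocess_xray(np.arange(16.0).reshape(4, 4))
print(out.min(), out.max())  # 0.0 1.0
```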
The arithmetic device 330 may calculate the lesion area from the X-ray image using the CAD program. In this process, the interface device 340 may receive a command from the user selecting the lesion region.
The arithmetic device 330 may input the X-ray image to the segmentation model to delineate the lesion region, and may calculate the area of the delineated lesion region.
The arithmetic device 330 may calculate the lesion detection probability by inputting the X-ray image to the pre-trained machine learning model.
The arithmetic device 330 may estimate the volume information using the lesion area and the detection probability. For example, the arithmetic device 330 may calculate the volume information by multiplying the lesion area by the detection probability, or may calculate the volume information through a mathematical operation that takes the lesion area and the detection probability as variables.
The arithmetic device 330 may be a device that processes data and performs certain operations, such as a processor, an AP, or a chip in which a program is embedded.
The method for estimating lesion volume information described above may also be implemented as a program (or application) including an executable algorithm that can be executed on a computer. The program may be stored and provided in a transitory or non-transitory computer readable medium.
A non-transitory readable medium is not a medium that stores data for a short moment, such as a register, cache, or memory, but a medium that stores data semi-permanently and can be read by a device. Specifically, the various applications or programs described above may be stored and provided in a non-transitory readable medium such as a CD, DVD, hard disk, Blu-ray disc, USB, memory card, ROM (read-only memory), PROM (programmable read-only memory), EPROM (erasable PROM), EEPROM (electrically erasable PROM), or flash memory.
A transitory readable medium refers to various kinds of RAM, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synclink DRAM (SLDRAM), and direct Rambus RAM (DRRAM).
This embodiment and the drawings attached to this specification clearly show only part of the technical idea included in the technology described above, and it is self-evident that all modified examples and specific embodiments that a person skilled in the art can easily infer within the scope of the technical idea included in the specification and drawings of the above technology fall within the scope of the technology described above.

Claims (8)

  1. A method for estimating lesion volume information using an X-ray image, comprising:
    receiving, by an analysis device, a 2D X-ray image of a patient;
    determining, by the analysis device, a detection probability for a specific lesion by inputting the X-ray image to a machine learning model; and
    estimating, by the analysis device, 3D volume information of the lesion on the basis of the area of the lesion detected in the X-ray image and the detection probability.
  2. The method for estimating lesion volume information using an X-ray image according to claim 1, wherein the area of the lesion is determined using a CAD (computer-aided diagnosis) program for the X-ray image, or using a segmentation model that receives the X-ray image and segments the lesion region.
  3. The method for estimating lesion volume information using an X-ray image according to claim 1, wherein the analysis device estimates the volume information on the basis of a result of multiplying the area of the lesion by the detection probability.
  4. The method for estimating lesion volume information using an X-ray image according to claim 1, wherein the machine learning model is a model trained in advance to receive an X-ray image of a patient and classify the possibility of occurrence of the specific lesion.
  5. An analysis device for estimating lesion volume information using an X-ray image, the device comprising:
    an input device that receives a 2D X-ray image of a patient;
    a storage device that stores a machine learning model trained in advance to classify the likelihood of occurrence of a specific lesion based on a subject's X-ray image; and
    a processing device that inputs the patient's X-ray image into the machine learning model to determine a detection probability for the specific lesion, and estimates 3D volume information of the lesion based on the area of the lesion detected in the X-ray image and the detection probability.
  6. The analysis device of claim 5, wherein the input device further receives the area of the specific lesion determined based on the patient's X-ray image.
  7. The analysis device of claim 5, wherein the storage device further stores a computer-aided diagnosis (CAD) program or a segmentation model that receives an X-ray image and segments the lesion region, and the processing device determines the area of the lesion by analyzing the patient's X-ray image using the CAD program or the segmentation model.
  8. The analysis device of claim 5, wherein the processing device estimates the volume based on the result of multiplying the area of the lesion by the detection probability.
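The estimation rule of claims 3 and 8, weighting the detected 2D lesion area by the model's detection probability and mapping the result to a 3D volume, can be sketched as follows. Note the function name, the spherical-cross-section volume mapping, and the unit choices are illustrative assumptions; the claims specify only that the volume estimate is based on the product of area and detection probability.

```python
import math

def estimate_lesion_volume(area_mm2: float, detection_prob: float) -> float:
    """Estimate 3D lesion volume (mm^3) from a 2D lesion area (mm^2) and a
    machine-learning detection probability, per the area-times-probability rule."""
    if not 0.0 <= detection_prob <= 1.0:
        raise ValueError("detection probability must lie in [0, 1]")
    # Weight the segmented 2D area by the classifier's detection probability
    # (the multiplication recited in claims 3 and 8).
    weighted_area = area_mm2 * detection_prob
    # Hypothetical volume mapping: treat the weighted area as the great-circle
    # cross section of a sphere, so r = sqrt(A / pi) and V = (4/3) * pi * r^3.
    radius = math.sqrt(weighted_area / math.pi)
    return (4.0 / 3.0) * math.pi * radius ** 3
```

A usage example: a segmented area of pi mm^2 with detection probability 1.0 yields a sphere of radius 1 mm, i.e. a volume of 4*pi/3 mm^3, while a probability of 0 yields a zero volume.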
PCT/KR2022/015555 2021-11-30 2022-10-14 Method for estimating lesion volume using x-ray image, and analysis device WO2023101203A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020210168323A KR20230080825A (en) 2021-11-30 2021-11-30 Estimating method for volume of lesion using x ray image and analysis apparatus
KR10-2021-0168323 2021-11-30

Publications (1)

Publication Number Publication Date
WO2023101203A1 true WO2023101203A1 (en) 2023-06-08

Family

ID=86612549

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2022/015555 WO2023101203A1 (en) 2021-11-30 2022-10-14 Method for estimating lesion volume using x-ray image, and analysis device

Country Status (2)

Country Link
KR (1) KR20230080825A (en)
WO (1) WO2023101203A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160350919A1 (en) * 2015-06-01 2016-12-01 Virtual Radiologic Corporation Medical evaluation machine learning workflows and processes
KR101992057B1 (en) * 2018-08-17 2019-06-24 (주)제이엘케이인스펙션 Method and system for diagnosing brain diseases using vascular projection images
KR20200092466A (en) * 2019-01-07 2020-08-04 재단법인대구경북과학기술원 Device for training analysis model of medical image and training method thereof
KR20200117344A (en) * 2019-04-04 2020-10-14 한국과학기술원 Interactive computer-aided diagnosis method for lesion diagnosis and the system thereof
KR102237198B1 (en) * 2020-06-05 2021-04-08 주식회사 딥노이드 Ai-based interpretation service system of medical image


Also Published As

Publication number Publication date
KR20230080825A (en) 2023-06-07


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22901563

Country of ref document: EP

Kind code of ref document: A1