WO2023101203A1 - Method for estimating lesion volume using an X-ray image, and analysis device - Google Patents

Method for estimating lesion volume using an X-ray image, and analysis device

Info

Publication number
WO2023101203A1
Authority
WO
WIPO (PCT)
Prior art keywords
lesion
X-ray image
area
volume information
detection probability
Prior art date
Application number
PCT/KR2022/015555
Other languages
English (en)
Korean (ko)
Inventor
정명진
차윤기
임채영
Original Assignee
사회복지법인 삼성생명공익재단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 사회복지법인 삼성생명공익재단 filed Critical 사회복지법인 삼성생명공익재단
Publication of WO2023101203A1 publication Critical patent/WO2023101203A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10116 X-ray image

Definitions

  • The technology described below relates to calculating 3D volume information of a specific lesion from a 2D X-ray image.
  • An X-ray image is a basic medical image for examination of various diseases.
  • For example, medical staff may check a patient's pulmonary nodule using a chest X-ray. When a lesion such as a pulmonary nodule is found, the medical staff monitor its size through periodic X-ray imaging.
  • Conventionally, medical staff determine the size of a lesion in an X-ray image by checking the nodule visually or by measuring it with a ruler. Furthermore, the medical staff may measure its area by drawing the outer edge of the nodule using computer-aided diagnosis (CAD).
  • Since a lesion is an object in 3D space, it is difficult to accurately observe changes in its size from a 2D X-ray image alone.
  • X-ray images are also inferior in sharpness to CT (computed tomography) images, so it is difficult to accurately delineate the boundary of a nodule shadow.
  • The technology described below provides a method for estimating lesion volume information from X-ray images, based on the research finding that the lesion detection probability produced by a learning model for a 2D X-ray image, together with the lesion area, has a consistent correlation with the lesion volume.
  • The method for estimating lesion volume information using an X-ray image includes: receiving, by an analysis device, a 2D X-ray image of a patient; determining, by the analysis device, a detection probability of a specific lesion by inputting the X-ray image to a machine learning model; and estimating, by the analysis device, 3D volume information of the lesion based on the area of the lesion detected in the X-ray image and the detection probability.
  • An analysis device that estimates lesion volume information using an X-ray image includes an input device that receives a patient's 2D X-ray image, a storage device that stores a machine learning model pre-trained to classify the likelihood of a specific lesion from a patient's X-ray image, and an arithmetic device that estimates 3D volume information of the lesion based on the area of the lesion detected in the X-ray image and the detection probability output by the model.
  • the technology described below enables an accurate diagnosis of a patient by calculating volume information of a lesion based on a 2D X-ray image. Based on the diagnostic results, medical staff can determine appropriate treatment guidelines for the patient's condition.
  • FIG. 1 is an example of a system for estimating lesion volume information using an X-ray image.
  • FIG. 2 is a result showing the correlation between area, volume, and detection probability for thoracic pulmonary nodules.
  • FIG. 3 is a result of fitting the area and detection probability to the volume on a CT image for a thoracic pulmonary nodule.
  • FIG. 4 is a result showing the correlation between the area and the volume of a thoracic pulmonary nodule.
  • FIG. 5 is another example of a result of fitting an area and a detection probability to a volume on a CT image.
  • FIG. 6 is an example of a process of estimating lesion volume information using an X-ray image.
  • FIG. 7 is an example of an analysis device for estimating lesion volume information using an X-ray image.
  • Terms such as first, second, A, and B may be used to describe various elements, but the elements are not limited by these terms; they are used only to distinguish one element from another. For example, without departing from the scope of the technology described below, a first element may be referred to as a second element, and similarly, the second element may be referred to as a first element.
  • The term "and/or" includes any combination of a plurality of related recited items, or any one of the plurality of related recited items.
  • each component to be described below may be combined into one component, or one component may be divided into two or more for each more subdivided function.
  • Each component described below may additionally perform some or all of the functions of other components in addition to its main function, and some of the main functions of each component may instead be performed entirely by another component.
  • each process constituting the method may occur in a different order from the specified order unless a specific order is clearly described in context. That is, each process may occur in the same order as specified, may be performed substantially simultaneously, or may be performed in the reverse order.
  • a technique to be described below is a technique of estimating information on the size of a lesion of a subject using a medical image.
  • Medical images may be of various kinds, such as X-ray images, ultrasound images, CT (computed tomography) images, and MRI (magnetic resonance imaging) images.
  • the technique described below estimates the size of a lesion using a 2D medical image.
  • the following medical image may be any one of various medical images in a 2D format. For example, it may be an X-ray image or a 2D composite image extracted from a 3D medical image. However, for convenience of explanation, description will be made focusing on X-ray images.
  • X-ray images may be images of various body parts according to the type of disease.
  • The target lesion is one whose radiographic shadow increases as the disease progresses.
  • the target lesion will be described focusing on the pulmonary nodule. That is, the following X-ray image corresponds to a chest X-ray image.
  • The lesion detection probability is calculated by a machine learning model that receives an X-ray image and outputs the probability that a specific lesion is present.
  • The machine learning model may be any of many types, including decision trees, random forests, K-nearest neighbors (KNN), naive Bayes, support vector machines (SVM), and artificial neural networks (ANN).
  • the analysis device estimates volumetric information of a lesion from an X-ray image.
  • the analysis device may be implemented with various devices capable of processing data.
  • the analysis device may be implemented as a PC, a server on a network, a smart device, a chipset in which a dedicated program is embedded, and the like.
  • the volume information may be a lesion volume in 3D space or information having a certain correlation (proportional relationship) with the lesion volume. That is, the following volume information may be the volume of the lesion itself or may be information (value) capable of quantifying the size of the lesion even if it is not an exact volume.
  • FIG. 1 is an example of a system 100 for estimating lesion volume information using an X-ray image. FIG. 1 illustrates an example in which the analysis device is a computer terminal 130 and a server 140.
  • the X-ray equipment 110 generates a chest X-ray image of a subject.
  • A chest X-ray image of a subject may be stored in an electronic medical record (EMR) 120.
  • the user A may use the computer terminal 130 to calculate the volume information of the lesion using an X-ray image of the subject's chest.
  • the computer terminal 130 receives an X-ray image of the chest of the subject.
  • the computer terminal 130 may receive a chest X-ray image from the X-ray equipment 110 or the EMR 120 through a wired or wireless network.
  • the computer terminal 130 may be a device physically connected to the X-ray equipment 110 .
  • the computer terminal 130 receives the 2D area of the lesion determined based on the subject's chest X-ray image. Alternatively, the computer terminal 130 may determine the 2D area of the lesion using a chest X-ray image of the subject.
  • the computer terminal 130 calculates a detection probability of a lesion by inputting the chest X-ray image to a pre-learned machine learning model.
  • the machine learning model is a model that calculates the probability of occurrence of a specific lesion (pulmonary nodule) in an X-ray image.
  • The machine learning model is trained in advance by supervised learning. If a pulmonary nodule is the target lesion, the machine learning model calculates a classification result (probability) as to whether a pulmonary nodule is present in an input X-ray image.
  • the machine learning model can be pre-learned specifically for a specific lesion.
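As a minimal sketch of the supervised training described above, the snippet below trains a logistic-regression classifier on toy 1D features standing in for image-derived features; the feature values, labels, and hyperparameters are illustrative assumptions, and a real system would train a CNN directly on chest X-ray images:

```python
import numpy as np

def train_logistic(X, y, lr=0.1, epochs=500):
    """Supervised training sketch: logistic regression standing in for
    the lesion classifier. X: (n, d) features, y: (n,) labels in {0, 1}.
    Returns weights w and bias b."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        w -= lr * (X.T @ (p - y)) / n            # gradient step on weights
        b -= lr * float(np.mean(p - y))          # gradient step on bias
    return w, b

def predict_proba(w, b, X):
    """Detection probability for each row of X."""
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

# toy labeled data: one feature separating "no nodule" (0) from "nodule" (1)
X = np.array([[0.0], [0.2], [0.8], [1.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w, b = train_logistic(X, y)
probs = predict_proba(w, b, X)
```

After training, `probs` rises with the feature value, so positive examples score above 0.5 and negative ones below.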
  • the computer terminal 130 may estimate the volume information of the lesion using the lesion area in the X-ray image and the detection probability described above. User A can check the analysis result.
  • the server 140 may receive an X-ray image of the subject's chest from the X-ray equipment 110 or the EMR 120 .
  • the server 140 may receive the 2D area of the lesion determined based on the subject's chest X-ray image. In this case, the server 140 may receive information on the area of the lesion from the terminal used by the medical staff. Alternatively, the server 140 may determine the 2D area of the lesion using a chest X-ray image of the subject.
  • the server 140 calculates a detection probability of a lesion by inputting the chest X-ray image to a pre-learned machine learning model.
  • the machine learning model is a model that calculates the probability of occurrence of a specific lesion (pulmonary nodule) in an X-ray image.
  • the server 140 may estimate the volume information of the lesion using the lesion area in the X-ray image and the detection probability described above.
  • the server 140 may transmit the analysis result to the terminal of user A. User A can check the analysis result.
  • the computer terminal 130 and/or the server 140 may store the analysis result in the EMR 120.
  • The researchers determined volume information using X-ray images, together with detection probability information calculated by a machine learning model (AI model) from those images.
  • the researcher collected image information acquired at regular time intervals for patients with pulmonary nodules.
  • the researcher utilized the image information of a total of 315 patients at the affiliated medical institution. In this process, all patient information and image identification information were anonymized to protect personal information.
  • 72 subjects who showed changes in pulmonary nodule size over a certain period and for whom both X-ray and CT images were available were selected and studied. The results of the study and of the regression analysis are described below.
  • Figure 2 is a result showing the correlation between area, volume and detection probability for thoracic pulmonary nodules.
  • Figure 2 shows the results of linear regression and non-linear regression (quadratic).
  • Some figures also show Root Mean Square Error (RMSE) values for each regression analysis.
  • the researcher determined the area of the nodule in the X-ray image using a CAD program.
  • FIG. 2(A) shows the correlation between the area of the nodule on the X-ray image and the volume on CT. Referring to FIG. 2(A), the area and the volume are moderately correlated, with a correlation coefficient of 0.58, and the difference between the linear and nonlinear regressions is small.
  • FIG. 2(B) shows the correlation between the average detection probability (Prob. Mean) of a nodule in an X-ray image and the volume on a CT image.
  • The detection probability is the probability value that the pre-trained machine learning model outputs when it receives an X-ray image.
  • The researchers used the average value of the detection probability; that is, the detection probability was calculated several times for the same X-ray image, and the average was used as the detection probability.
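The averaging step can be sketched as follows; the helper name and the example probability values are hypothetical, since the patent does not specify the classifier or how many readings were taken:

```python
import numpy as np

def mean_detection_probability(model_outputs):
    """Average several detection-probability readings for one X-ray.

    `model_outputs` is a sequence of probabilities in [0, 1] obtained by
    running the (hypothetical) classifier several times on the same image.
    """
    probs = np.asarray(model_outputs, dtype=float)
    if probs.size == 0:
        raise ValueError("need at least one probability")
    if np.any((probs < 0) | (probs > 1)):
        raise ValueError("probabilities must lie in [0, 1]")
    return float(probs.mean())

# e.g. three inference runs on the same chest X-ray (illustrative values)
avg = mean_detection_probability([0.81, 0.79, 0.83])
```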
  • the correlation coefficient between the detection probability and the volume on the CT showed a low correlation of 0.22.
  • FIG. 2(C) shows the correlation between the area of the nodule and the average detection probability of the nodule in the X-ray image.
  • The correlation coefficient between the nodule area and the average detection probability was 0.73, a high correlation.
  • That is, the nodule area and the average detection probability showed high linearity.
  • FIG. 3 shows the result of fitting the area and detection probability to the CT volume for thoracic pulmonary nodules.
  • FIG. 3 shows the nodule area and detection probability fitted to a 2D plane by linear regression against the CT image volume.
  • FIGS. 3(A), 3(B), and 3(C) show the same result from different angles.
  • Where the CT volume is relatively large, points rising above the plane are observed (circled area in FIG. 3(C)).
  • the area and detection probability of the nodule are generally located on a plane with respect to the volume of the CT image. Therefore, it can be seen that the “nodule area and detection probability of the X-ray image” has a certain correlation with the volume on the CT image.
  • FIG. 4 is a result showing the correlation between the area and the volume of the thoracic pulmonary nodule. FIG. 4 corresponds to the result of fitting the CT volume linearly from the area alone.
  • FIG. 4 is an example in which a model-based non-linear regression result (order 1.5) is added for the correlation between area and volume for the thoracic pulmonary nodule shown in FIG. 2(A).
  • In the model-based nonlinear regression, the model establishes the relationship between volume and area as: CT volume ≈ α·area^0.5 + β·area + γ·area^1.5 + δ.
  • Linear regression and nonlinear regression have similar RMSEs of 9445.95 and 9449.88, respectively, while model-based nonlinear regression has an RMSE of 9395.73, slightly lower than either.
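The model-based regression relation, CT volume ≈ α·area^0.5 + β·area + γ·area^1.5 + δ, is linear in its coefficients, so it can be fit by ordinary least squares over the basis [area^0.5, area, area^1.5, 1]. The synthetic data and coefficient values below are illustrative only, not the study's:

```python
import numpy as np

def fit_volume_model(area, volume):
    """Least-squares fit of volume ≈ a*area**0.5 + b*area + c*area**1.5 + d."""
    area = np.asarray(area, dtype=float)
    # design matrix over the model's basis functions
    X = np.column_stack([area**0.5, area, area**1.5, np.ones_like(area)])
    coef, *_ = np.linalg.lstsq(X, np.asarray(volume, dtype=float), rcond=None)
    return coef  # [a, b, c, d]

def predict_volume(coef, area):
    a = np.asarray(area, dtype=float)
    return coef[0] * a**0.5 + coef[1] * a + coef[2] * a**1.5 + coef[3]

# synthetic check: noiseless data generated from known coefficients
rng = np.random.default_rng(0)
area = rng.uniform(10, 400, size=50)
vol = 2.0 * area**0.5 + 1.5 * area + 0.3 * area**1.5 + 5.0
coef = fit_volume_model(area, vol)
```

On noiseless synthetic data the fit recovers the generating coefficients; on real area/CT-volume pairs the same call yields the regression the figures describe.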
  • FIG. 5 is another example of fitting area and detection probability to the volume on a CT image. Unlike FIG. 4, FIG. 5 shows the result of 2D-fitting both the nodule area and the nodule detection probability to the CT image volume. FIG. 5(A) is the result of 2D fitting by linear regression, FIG. 5(B) by nonlinear regression, and FIG. 5(C) by model-based nonlinear regression.
  • With linear regression on area alone, the RMSE was 9445.95, but in FIG. 5(A) it improves to 8781.24.
  • With nonlinear regression on area alone, the RMSE was 9449.88, but in FIG. 5(B) it improves to 8351.0.
  • With model-based nonlinear regression on area alone, the RMSE was 9395.73, but in FIG. 5(C) it improves to 7975.55.
  • In other words, the CT volume is matched better when the detection probability is used together with the lesion area than when the lesion area of the X-ray image is used alone. That is, the nodule volume can be estimated more accurately by using both the nodule area and the nodule detection probability.
  • FIG. 6 is an example of a process 200 for estimating lesion volume information using an X-ray image.
  • the analysis device receives an X-ray image of the patient captured by the X-ray equipment (210). It is assumed that the X-ray image includes a certain lesion area.
  • the analysis device determines the lesion area based on the input X-ray image (220).
  • The lesion area may be a value calculated after medical staff outline the lesion region using CAD.
  • the analysis device may input or receive the lesion area calculated through the CAD program.
  • the analysis device may directly calculate the lesion area using an X-ray image.
  • the analysis device may classify the lesion area using an image processing technique and calculate the area of the corresponding area.
  • the analysis device may input an X-ray image into a previously learned segmentation model to classify the lesion area and calculate the area of the segmented area.
  • the segmentation model may be implemented with U-net, fully convolutional networks (FCN), and the like.
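Once a segmentation model yields a binary lesion mask, the 2D area follows from a pixel count scaled by the pixel spacing; the function name and the spacing values below are assumptions for illustration, with spacing normally taken from the image metadata:

```python
import numpy as np

def lesion_area_from_mask(mask, pixel_spacing_mm=(1.0, 1.0)):
    """Area of a segmented lesion in mm^2 from a binary mask.

    `mask` is a 2D boolean/0-1 array, as a segmentation model (e.g. U-Net)
    might produce after thresholding; `pixel_spacing_mm` is the assumed
    (row, col) pixel spacing of the radiograph.
    """
    mask = np.asarray(mask).astype(bool)
    if mask.ndim != 2:
        raise ValueError("expected a 2D mask")
    # pixel count times per-pixel physical area
    return float(mask.sum() * pixel_spacing_mm[0] * pixel_spacing_mm[1])

mask = np.zeros((4, 4), dtype=int)
mask[1:3, 1:3] = 1                              # 4 lesion pixels
area = lesion_area_from_mask(mask, (0.5, 0.5))  # 4 * 0.25 = 1.0 mm^2
```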
  • the analysis device classifies the lesion by inputting the patient's X-ray image to the machine learning model (230).
  • 6 illustrates an artificial neural network such as CNN as a machine learning model.
  • An artificial neural network model trained in advance to classify a specific lesion in an X-ray image calculates a probability value (detection probability) that a lesion exists in the corresponding image when an X-ray image is input.
  • The analysis device may determine volume information of the lesion using the lesion area and the detection probability (240). For example, the analysis device may calculate the volume information by multiplying the lesion area by the detection probability. Alternatively, the analysis device may calculate the volume information using a predetermined function having the lesion area and the detection probability as variables.
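The two alternatives in the step above — a plain product of area and probability, or a function of both — can be sketched as follows; the plane coefficients shown are hypothetical placeholders that would in practice come from a regression against CT volumes, as in FIG. 5:

```python
def estimate_volume_info(area, prob, coef=None):
    """Estimate lesion volume information from 2D area and detection probability.

    With no coefficients this uses the simple product area * prob described
    in the text; otherwise it applies a fitted plane v = a*area + b*prob + c.
    """
    if coef is None:
        return area * prob
    a, b, c = coef
    return a * area + b * prob + c

# product form (illustrative numbers)
v1 = estimate_volume_info(350.0, 0.9)
# fitted-plane form with placeholder coefficients
v2 = estimate_volume_info(100.0, 0.5, coef=(2.0, 10.0, 1.0))
```

Either value is "volume information" in the patent's sense: not necessarily an exact volume, but a quantity with a consistent correlation to it.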
  • the analysis device 300 corresponds to the above-described analysis devices (130 and 140 in FIG. 1).
  • the analysis device 300 may be physically implemented in various forms.
  • the analysis device 300 may have a form of a computer device such as a PC, a network server, and a chipset dedicated to data processing.
  • the analysis device 300 may include a storage device 310, a memory 320, an arithmetic device 330, an interface device 340, a communication device 350, and an output device 360.
  • the storage device 310 may store an X-ray image of the patient.
  • The analysis target is a 2D medical image of a patient; accordingly, it may be an image of a type other than an X-ray image.
  • the storage device 310 may store a machine learning model trained for lesion detection.
  • The storage device 310 may store a program for pre-processing the X-ray image in a predetermined manner.
  • the storage device 310 may store a CAD program for calculating the size of a lesion in an X-ray image and a segmentation model for classifying a lesion area in an X-ray image.
  • the storage device 310 may store lesion volume information as an analysis result.
  • The memory 320 may store data and information generated while the analysis device 300 pre-processes the X-ray image, determines the lesion area, determines the lesion detection probability, and calculates the lesion volume information.
  • the interface device 340 is a device that receives certain commands and data from the outside.
  • the interface device 340 may receive an X-ray image of the patient from a physically connected input device or external storage device.
  • the interface device 340 may receive an input of the lesion area of the X-ray image from an external device.
  • the interface device 340 may transmit the analysis result to an external object.
  • the communication device 350 refers to a component that receives and transmits certain information through a wired or wireless network.
  • the communication device 350 may receive an X-ray image of the patient from an external object.
  • the communication device 350 may receive the lesion area of the X-ray image from an external object.
  • the communication device 350 may transmit the analysis result to an external object such as a user terminal.
  • Since the interface device 340 and the communication device 350 send and receive certain data to and from a user or another physical object, they may collectively be referred to as input/output devices. Limited to the function of receiving an X-ray image, the interface device 340 and the communication device 350 may be referred to as input devices.
  • the output device 360 is a device that outputs certain information.
  • the output device 360 may output interfaces and analysis results necessary for data processing.
  • the arithmetic device 330 may receive an X-ray image and estimate lesion volume information using a machine learning model or program stored in the storage device 310 .
  • The arithmetic device 330 may pre-process the received X-ray image in a predetermined manner. For example, the arithmetic device 330 may perform tasks such as noise removal and brightness adjustment on the X-ray image.
  • the arithmetic device 330 may calculate the lesion area from the X-ray image using a CAD program. During this process, the interface device 340 may receive a command to select a lesion area from the user.
  • the arithmetic device 330 may classify the lesion area by inputting the X-ray image to the segmentation model. The arithmetic device 330 may calculate the area of the divided lesion area.
  • the arithmetic device 330 may calculate a lesion detection probability by inputting the X-ray image to a pre-learned machine learning model.
  • the arithmetic device 330 may estimate the volume information using the lesion area and detection probability.
  • The arithmetic device 330 may calculate the volume information by multiplying the lesion area by the detection probability.
  • The arithmetic device 330 may also calculate the volume information through a mathematical operation having the lesion area and the detection probability as variables.
  • The arithmetic device 330 may be a device that processes data and performs certain operations, such as a processor, an AP, or a chip in which a program is embedded.
  • the method for estimating lesion volume information as described above may be implemented as a program (or application) including an executable algorithm that may be executed on a computer.
  • the program may be stored and provided in a temporary or non-transitory computer readable medium.
  • a non-transitory readable medium is not a medium that stores data for a short moment, such as a register, cache, or memory, but a medium that stores data semi-permanently and can be read by a device.
  • The various applications or programs described above may be stored and provided in a non-transitory readable medium such as a CD, DVD, hard disk, Blu-ray disc, USB drive, memory card, ROM (read-only memory), PROM (programmable ROM), EPROM (erasable PROM), EEPROM (electrically erasable PROM), or flash memory.
  • Temporary readable media include static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synclink DRAM (SLDRAM), and direct Rambus RAM (DRRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A method for estimating lesion volume using an X-ray image comprises the steps in which an analysis device: receives a 2D X-ray image of a patient as input; determines a detection probability for a specific lesion by inputting the X-ray image into a machine learning model; and estimates the 3D volume of the lesion on the basis of the area of the lesion detected from the X-ray image and the detection probability.
PCT/KR2022/015555 2021-11-30 2022-10-14 Procédé d'estimation de volume de lésion à l'aide d'une image radiographique et dispositif d'analyse WO2023101203A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020210168323A KR20230080825A (ko) 2021-11-30 2021-11-30 X 레이 영상을 이용한 병소 체적 정보 추정 방법 및 분석장치
KR10-2021-0168323 2021-11-30

Publications (1)

Publication Number Publication Date
WO2023101203A1 true WO2023101203A1 (fr) 2023-06-08

Family

ID=86612549

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2022/015555 WO2023101203A1 (fr) 2021-11-30 2022-10-14 Procédé d'estimation de volume de lésion à l'aide d'une image radiographique et dispositif d'analyse

Country Status (2)

Country Link
KR (1) KR20230080825A (fr)
WO (1) WO2023101203A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160350919A1 (en) * 2015-06-01 2016-12-01 Virtual Radiologic Corporation Medical evaluation machine learning workflows and processes
KR101992057B1 (ko) * 2018-08-17 2019-06-24 (주)제이엘케이인스펙션 혈관 투영 영상을 이용한 뇌질환 진단 방법 및 시스템
KR20200092466A (ko) * 2019-01-07 2020-08-04 재단법인대구경북과학기술원 의료 영상의 분석 모델을 학습시키는 학습 장치 및 그 학습 방법
KR20200117344A (ko) * 2019-04-04 2020-10-14 한국과학기술원 병변 해석을 위한 상호작용이 가능한 cad 방법 및 그 시스템
KR102237198B1 (ko) * 2020-06-05 2021-04-08 주식회사 딥노이드 인공지능 기반의 의료영상 판독 서비스 시스템


Also Published As

Publication number Publication date
KR20230080825A (ko) 2023-06-07

Similar Documents

Publication Publication Date Title
Jaeger et al. Detecting drug-resistant tuberculosis in chest radiographs
WO2017022908A1 (fr) Procédé et programme de calcul de l'âge osseux au moyen de réseaux neuronaux profonds
WO2019103440A1 (fr) Procédé permettant de prendre en charge la lecture d'une image médicale d'un sujet et dispositif utilisant ce dernier
US11334994B2 (en) Method for discriminating suspicious lesion in medical image, method for interpreting medical image, and computing device implementing the methods
WO2019143021A1 (fr) Procédé de prise en charge de visualisation d'images et appareil l'utilisant
WO2020232374A1 (fr) Localisation anatomique et régionale automatisée de caractéristiques de maladie dans des vidéos de coloscopie
WO2019143179A1 (fr) Procédé de détection automatique de mêmes régions d'intérêt entre des images du même objet prises à un intervalle de temps, et appareil ayant recours à ce procédé
KR102245219B1 (ko) 의료 영상에서 악성의심 병변을 구별하는 방법, 이를 이용한 의료 영상 판독 방법 및 컴퓨팅 장치
Naing et al. Advances in automatic tuberculosis detection in chest x-ray images
Shinde Deep Learning Approaches for Medical Image Analysis and Disease Diagnosis
EP3467770B1 (fr) Procédé d'analyse d'un ensemble de données d'imagerie médicale, système d'analyse d'un ensemble de données d'imagerie médicale, produit-programme d'ordinateur et support lisible par ordinateur
CN111462203B (zh) Dr病灶演化分析装置和方法
WO2023101203A1 (fr) Procédé d'estimation de volume de lésion à l'aide d'une image radiographique et dispositif d'analyse
Tahghighi et al. Automatic classification of symmetry of hemithoraces in canine and feline radiographs
WO2022204605A1 (fr) Interprétation de données de capteur peropératoire à l'aide de réseaux de neurones artificiels à graphique de concept
Zadeh et al. An analysis of new feature extraction methods based on machine learning methods for classification radiological images
US20200388395A1 (en) Apparatus, method, and non-transitory computer-readable storage medium
Ju et al. CODE-NET: A deep learning model for COVID-19 detection
WO2023113230A1 (fr) Procédé et dispositif d'analyse permettant de détecter un point de repère d'une image radiographique céphalométrique au moyen d'un apprentissage par renforcement profond
JP2021189960A (ja) 画像診断方法、画像診断支援装置、及び計算機システム
WO2024123057A1 (fr) Procédé et dispositif d'analyse pour visualiser une tumeur osseuse dans l'humérus à l'aide d'un cliché radiographique thoracique
CN110689112A (zh) 数据处理的方法及装置
KR101726505B1 (ko) 설 촬영 장치 및 설 영상의 프로세싱 방법
KR102231698B1 (ko) 정상 의료 영상을 필터링하는 방법, 이를 이용한 의료 영상 판독 방법 및 컴퓨팅 장치
KR102650919B1 (ko) 패치 단위 대조 학습 모델을 이용한 의료 영상 분석 방법 및 분석 장치

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22901563

Country of ref document: EP

Kind code of ref document: A1