WO2024123057A1 - Method and analysis device for visualizing a bone tumor in the humerus using a chest X-ray image - Google Patents

Method and analysis device for visualizing a bone tumor in the humerus using a chest X-ray image

Info

Publication number
WO2024123057A1
WO2024123057A1 (PCT/KR2023/019940; KR2023019940W)
Authority
WO
WIPO (PCT)
Prior art keywords
humerus
chest
patch
cam
bone tumor
Prior art date
Application number
PCT/KR2023/019940
Other languages
English (en)
Korean (ko)
Inventor
김경수
정명진
오성제
Original Assignee
사회복지법인 삼성생명공익재단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 사회복지법인 삼성생명공익재단
Publication of WO2024123057A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B6/5217 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/12 Arrangements for detecting or locating foreign bodies
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/46 Arrangements for interfacing with the operator or the patient
    • A61B6/461 Displaying means of special interest
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10116 X-ray image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30096 Tumor; Lesion

Definitions

  • the technology described below relates to a technique for classifying lesions of the humerus in thoracic medical images.
  • the technology described below relates to a technique for diagnosing a bone tumor of the humerus.
  • a chest X-ray contains information about various diseases or lesions.
  • chest X-rays also include the humerus area; lesions in the humerus area can therefore also be identified on a chest X-ray.
  • various learning models for interpreting medical images have recently been studied.
  • Supervised learning models require large amounts of learning data to learn the model.
  • medical institutions do not have a large number of chest X-rays for diagnosing lesions of the humerus. Accordingly, it is difficult with conventional techniques to create a model that accurately classifies humeral lesions in chest X-rays.
  • the technology described below seeks to provide a technique for accurately classifying humeral lesions in chest X-rays using a learning model.
  • the method of visualizing a bone tumor of the humerus using a chest X-ray includes: an analysis device receiving a chest X-ray image of a subject; the analysis device inputting the chest X-ray image into a segmentation model trained to distinguish the humerus region and generating a humerus patch of a certain size; and the analysis device inputting the humerus patch into a previously learned deep learning model and outputting a CAM (Class Activation Map) for the humerus patch together with whether or not a bone tumor is present.
  • the analysis device that classifies bone tumors of the humerus includes: an interface device that receives the subject's chest X-ray image; a storage device that stores a segmentation model that distinguishes the humerus region in the chest X-ray and a deep learning model that generates a CAM from an input humerus patch; and a calculation device that inputs the chest X-ray image into the segmentation model to generate a humerus patch of a certain size, inputs the humerus patch into the deep learning model, and calculates the CAM and bone tumor presence for the humerus patch.
  • the technology described below can classify bone tumors of the humerus by focusing only on the humerus region in chest X-rays. Additionally, the technique described below can accurately visualize the area of the bone tumor on a chest x-ray.
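The pipeline summarized above (segmentation → fixed-size humerus patch → CAM plus classification) can be sketched end to end. This is only an illustrative outline: `segment_humerus` and `classify_patch` are hypothetical stand-ins for the trained models of the disclosure, and the 64-pixel patch size is an arbitrary choice.

```python
import numpy as np

PATCH = 64  # hypothetical patch size

def segment_humerus(xray):
    """Stub for the segmentation model: returns a binary humerus mask."""
    mask = np.zeros_like(xray)
    mask[10:30, 12:28] = 1  # pretend the humerus sits here
    return mask

def crop_patch(xray, mask, size=PATCH):
    """Crop a fixed-size square patch centered on the mask's midpoint."""
    ys, xs = np.nonzero(mask)
    cy, cx = int(ys.mean()), int(xs.mean())
    half = size // 2
    padded = np.pad(xray, half)  # pad so the crop never leaves the image
    return padded[cy:cy + size, cx:cx + size]

def classify_patch(patch):
    """Stub for the CAM-producing deep model: returns (cam, tumor probability)."""
    cam = np.random.rand(*patch.shape)
    return cam, 0.5

xray = np.random.rand(64, 64)
mask = segment_humerus(xray)
patch = crop_patch(xray, mask)
cam, p_tumor = classify_patch(patch)
```

The real models would replace the two stubs; the surrounding flow (mask → midpoint → patch → CAM and probability) is the part the claims describe.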
  • Figure 1 is an example of a system for classifying humeral bone tumors using chest X-rays.
  • Figure 2 is a schematic example of the learning process of a learning model for classifying humeral bone tumors.
  • Figure 3 is an example of a learning data construction process or data preprocessing process.
  • Figure 4 is an example of a specific learning process of a learning model for classifying humeral bone tumors.
  • Figure 5 shows the results of evaluating the performance of a learning model for classifying humeral bone tumors.
  • Figure 6 is an example of an analysis device for classifying humeral bone tumors.
  • terms such as first, second, A, and B may be used to describe various components, but the components are not limited by these terms; they are used only to distinguish one component from another. For example, a first component may be named a second component without departing from the scope of the technology described below, and similarly, the second component may also be named a first component.
  • the term and/or includes any of a plurality of related stated items or a combination of a plurality of related stated items.
  • the division of components below is merely a division by main function: two or more components described below may be combined into one component, or one component may be divided into two or more components with more detailed functions.
  • each component described below may additionally perform some or all of the functions of other components, and some of the main functions each component is responsible for may instead be carried out entirely by another component.
  • each process that makes up the method may occur in a different order from the specified order unless a specific order is clearly stated in the context. That is, each process may occur in the same order as specified, may be performed substantially simultaneously, or may be performed in the opposite order.
  • Medical images can be diverse, such as X-ray images, ultrasound images, CT (Computer Tomography) images, and MRI (Magnetic Resonance Imaging) images.
  • the technology described below is a technique to classify lesions of the humerus using chest X-ray images.
  • the technology described below can classify bone tumors of the humerus in chest X-ray images.
  • Bone tumors include primary tumors such as osteosarcoma and bone metastases.
  • the analysis device uses a learning model to classify or diagnose lesions in chest X-rays.
  • the analysis device can be implemented as a variety of devices capable of processing data.
  • an analysis device can be implemented as a PC, a server on a network, a smart device, or a chipset with a dedicated program embedded therein.
  • Machine learning models include decision trees, random forest, KNN (K-nearest neighbor), Naive Bayes, SVM (support vector machine), and ANN (artificial neural network).
  • Figure 1 is an example of a system 100 for classifying humeral bone tumors using chest X-rays.
  • Figure 1 shows an example in which the analysis device is a computer terminal 130 and a server 140.
  • the x-ray equipment 110 generates a chest x-ray image for the subject (patient).
  • the x-ray equipment 110 may store the generated chest x-ray image in an electronic medical record (EMR) 120 or a separate database (DB).
  • user A can analyze a chest X-ray image using the computer terminal 130.
  • the computer terminal 130 may receive a chest X-ray image from the x-ray equipment 110 or the EMR 120 through a wired or wireless network. In some cases, the computer terminal 130 may be a device physically connected to the x-ray equipment 110.
  • the computer terminal 130 can consistently preprocess chest X-ray images.
  • the computer terminal 130 can extract the humerus region from the chest X-ray and input it into a previously learned learning model.
  • the computer terminal 130 can classify whether there is a bone tumor in the input humerus based on the value output by the learning model. User A can check the analysis results on the computer terminal 130.
  • the server 140 may receive a chest X-ray image from the x-ray equipment 110 or the EMR 120.
  • the server 140 can consistently preprocess chest X-ray images.
  • the server 140 may extract the humerus region from the chest X-ray and input it into a pre-trained learning model.
  • the server 140 can classify whether there is a bone tumor in the input humerus based on the value output by the learning model.
  • the server 140 may transmit the analysis results to user A's terminal.
  • the computer terminal 130 and/or server 140 may store the analysis results in the EMR 120.
  • Figure 2 is an example of a schematic learning process 200 of a learning model for classifying humeral bone tumors.
  • the learning model building process can be divided into a learning data building process (210) and a model learning process using the learning data (220).
  • the learning data construction process and model learning process can be performed on separate devices.
  • the learning device performs the learning model construction process.
  • Learning device refers to a computer device capable of preprocessing medical images and learning models.
  • the learning device constructs learning data by consistently preprocessing the collected chest X-ray images (210).
  • the learning device acquires chest X-ray images from an image database (DB).
  • the image DB stores images of normal people and images of patients with bone tumors.
  • the learning device extracts a region of interest by inputting the entire chest X-ray of a subject from the population into a previously learned segmentation model.
  • the segmentation model is a model trained in advance to distinguish the humerus region in chest X-rays.
  • the segmentation model may be a U-net-based model.
  • the learning device can generate a humerus mask based on the humerus region. Additionally, the learning device can generate a bone tumor mask that indicates the location of the bone tumor in the humerus from the image of the bone tumor patient.
  • the learning DB can store the humerus region and mask (humerus mask and bone tumor mask) extracted from the population's chest X-ray images.
  • the learning device can prepare learning data by processing images of normal people and images of tumor patients through the same process.
  • the learning device performs a process of building (learning) a learning model using the humerus region and mask (220).
  • the learning device repeatedly performs the learning process using population learning data.
  • the learning model extracts features from the input image and outputs an image that visualizes the humerus area or bone tumor area. Additionally, the learning model can calculate a probability value for the presence or absence of a bone tumor.
  • the learning device updates the parameters of the learning model by comparing the probability value output by the learning model with the label value of the corresponding image that is known in advance.
  • the learning model can binary classify subjects as normal or tumor.
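The update step described above, comparing the model's output probability with the known label and adjusting parameters, can be illustrated with a deliberately miniature stand-in. The logistic "model", feature vector, and learning rate below are all hypothetical; the disclosure's actual model is a CNN trained by backpropagation.

```python
import numpy as np

# Toy stand-in for the parameter update: a logistic classifier on
# flattened patch features, trained by gradient descent on binary
# cross-entropy (normal = 0, tumor = 1).
rng = np.random.default_rng(0)
w = rng.normal(size=16) * 0.01   # model parameters
x = rng.normal(size=16)          # features of one humerus patch
y = 1.0                          # known label: bone tumor

lr = 0.1
for _ in range(100):
    p = 1.0 / (1.0 + np.exp(-w @ x))   # predicted tumor probability
    w -= lr * (p - y) * x              # BCE gradient step toward the label
```

After repeated updates the predicted probability moves toward the label, which is the same comparison-and-update loop the learning device performs at full scale.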
  • Figure 3 is an example of a learning data construction process or data preprocessing process.
  • Figure 3(A) is an example of a process 300 for extracting the humerus region from a chest X-ray.
  • Figure 3(A) shows the process of constructing learning data.
  • Figure 3(A) also corresponds to the data preprocessing process for chest X-rays.
  • the data preprocessing process can be performed in the same way in the bone tumor inference process using a learning model. For convenience of explanation, it is explained that the data preprocessing process is performed in the learning device.
  • the learning device receives a chest X-ray image (310).
  • the chest X-ray image may be an image of a normal person or a patient with a bone tumor.
  • the learning device can classify the humerus region by inputting the input chest X-ray image into the learned segmentation model (320).
  • the chest X-ray image includes the left humerus and the right humerus.
  • the learning device sets a square area of a certain size for each of the left humerus and the right humerus based on the center point of the humerus (330).
  • the learning device crops a rectangular area (left humerus patch) of a certain size based on the midpoint of the left humerus and a rectangular area (right humerus patch) of a constant size based on the midpoint of the right humerus (340).
  • the image created by cutting is called a patch.
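The midpoint-and-crop step above can be sketched in a few lines, assuming the left and right humerus regions can be separated by the image midline; the 32-pixel patch size and the toy mask are illustrative only.

```python
import numpy as np

def mask_midpoint(mask):
    """Midpoint (row, col) of the non-zero pixels of a binary mask."""
    ys, xs = np.nonzero(mask)
    return int(ys.mean()), int(xs.mean())

def crop_square(img, center, size):
    """Cut a size x size square around `center`, zero-padding at edges."""
    half = size // 2
    padded = np.pad(img, half)
    cy, cx = center
    return padded[cy:cy + size, cx:cx + size]

# Toy chest X-ray with one "humerus" on each side of the midline.
xray = np.random.rand(100, 100)
mask = np.zeros((100, 100), dtype=int)
mask[20:60, 5:15] = 1    # humerus on the image-left side
mask[20:60, 85:95] = 1   # humerus on the image-right side

mid = mask.shape[1] // 2
right_patch = crop_square(xray, mask_midpoint(mask[:, :mid]), 32)
ys, xs = np.nonzero(mask[:, mid:])
left_center = (int(ys.mean()), int(xs.mean()) + mid)  # shift col back to full image
left_patch = crop_square(xray, left_center, 32)
```

Each crop is centered on its side's mask midpoint, so both patches come out the same fixed size regardless of where the humerus sits in the frame.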
  • the learning device can extract the region of interest (superior bone region) from the chest X-ray image.
  • the superior bone region includes the humerus.
  • the learning device extracts the left humerus region and the right humerus region from the chest X-ray image.
  • the learning device can prepare data for one direction in order to build a learning model based on one humerus region.
  • the learning device can construct learning data based on the left humerus.
  • Figure 3(B) is an example of constructing learning data based on the humerus region extracted from Figure 3(A).
  • the learning device uses the left humerus area in the chest X-ray image.
  • the learning device flips the right humerus region horizontally (left to right) to unify all images to the left humerus.
  • the learning device creates a mask (humerus segmentation mask) that extracts the humerus region, both from the original left humerus region and from the left-oriented region created by flipping the right humerus. Additionally, the learning device creates a mask (bone tumor mask) that extracts the location of a bone tumor of the humerus, based on the left humerus.
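Unifying both sides to the left humerus is a simple horizontal flip applied identically to the patch and its masks so that any tumor annotation stays aligned; `np.fliplr` is one way to do it (the 4x4 arrays are toy data):

```python
import numpy as np

right_patch = np.arange(16).reshape(4, 4)           # toy right-humerus patch
right_mask = (right_patch % 2 == 0).astype(int)     # toy annotation mask

# Flip the patch and its mask together so annotations stay aligned.
flipped_patch = np.fliplr(right_patch)
flipped_mask = np.fliplr(right_mask)
```

Flipping twice recovers the original, so the transform is lossless and can be undone when mapping results back onto the full chest X-ray.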
  • Figure 4 is an example of a specific learning process 400 of a learning model for classifying humeral bone tumors.
  • Figure 4 shows a CNN (Convolutional Neural Network) based model as an example.
  • the learning device receives an image of the humerus region among the learning data (410).
  • the learning device inputs the humerus region image into the deep learning model (420).
  • the CNN may be composed of multiple layers.
  • the convolution layer extracts features from the input image (creating a feature map).
  • the CNN does not use a typical fully connected (FC) layer at the end, but instead has a layer that performs global average pooling.
  • This model corresponds to a model that generates the so-called CAM (Class Activation Map).
  • the learning device can create a CAM by multiplying the extracted activation maps by the weights of the target class through matrix multiplication.
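The CAM construction just described (class weights times the final activation maps) can be written directly; the channel count and map size below are arbitrary:

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """CAM = weighted sum of the final conv feature maps.

    feature_maps: (C, H, W) activations before global average pooling.
    class_weights: (C,) weights of the target class in the final linear layer.
    """
    # Contract the channel axis: sum_c w[c] * feature_maps[c] -> (H, W)
    return np.tensordot(class_weights, feature_maps, axes=1)

feats = np.random.rand(8, 7, 7)   # 8 channels of a 7x7 activation map
w = np.random.rand(8)             # weights of the "bone tumor" class
cam = class_activation_map(feats, w)
```

The resulting (H, W) map is then upsampled to the patch size for visualization; that resizing step is omitted here.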
  • the learning device inputs the humerus region image into a deep learning model and generates a CAM for the image (420).
  • the learning device receives the humerus mask for the corresponding image (430).
  • the learning device receives a bone tumor mask for the image (430).
  • the learning device multiplies the CAM by the corresponding mask element-wise and creates the masked image.
  • the process of applying a mask to the CAM is as shown in Equation 1 below.
  • Equation 1: CAM_masked = CAM ⊙ M. Here the mask M is the humerus mask or, if the humerus image has a bone tumor, the bone tumor mask; ⊙ denotes the element-wise product.
  • the learning device performs learning using an L2 (least-squares error) loss function that drives all CAM pixel values outside the masked area (the humerus area or tumor area) toward 0 (440).
  • this loss, L_FPAR, processes the image so that all activation is removed from the area outside the mask in the CAM.
  • the last layer of the CNN can additionally use a classification loss L_CLS for classifying bone tumors, in addition to the loss function for the CAM.
  • L CLS corresponds to the loss function that guides the model to infer the correct answer.
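The two losses can be sketched under the behavior stated above: L_FPAR penalizes CAM activation outside the mask with an L2 term, and L_CLS is taken here as binary cross-entropy (the disclosure does not name the exact classification loss, so that choice, and the 1:1 weighting, are assumptions).

```python
import numpy as np

def fpar_loss(cam, mask):
    """L2 penalty pushing CAM pixels outside the mask toward 0 (L_FPAR)."""
    outside = cam * (1 - mask)          # keep only out-of-mask activation
    return float(np.mean(outside ** 2))

def cls_loss(p_tumor, label):
    """Binary cross-entropy for the bone-tumor classification head (L_CLS)."""
    eps = 1e-7
    p = np.clip(p_tumor, eps, 1 - eps)  # guard against log(0)
    return float(-(label * np.log(p) + (1 - label) * np.log(1 - p)))

cam = np.random.rand(7, 7)
mask = np.zeros((7, 7))
mask[2:5, 2:5] = 1                      # toy tumor mask
total = fpar_loss(cam, mask) + cls_loss(0.9, 1)  # assumed 1:1 weighting
```

Training on the sum pulls the CAM toward zero outside the mask while still driving the classification head toward the correct label.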
  • the deep learning model processes pixel values excluding the humerus area or tumor area as 0 according to the characteristics of the input humerus image, thereby creating a CAM in which only the humerus area or tumor area stands out.
  • the deep learning model also outputs values that classify the input humerus image as normal or bone tumor.
  • the segmentation model showed high performance, with an average IoU (Intersection over Union) and Dice coefficient of 0.98, as shown in Table 2 below.
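IoU and Dice, the two metrics reported for the segmentation model, are computed from binary masks as follows (the 2x2 masks are toy data):

```python
import numpy as np

def iou(a, b):
    """Intersection over Union of two binary masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

def dice(a, b):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum())

pred = np.array([[1, 1], [0, 0]])  # predicted humerus mask
gt   = np.array([[1, 0], [0, 0]])  # ground-truth mask
```

For this toy pair the overlap is one pixel, giving IoU = 0.5 and Dice = 2/3; Dice is always at least as large as IoU for non-empty masks.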
  • the researcher used 1,493 chest X-ray images with osteosarcoma and 1,500 chest X-ray images of normal people to learn the deep learning model described above.
  • the 1,493 images were images showing the location of osteosarcoma in 89 people diagnosed with osteosarcoma.
  • the researcher used 100 images of osteosarcoma patients and 119 images of normal subjects for model validation (holdout validation).
  • Comparison Model 1 is a model that classifies osteosarcoma by receiving only half of the image including the humerus from the chest X-ray.
  • Comparison Model 2 is a model that classifies osteosarcoma by receiving only the image (patch) of the humerus area from the chest X-ray.
  • the proposed model is a model that classifies osteosarcoma by applying an image and a mask cut from only the humerus region (the proposed model described in Figure 4).
  • the performance of the deep learning model is shown in Table 3 below.
  • the proposed model showed better performance than the comparison models on all metrics.
  • Figure 5 shows the results of evaluating the performance of a learning model for classifying humeral osteosarcoma.
  • Figure 5 shows the results of generating CAM from the comparative model and the proposed model.
  • the annotated image is a chest X-ray image with the tumor area annotated on it.
  • Figure 5(A) is the result of using EfficientNet, and Figure 5(B) is the result of using ShuffleNetV2. Looking at Figure 5, it can be seen that only the proposed model accurately captured the location of the osteosarcoma in its generated CAM.
  • FIG. 6 is an example of an analysis device for classifying humeral bone tumors.
  • the analysis device 500 corresponds to the above-described analysis devices (130 and 140 in FIG. 1).
  • the analysis device 500 may be physically implemented in various forms.
  • the analysis device 500 may take the form of a computer device such as a PC, a network server, or a chipset dedicated to data processing.
  • the analysis device 500 may include a storage device 510, a memory 520, an arithmetic device 530, an interface device 540, a communication device 550, and an output device 560.
  • the storage device 510 can store chest X-ray images generated by x-ray equipment.
  • the storage device 510 may store a code or program for classifying bone tumors by analyzing chest X-ray images.
  • the storage device 510 may store a segmentation model that distinguishes the humerus region, which is the region of interest, in a chest X-ray.
  • the storage device 510 can store a deep learning model that receives a cropped humerus image and classifies whether a bone tumor exists.
  • the memory 520 can store data and information generated while the analysis device analyzes the chest X-ray image.
  • the interface device 540 is a device that receives certain commands and data from the outside.
  • the interface device 540 can receive a chest X-ray image from a physically connected input device or an external storage device.
  • the interface device 540 may analyze the chest X-ray image and transmit the results of classifying the bone tumor to an external object.
  • the interface device 540 may receive data or information transmitted via the communication device 550 below.
  • the communication device 550 refers to a configuration that receives and transmits certain information through a wired or wireless network.
  • the communication device 550 can receive a chest X-ray image from an external object.
  • the communication device 550 may analyze the chest X-ray image and transmit the results of classifying the bone tumor to an external object such as a user terminal.
  • the output device 560 is a device that outputs certain information.
  • the output device 560 can output an interface required for the data processing process, an input chest X-ray image, a classification result, and an image in which the bone tumor location is visualized.
  • the calculation device 530 may classify the humerus region by inputting the input chest X-ray image into a segmentation model.
  • the calculation device 530 may generate a humerus patch by cutting only an area of a certain size based on the midpoint of the humerus. For convenience of explanation, we assume that the deep learning model was learned based on the left humerus image. At this time, the calculation device 530 can create a humerus patch by leaving the left humerus intact and flipping the right humerus left and right in the chest X-ray.
  • the calculation device 530 can input the left humerus patch or the left and right inverted right humerus patch into a deep learning model.
  • the deep learning model generates CAM as described above.
  • the deep learning model extracts features from the input humerus patch and creates a CAM so that the pixels of the CAM excluding the humerus or bone tumor area are 0.
  • the deep learning model can also output the results of classifying bone tumors based on the humerus patch.
  • the computing device 530 can control the CAM output from the deep learning model to be output to the output device 560. Additionally, the calculation device 530 may output information on whether the subject is normal or has a bone tumor to the output device 560 based on the probability value for the presence or absence of a bone tumor.
  • the computing device 530 may be a device such as a processor that processes data and performs certain operations, an AP, or a chip with an embedded program.
  • the medical image analysis method or bone tumor classification method as described above may be implemented as a program (or application) including an executable algorithm that can be executed on a computer.
  • the program may be stored and provided in a temporary or non-transitory computer readable medium.
  • a non-transitory readable medium refers to a medium that stores data semi-permanently and can be read by a device, rather than a medium that stores data for a short period of time, such as registers, caches, and memories.
  • the various applications or programs described above may be stored and provided in a non-transitory readable medium such as a CD, DVD, hard disk, Blu-ray disc, USB drive, memory card, ROM (read-only memory), PROM (programmable read-only memory), EPROM (erasable PROM), or EEPROM (electrically erasable PROM).
  • temporarily readable media refer to various types of RAM, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synclink DRAM (SLDRAM), and direct Rambus RAM (DRRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biomedical Technology (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Human Computer Interaction (AREA)
  • Physiology (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

Disclosed is a method of visualizing a bone tumor in the humerus using a chest X-ray image, comprising the following steps: an analysis device receives a chest X-ray image of a subject; the analysis device inputs the chest X-ray image into a segmentation model trained to distinguish a humerus region and generates a humerus patch of a certain size; and the analysis device inputs the humerus patch into a pre-trained deep learning model and outputs a class activation map (CAM) for the humerus patch and an indication of whether a bone tumor is present.
PCT/KR2023/019940 2022-12-06 2023-12-06 Method and analysis device for visualizing a bone tumor in the humerus using a chest X-ray image WO2024123057A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2022-0168582 2022-12-06
KR1020220168582A KR102686815B1 (ko) 2022-12-06 2022-12-06 Method and analysis device for visualizing bone tumors of the humerus using chest X-ray images

Publications (1)

Publication Number Publication Date
WO2024123057A1 true WO2024123057A1 (fr) 2024-06-13

Family

ID=91379716

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2023/019940 WO2024123057A1 (fr) 2022-12-06 2023-12-06 Method and analysis device for visualizing a bone tumor in the humerus using a chest X-ray image

Country Status (2)

Country Link
KR (1) KR102686815B1 (fr)
WO (1) WO2024123057A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101889725B1 * 2018-07-04 2018-08-20 주식회사 루닛 Method and apparatus for diagnosing malignant tumors
JP2019154943A * 2018-03-15 2019-09-19 ライフサイエンスコンピューティング株式会社 Lesion detection method using artificial intelligence, and system therefor
KR20200092803A * 2019-01-25 2020-08-04 주식회사 딥바이오 Method for annotating disease onset regions using semi-supervised learning, and diagnosis system for performing the same
US20200337658A1 (en) * 2019-04-24 2020-10-29 Progenics Pharmaceuticals, Inc. Systems and methods for automated and interactive analysis of bone scan images for detection of metastases
KR102218900B1 * 2020-04-17 2021-02-23 주식회사 딥노이드 Disease diagnosis apparatus and diagnosis method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102202361B1 * 2019-01-08 2021-01-14 전남대학교산학협력단 Bone tumor detection system
KR102291854B1 * 2019-07-10 2021-08-23 한국과학기술연구원 Apparatus for automatic diagnosis of shoulder disease using 3D deep learning, method of providing information for shoulder disease diagnosis, and electronic recording medium storing a computer program that performs the method


Also Published As

Publication number Publication date
KR20240084113A (ko) 2024-06-13
KR102686815B1 (ko) 2024-07-19

Similar Documents

Publication Publication Date Title
Hauser et al. Explainable artificial intelligence in skin cancer recognition: A systematic review
Demir DeepCoroNet: A deep LSTM approach for automated detection of COVID-19 cases from chest X-ray images
Esteva et al. Dermatologist-level classification of skin cancer with deep neural networks
WO2020207377A1 (fr) Method, device and system for image recognition model training and image recognition
CN110390674B (zh) 图像处理方法、装置、存储介质、设备以及系统
CN111292839B (zh) 图像处理方法、装置、计算机设备和存储介质
US20230052133A1 (en) Medical image processing method and apparatus, device, storage medium, and product
Nahiduzzaman et al. ChestX-Ray6: Prediction of multiple diseases including COVID-19 from chest X-ray images using convolutional neural network
WO2023108418A1 (fr) Method for constructing a brain atlas and detecting a neural circuit, and related product
Causey et al. Spatial pyramid pooling with 3D convolution improves lung cancer detection
CN110427994A (zh) 消化道内镜图像处理方法、装置、存储介质、设备及系统
Araújo et al. Machine learning concepts applied to oral pathology and oral medicine: a convolutional neural networks' approach
Asswin et al. Transfer learning approach for pediatric pneumonia diagnosis using channel attention deep CNN architectures
CN116848588A (zh) 医学图像中的健康状况特征的自动标注
Elayaraja et al. An efficient approach for detection and classification of cancer regions in cervical images using optimization based CNN classification approach
Kumar et al. Medical image classification and manifold disease identification through convolutional neural networks: a research perspective
Aljawarneh et al. Pneumonia detection using enhanced convolutional neural network model on chest x-ray images
WO2024123057A1 (fr) Method and analysis device for visualizing a bone tumor in the humerus using a chest X-ray image
Athina et al. Multi-classification Network for Detecting Skin Diseases using Deep Learning and XAI
Likhon et al. SkinMultiNet: advancements in skin cancer prediction using deep learning with web interface
Panda et al. Application of artificial intelligence in medical imaging
Nurtiyasari et al. Covid-19 chest x-ray classification using convolutional neural network architectures
Zhou et al. A novel approach to form Normal Distribution of Medical Image Segmentation based on multiple doctors’ annotations
Fariza et al. Mobile application for early screening of skin cancer using dermoscopy image data based on convolutional neural network
KR20220063052A (ko) 암 환자 치료에 대한 반응성을 예측하기 위한 방법 및 시스템

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23901075

Country of ref document: EP

Kind code of ref document: A1