WO2023243775A1 - Deep learning-based brain lesion detection system using slice image segmentation - Google Patents


Info

Publication number
WO2023243775A1
WO2023243775A1 (PCT/KR2022/014959)
Authority
WO
WIPO (PCT)
Prior art keywords
brain
brain lesion
image
volume
deep learning
Prior art date
Application number
PCT/KR2022/014959
Other languages
French (fr)
Korean (ko)
Inventor
김성헌
김윤
김철호
장재원
이승아
최현수
Original Assignee
주식회사 지오비전
Priority date
Filing date
Publication date
Application filed by 주식회사 지오비전 (ZIOVISION Co., Ltd.)
Publication of WO2023243775A1 publication Critical patent/WO2023243775A1/en

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/0033Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • A61B5/004Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room adapted for image acquisition of a particular organ or body part
    • A61B5/0042Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room adapted for image acquisition of a particular organ or body part for the brain
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/05Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves 
    • A61B5/055Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves  involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/40Detecting, measuring or recording for evaluating the nervous system
    • A61B5/4058Detecting, measuring or recording for evaluating the nervous system for evaluating the central nervous system
    • A61B5/4064Evaluating the brain
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain

Definitions

  • the present invention relates to a deep learning-based brain lesion detection system using slice image segmentation.
  • Acute Ischemic Stroke refers to necrosis of brain tissue (cerebral infarction) caused by insufficient supply of blood and oxygen to the brain due to blocked arteries.
  • Such acute ischemic stroke is a major cause of disability, and it is very important to quickly and accurately determine treatment strategies to prevent disability.
  • Recent clinical trials showed the importance of distinguishing lesions from normal brain tissue in brain images and measuring lesion volume: EXTEND (Extending the time for Thrombolysis in Emergency Neurological Deficits); DEFUSE 3 (Endovascular therapy following imaging evaluation for ischemic stroke 3), thrombectomy in strokes within 6 to 16 hours of onset selected by perfusion imaging; and DAWN (Diffusion Weighted Image or CT assessment with clinical mismatch in the triage of wake-up and late presenting strokes undergoing neurointervention with Trevo), thrombectomy in strokes within 6 to 24 hours of onset showing a mismatch between neurological deficits and cerebral infarction.
  • Diffusion-weighted imaging (DWI) is used to determine in which part of the brain an infarction has occurred.
  • Diffusion-weighted imaging is a type of magnetic resonance imaging (MRI) based on measuring the random Brownian motion of water molecules within tissue voxels.
  • tissues with cellular expansion exhibit low diffusion coefficients, which can be used to detect brain tissue where infarction has occurred or will occur.
  • One object of the present invention is to provide a brain lesion detection system that can detect brain lesion occurrence area and volume from diffusion-weighted images of the brain using a deep learning module.
  • the brain lesion detection system detects the brain lesion area from a volumetric image of the brain using a deep learning module, and converts the volumetric image into N slice images (where N is a natural number of 2 or more). It is characterized by dividing it and using it as an input image for the deep learning module.
  • the volumetric image of the brain may be a diffusion weighted image.
  • the deep learning module includes an encoder and a decoder that are symmetrical to each other and have a skip connection; the encoder determines the context by downsampling the spatial dimension, and the decoder increases the spatial dimension by upsampling the encoder output image. The encoder includes 2D convolution, ReLU (Rectified Linear Unit), batch normalization, and a 2 x 2 max pooling layer, and the decoder includes 2D convolution, ReLU, batch normalization, and a 2 x 2 up-convolution.
  • it further includes a volume estimation module for estimating the volume of the brain lesion area detected by the deep learning module. The volume estimation module obtains scale coefficients for the width and height of the original slice image and, together with the slice thickness, determines the original volume of one pixel in the slice image; it then counts the number of brain-lesion pixels in all slices and estimates the volume of the brain lesion area by multiplying that pixel count by the per-pixel volume.
  • Acute ischemic stroke lesions are relatively small compared to the size of the brain and are sparsely located. Therefore, when detecting brain lesions with a conventional deep learning module to confirm acute ischemic stroke, problems of class imbalance and data shortage occur.
  • the brain lesion detection system divides the diffusion-weighted image of the brain, which is a volumetric image, into slice images and uses them as input images to increase the amount of data and at the same time view overall brain cross-sectional information. It can solve class imbalance and data shortage problems.
  • the brain lesion detection system according to an embodiment of the present invention has excellent detection performance and can estimate the volume of the brain lesion using pixels of the predicted mask.
  • Figure 1 is a diffusion weighted image of the brain, illustrating (a) a case where the brain lesion area is large and (b) a case where the brain lesion area is small.
  • Figure 2 is a schematic flow chart of a method for detecting a brain lesion area and estimating its volume using a brain lesion detection system according to an embodiment of the present invention.
  • Figure 3 is an example of learning data augmentation for training the deep learning module of the brain lesion detection system according to an embodiment of the present invention.
  • Figure 4 is a reference diagram for explaining a method of estimating the volume of a brain lesion area detected using a brain lesion detection system according to an embodiment of the present invention.
  • Figure 5 is an example image of a small AIS, showing (a) internal verification and (b) external verification.
  • Figure 6 shows visual results for the brain lesion area: (a) expert annotation, (b) detection result of the example, and (c) detection result of the comparative example, for (1) large AIS, (2) small AIS, and (3) sparsely located small AIS (TP in red, FP in green, FN in yellow).
  • module used in this document may include a unit implemented in hardware, software, or firmware, and may be used interchangeably with terms such as logic, logic block, component, or circuit, for example.
  • a module may be an integrated part or a minimum unit of the parts or a part thereof that performs one or more functions.
  • modules use computing devices such as CPUs, APs, etc. to perform tasks such as moving, storing, and converting data.
  • a “module” or “node” can be implemented as a device such as a server, PC, tablet PC, smartphone, etc.
  • the present invention relates to a brain lesion detection system that can detect a brain lesion area from a volumetric image of the brain using a deep learning module and simultaneously estimate its volume.
  • a volumetric image of the brain may be a diffusion-weighted image, but the present invention is not limited thereto.
  • For the treatment of acute ischemic stroke, it is very important to estimate the volume of the brain lesion area from volumetric images of the brain. Because the lesion area is relatively small and sparsely located compared to the size of the brain, a method in which a deep learning module detects the lesion area and estimates its volume directly from the volumetric image of the brain (hereinafter the "direct volume estimation method") suffers from serious class imbalance: a small extracted patch often does not include the lesion area at all.
  • Instead of the direct volume estimation method, the brain lesion detection system of the present invention converts the volumetric image of the brain into slice images, from which the deep learning module detects the brain lesion area and estimates its volume (hereinafter the "indirect volume estimation method"); this resolves class imbalance and data shortage, resulting in high detection performance and increased accuracy of volume estimation.
  • Diffusion-weighted imaging (DWI) for acute ischemic stroke (AIS) was acquired at Hallym University Chuncheon Sacred Heart Hospital (HUCSH) and Kangwon National University Hospital (KNUH).
  • DWI images from healthy participants as controls were obtained.
  • in healthy participants, DWI at a b-value of 1,000 s/mm² may show no lesions, while some old lesions may be visible at a b-value of 0 s/mm².
  • This study was approved by the institutional review boards of HUCSH and KNUH, which waived the requirement for informed consent (approval numbers HUCSHH 2021-06-013 and KNUH-A-2021-021-001).
  • MRI to acquire diffusion-weighted imaging was performed on a variety of machines, including 1.5T scanners (Siemens Healthineers, Erlangen, Germany) and 3T scanners (Ingenia CX, Philips Healthcare, Best, Netherlands).
  • the parameters of the DWI sequence are as follows: repetition time, 3,000-8,000 ms; echo time, 56-103 ms; flip angle, 90°; matrix, 256 x 256 to 512 x 512; field of view, 220 x 220 to 256 x 256 mm; number of excitations, 1-5; number of slices, 20-50; slice thickness, 3 mm; b-value, 1,000 s/mm².
  • FIG. 2 is a schematic flow chart of a method for detecting a brain lesion area and estimating its volume using the brain lesion detection system according to an embodiment of the present invention.
  • the brain lesion detection system of the present invention first converts a volumetric image (eg, DWI) of the brain into a slice image.
  • a volumetric image of the brain is cut into sections at set intervals (or thicknesses) to create N slice images (where N is a natural number of 2 or more).
  • the brain lesion detection system (indirect volume estimation method) of the present invention uses the generated slice image as an input image to the deep learning module.
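The slicing step above can be sketched as follows. This is an illustrative sketch only: the array layout and the choice of axis 0 as the slice axis are assumptions, and a real DWI series would be read from DICOM files rather than constructed in memory.

```python
import numpy as np

def volume_to_slices(volume):
    """Split a 3D brain volume of shape (N, H, W) into N 2D slice images.

    The slice axis (axis 0) is an assumed convention; each returned array
    is one cross-sectional slice image used as model input.
    """
    return [volume[i] for i in range(volume.shape[0])]

# Toy example: a 4-slice volume of 8 x 8 images.
vol = np.zeros((4, 8, 8), dtype=np.float32)
slices = volume_to_slices(vol)
print(len(slices), slices[0].shape)  # 4 (8, 8)
```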
  • the brain lesion detection system of the present invention performs data augmentation as shown in FIG. 3 to solve the problem of insufficient data. Specifically, random horizontal and vertical flipping and random 90-degree rotation were applied to the slice images. Furthermore, the image size was reduced (e.g., 256
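A minimal sketch of the augmentation just described (random horizontal/vertical flips and random 90-degree rotation). The 0.5 flip probabilities and the fixed random seed are illustrative assumptions, not values stated in the patent:

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed for reproducibility (assumed)

def augment(img):
    """Randomly flip a slice image horizontally and vertically, then
    rotate it by a random multiple of 90 degrees."""
    if rng.random() < 0.5:
        img = np.fliplr(img)
    if rng.random() < 0.5:
        img = np.flipud(img)
    k = int(rng.integers(0, 4))  # 0, 90, 180 or 270 degrees
    return np.rot90(img, k)

img = np.arange(16.0).reshape(4, 4)
aug = augment(img)
```

Flips and 90-degree rotations preserve pixel values, so lesion masks can be transformed identically without interpolation.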
  • a step of detecting the brain lesion area by segmenting the brain lesion area using a deep learning module is performed.
  • the deep learning module used in the present invention is based on 3D U-Net, a CNN (Convolutional Neural Networks)-based architecture, and is characterized by using 2D convolution instead of 3D convolution.
  • the deep learning module used in the present invention includes an encoder and a decoder that are symmetrical to each other and have a skip connection, the encoder determines the context by downsampling the spatial dimension, and the decoder performs upsampling on the encoder output image.
  • the encoder includes 2D convolution, ReLU (Rectified Linear Unit), batch normalization, and a 2 x 2 max pooling layer, and the decoder includes 2D convolution, ReLU, batch normalization, and a 2 x 2 up-convolution.
  • Convolution, ReLU, batch normalization, and the like are well known, and detailed explanations are omitted here.
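The encoder/decoder structure described above can be sketched in PyTorch, the framework the embodiment reports using. The channel counts, 3 x 3 convolution kernels, and single-level depth are illustrative assumptions; only the 2 x 2 max pooling, 2 x 2 up-convolution, and skip connection follow the text:

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Two 2D convolutions, each followed by batch normalization and ReLU."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.block(x)

class TinyUNet2D(nn.Module):
    """One encoder level, a bottleneck, and one decoder level with a skip connection."""
    def __init__(self, in_ch=3, base=16):
        super().__init__()
        self.enc = ConvBlock(in_ch, base)
        self.pool = nn.MaxPool2d(2)           # 2 x 2 max pooling (downsampling)
        self.mid = ConvBlock(base, base * 2)
        self.up = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)  # 2 x 2 up-convolution
        self.dec = ConvBlock(base * 2, base)  # skip connection doubles the channels
        self.out = nn.Conv2d(base, 1, kernel_size=1)
    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.pool(e))
        d = self.dec(torch.cat([self.up(m), e], dim=1))  # concatenate skip features
        return torch.sigmoid(self.out(d))     # per-pixel lesion probability

net = TinyUNet2D()
mask = net(torch.zeros(1, 3, 64, 64))  # (batch, 3 channels, H, W) -> (batch, 1, H, W)
```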
  • the deep learning module of the present invention converts the volumetric image of the brain into slice images and takes them as input, which increases the amount of data and makes overall brain cross-sectional information available, thereby solving the problems caused by class imbalance and lack of data.
  • the deep learning module of the present invention stacks each slice image together with its two adjacent slice images to form a three-channel input.
  • by taking the adjacent slice images as additional channels, the deep learning module of the present invention receives additional information, such as context information and volume information, from neighboring slices.
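One plausible reading of the three-channel input is sketched below: each slice is stacked with its two neighbors. The edge handling (repeating the boundary slice) is an assumption the patent does not specify:

```python
import numpy as np

def stack_neighbors(slices, i):
    """Form a 3-channel input from slice i and its two adjacent slices.

    Edge slices reuse the boundary slice so every index yields exactly
    three channels (assumed edge handling).
    """
    n = len(slices)
    prev_s = slices[max(i - 1, 0)]
    next_s = slices[min(i + 1, n - 1)]
    return np.stack([prev_s, slices[i], next_s], axis=0)  # (3, H, W)

# Toy stack of 5 constant-valued 8 x 8 slices (slice k filled with value k).
slices = [np.full((8, 8), float(k)) for k in range(5)]
x = stack_neighbors(slices, 2)  # channels hold slices 1, 2 and 3
```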
  • the volume of the brain lesion area is estimated using the result of detecting the area separated as the brain lesion area from the slice image input from the deep learning module.
  • Figure 4 is a reference diagram for explaining a method of estimating the volume of a brain lesion area detected using a brain lesion detection system according to an embodiment of the present invention.
  • Estimating the volume of the detected brain lesion area is performed by the volume estimation module.
  • the slice image input to the deep learning module may be resized for performance. Therefore, to estimate the volume of the lesion area, the volume estimation module obtains the scale coefficients sf w and sf h for the width and height of the original slice image, and obtains the slice spacing (or thickness) information si from the header of the image file (the DICOM header file). From the scale coefficients and the slice spacing, the volume estimation module calculates the volume (V) of one pixel.
  • the volume estimation module calculates the total number of pixels (P) corresponding to the brain lesion area by accumulating the number of pixels (p) corresponding to the brain lesion area in each slice image output from the deep learning module.
  • H, W, and N are the height, width, and number of slice images, respectively.
  • the volume estimation module estimates the total volume of the brain lesion area by multiplying the total number of pixels (P) corresponding to the brain lesion area by the pixel volume (V).
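The volume estimation steps above can be sketched as follows. The function and variable names are illustrative; the 220 mm field of view and 3 mm slice thickness in the example are taken from the DWI parameter ranges reported earlier, not from a specific patient:

```python
import numpy as np

def estimate_lesion_volume(masks, orig_w_mm, orig_h_mm, spacing_mm):
    """Estimate lesion volume from per-slice binary masks.

    masks: list of (H, W) binary arrays output by the segmentation model.
    orig_w_mm / orig_h_mm: physical width and height covered by one slice
    (from the DICOM header); spacing_mm: slice spacing or thickness.
    The scale coefficients map one (possibly resized) mask pixel back to mm.
    """
    h, w = masks[0].shape
    sf_w = orig_w_mm / w               # mm per pixel, width
    sf_h = orig_h_mm / h               # mm per pixel, height
    v = sf_w * sf_h * spacing_mm       # volume V of one pixel, in mm^3
    p = sum(int(m.sum()) for m in masks)  # lesion pixel count P over all slices
    return p * v                       # total lesion volume in mm^3

# 2 slices with 100 lesion pixels each; 220 mm field of view at 256 px, 3 mm slices.
m = np.zeros((256, 256))
m[:10, :10] = 1
vol_mm3 = estimate_lesion_volume([m, m], 220.0, 220.0, 3.0)
```

Dividing the result by 1,000 converts mm³ to cc, the unit used in the MAE results below.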
  • the deep learning module of the present invention is trained using a dice loss function, based on knowledge from diffusion-weighted imaging (DWI) of acute ischemic stroke.
  • y true is the label for the pixel value
  • y pred is the prediction probability for the pixel value
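A standard Dice loss consistent with the y true / y pred definitions above can be sketched as follows; the smoothing epsilon is an assumed implementation detail, not stated in the text:

```python
import numpy as np

def dice_loss(y_true, y_pred, eps=1e-6):
    """Dice loss: 1 - 2|A∩B| / (|A| + |B|).

    y_true: per-pixel labels (0 or 1); y_pred: predicted per-pixel
    probabilities. eps avoids division by zero for empty masks (assumed).
    """
    inter = np.sum(y_true * y_pred)
    return 1.0 - (2.0 * inter + eps) / (np.sum(y_true) + np.sum(y_pred) + eps)

y_true = np.array([1.0, 1.0, 0.0, 0.0])
perfect = dice_loss(y_true, y_true)        # near 0: perfect overlap
miss = dice_loss(y_true, 1.0 - y_true)     # near 1: no overlap
```

Unlike pixelwise cross-entropy, Dice loss is driven by overlap with the (small) lesion region, which is why it is often preferred under class imbalance.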
  • 3D U-Net is a CNN (Convolutional Neural Network)-based architecture for deep learning modules.
  • 3D U-Net has a U-shaped structure and uses a symmetric encoder and decoder with skip connections.
  • the encoder determines the context by downsampling the spatial dimension.
  • the encoder consists of three components: two 3D convolutions, each with a Rectified Linear Unit (ReLU) and batch normalization, and a 2 x 2 max pooling layer.
  • the decoder performs upsampling on the encoder output image to increase the spatial dimension.
  • the decoder consists of three components: two 3D convolutions, each with ReLU and batch normalization, and a 2 x 2 up-convolution.
  • Symmetric blocks are used with skip concatenation of the corresponding encoding blocks to make up for lost context.
  • A skip connection adds a layer's input to its output to compensate for information lost during the convolution process, which helps improve recovery of detail and supports semantic segmentation. Because training 3D segmentation models requires long computation time and large memory, patch-based 3D segmentation was performed to alleviate this problem.
  • TP (true positive) is the number of pixels correctly predicted as the brain lesion area (AIS).
  • FP (false positive) is the number of pixels incorrectly predicted as the brain lesion area (AIS).
  • FN (false negative) is the number of pixels incorrectly predicted as not being the brain lesion area (AIS).
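From these pixel counts, the F1-score (equivalent to the Dice coefficient) and the Jaccard index reported below can be computed. This is the standard formulation, sketched with a toy example:

```python
import numpy as np

def segmentation_scores(pred, gt):
    """Pixelwise TP/FP/FN, F1-score (Dice) and Jaccard index between a
    predicted binary mask and a labeled (ground-truth) binary mask.

    Assumes at least one positive pixel exists in pred or gt.
    """
    tp = int(np.sum((pred == 1) & (gt == 1)))
    fp = int(np.sum((pred == 1) & (gt == 0)))
    fn = int(np.sum((pred == 0) & (gt == 1)))
    f1 = 2 * tp / (2 * tp + fp + fn)
    jaccard = tp / (tp + fp + fn)
    return tp, fp, fn, f1, jaccard

pred = np.array([[1, 1, 0], [0, 0, 0]])
gt   = np.array([[1, 0, 0], [1, 0, 0]])
tp, fp, fn, f1, jac = segmentation_scores(pred, gt)
```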
  • volume similarity (VS) (see Equation 8) and mean absolute error (MAE) (see Equation 9) were adopted to evaluate the volume estimation performance of the brain lesion area.
  • VS indicates the similarity between the volume of the predicted brain lesion area (AIS) and the volume of the labeled brain lesion area (AIS).
  • MAE is the average absolute value of the error, i.e., the difference between the actual value and the predicted value; it estimates the error between the predicted brain lesion area (AIS) and the labeled brain lesion area (AIS).
  • measurement is the area of the predicted brain lesion area (AIS) (TP), and groundtruth is the area of the labeled brain lesion area (AIS).
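The patent's Equations 8 and 9 are not reproduced in this text, so the sketch below uses commonly used definitions of volume similarity and mean absolute error as an assumption:

```python
def volume_similarity(v_pred, v_true):
    """Volume similarity, assumed here as 1 - |Vp - Vt| / (Vp + Vt),
    a common definition; the patent's Equation 8 may differ."""
    return 1.0 - abs(v_pred - v_true) / (v_pred + v_true)

def mean_absolute_error(preds, trues):
    """Mean absolute error between predicted and labeled lesion volumes (e.g. in cc)."""
    return sum(abs(p - t) for p, t in zip(preds, trues)) / len(preds)

vs = volume_similarity(9.0, 10.0)                      # close volumes -> VS near 1
mae = mean_absolute_error([9.0, 12.0], [10.0, 10.0])   # average of 1.0 and 2.0
```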
  • Both the example and the comparative example were implemented using the PyTorch framework 1.10 in Python 3.8.10 on Ubuntu with four NVIDIA RTX 3090 GPUs, and were included as the core algorithm of an AI-based medical solution (ZioMed; ZIOVISION, Chuncheon, Korea).
  • Internal validation was performed using data obtained from HUCSH, and the dataset was randomly divided into training, validation, and test sets in a ratio of 8:1:1.
  • Data obtained from KNUH were used for external validation. Because loading images and spacing information from DICOM files takes a long time, the images and spacing information extracted from the DICOM files were saved in HDF5 format.
  • In internal validation, the F1-score of the example (indirect) using a model pre-trained on brain glioma was 76.02%. This was significantly higher than the F1-scores of 55.71% and 52.48% of the comparative examples (direct) using models pre-trained for automatic implants and tumors, respectively. Meanwhile, the F1-score of the example (indirect) trained from scratch was 73.09%, which was higher than the 54.76% F1-score of the comparative example (direct) trained from scratch, and also higher than those of the pre-trained comparative examples.
  • In external validation, the F1-score of the example (indirect) using the pre-trained brain glioma model was 77.23%, again superior to the comparative examples, as with the internal validation data. The same holds for the example (indirect) trained from scratch.
  • the Jaccard index of the internal and external validation of the example (indirect) using the pre-trained indirect model was 62.12% and 63.82%, respectively, which was higher than that of other models.
  • Table 4 shows the results of estimating the volume of the brain lesion area (AIS) detected for internal and external verification.
  • the VS and MAE, which are the volume estimation results of the example (indirect), were 93.25% and 0.797 cc in internal validation and 89.17% and 2.468 cc in external validation, respectively, significantly better than those of the comparative example.
  • the best mode embodiment was validated against a normal group with no AIS at all to check for FP errors.
  • the MAE values were 0.028 cc and 0.009 cc, respectively, showing very small errors.
  • Figure 5 shows image examples of small AIS for (a) internal validation and (b) external validation. Figure 6 shows visual results for (a) expert annotation, (b) the detection result of the example, and (c) the detection result of the comparative example, for (1) large AIS, (2) small AIS, and (3) sparsely located small AIS (red for TP, green for FP, yellow for FN).
  • the embodiment has higher accuracy than the comparative example.
  • Because the comparative example derives the volume of the brain lesion area from the volumetric (3D) image of the brain without additional processing, it was expected to be slower but more accurate than the example, which uses slice images (2D) cut as cross-sections from the volumetric image; however, this was not the case.
  • the Example had higher performance in detecting the brain lesion area than the Comparative Example, and at the same time, the accuracy of estimating the volume of the brain lesion area was also high.
  • the brain lesion detection system as described above may be implemented as a program (or application) including an executable algorithm that can be executed on a computer.
  • the program may be stored and provided on a non-transitory computer readable medium.
  • a non-transitory readable medium refers to a medium that stores data semi-permanently and can be read by a device, rather than a medium that stores data for a short period of time, such as registers, caches, and memories.
  • the various applications or programs described above may be stored and provided in non-transitory readable media such as CD, DVD, hard disk, Blu-ray disk, USB, memory card, ROM, etc.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Theoretical Computer Science (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • General Physics & Mathematics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Veterinary Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Neurology (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Databases & Information Systems (AREA)
  • Psychology (AREA)
  • Quality & Reliability (AREA)
  • Physiology (AREA)
  • Neurosurgery (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The present invention relates to a brain lesion detection system characterized in that a brain lesion area is detected from a volumetric image of a brain using a deep learning module, wherein the volumetric image is divided into N slice images (where N is a natural number equal to or greater than 2) and used as input images for the deep learning module.

Description

슬라이스 이미지 분할을 사용한 딥러닝 기반 뇌병변 검출 시스템Deep learning-based brain lesion detection system using slice image segmentation
본 발명은 슬라이스 이미지 분할을 사용한 딥러닝 기반 뇌병변 검출 시스템에 관한 것이다. The present invention relates to a deep learning-based brain lesion detection system using slice image segmentation.
급성 허혈성 뇌졸중(AIS: Acute Ischemic Stroke)은 동맥이 차단되어 뇌로 혈액 및 산소가 충분하게 공급되지 않아 발생하는 뇌조직의 괴사(뇌경색)을 의미한다. Acute Ischemic Stroke (AIS) refers to necrosis of brain tissue (cerebral infarction) caused by insufficient supply of blood and oxygen to the brain due to blocked arteries.
이와 같은 급성 허혈성 뇌졸중은 장애의 주된 원인이 되며, 장애를 방지하기 위해서는 치료 전략을 빠르고 정확하게 결정하는 것이 매우 중요하다. Such acute ischemic stroke is a major cause of disability, and it is very important to quickly and accurately determine treatment strategies to prevent disability.
Extending the time for Thrombolysis in Emergency Neurological Deficits(EXTEND), 관류 영상으로 선정한 발병 6시간에서 16시간 이내의 뇌졸중에서 혈전제거술(Endovascular therapy following imaging evaluation for ischemic stroke 3, DEFUSE 3), 신경학적 결손과 뇌경색 사이의 불일치를 보이는 발병 6시간에서 24시간 이내의 뇌졸중에서 혈전제거술(Diffusion Weighted Image or CT assessment with clinical mismatch in the triage of wake-up and late presenting strokes undergoing neurointervention with Trevo, DAWN) 등의 최근 임상 시험에서 뇌 영상으로부터 정상 뇌 조직과 병변을 구분하여 병변의 부피를 측정하는 것의 중요성을 보였다. 뇌에 어떤 부위에 경색이 발생했는지 여부는 확산 가중 영상 (DWI: Diffusion-weighted imaging)을 이용한다.Extending the time for Thrombolysis in Emergency Neurological Deficits (EXTEND), thrombectomy in strokes within 6 to 16 hours of onset selected by perfusion imaging (Endovascular therapy following imaging evaluation for ischemic stroke 3, DEFUSE 3), between neurological deficits and cerebral infarction In recent clinical trials, such as thrombectomy (Diffusion Weighted Image or CT assessment with clinical mismatch in the triage of wake-up and late presenting strokes undergoing neurointervention with Trevo, DAWN) in strokes occurring within 6 to 24 hours of onset showing a discrepancy in The importance of distinguishing lesions from normal brain tissue from brain images and measuring lesion volumes was shown. Diffusion-weighted imaging (DWI) is used to determine where an infarction has occurred in the brain.
확산 가중 영상은 조직 복셀 내에서 물 분자의 무작위 브라운 운동을 측정하는 것을 기반으로 하는 MRI(magnetic resonance imaging)의 한 종류이다. Diffusion-weighted imaging is a type of magnetic resonance imaging (MRI) based on measuring the random Brownian motion of water molecules within tissue voxels.
확산 가중 영상에서 세포 팽창이 있는 조직들은 낮은 확산 계수를 나타내며, 이를 통해 경색이 발생하거나 발생할 뇌조직을 검출할 수 있다. In diffusion-weighted imaging, tissues with cellular expansion exhibit low diffusion coefficients, which can be used to detect brain tissue where infarction has occurred or will occur.
확산 가중 영상에서 경색이 발생한 병변을 정확하고 빠르게 분할하고, 투입할 약물의 양을 결정하기 위해 경색이 발생한 병변의 부피를 확인하는 것은 환자를 온전히 치료하기 위한 기초가 되며, 이에 따라 확산 가중 영상에서 뇌병변 영역을 자동으로 분할하여 검출하기 위한 다양한 방법이 개발되었다. Accurately and quickly segmenting the infarcted lesion in diffusion-weighted images and confirming the volume of the infarcted lesion to determine the amount of drug to be injected are the basis for complete treatment of the patient, and accordingly, diffusion-weighted images Various methods have been developed to automatically segment and detect brain lesion areas.
하지만 종래에 제안된 방법들은 사람이 직접 핸들링해야 하는 작업이 너무 많거나, 뇌병변의 다양한 아형, 움직임으로 인한 인공물 발생, 다초점 분포, 정상 조직과의 불분명한 경계 등으로 인한 문제가 다수 나타나고 있다. However, conventionally proposed methods have many problems such as too many tasks that require manual handling by humans, various subtypes of brain lesions, generation of artifacts due to movement, multifocal distribution, and unclear boundaries with normal tissue. .
그러므로 사람의 주관이 개입하지 않고 확산 가중 영상에서 뇌 병변을 객관적이고 빠르게 검출할 수 있는 방안이 필요하다. Therefore, a method is needed to objectively and quickly detect brain lesions in diffusion-weighted images without human intervention.
One object of the present invention is to provide a brain lesion detection system that can detect the region and volume of a brain lesion from a diffusion-weighted image of the brain using a deep learning module.
Other unstated objects of the present invention will be further considered within the scope that can be readily inferred from the following detailed description and its effects.
To achieve the above objects, the following solutions are proposed.
A brain lesion detection system according to an embodiment of the present invention detects a brain lesion region from a volumetric image of the brain using a deep learning module, wherein the volumetric image is divided into N slice images (where N is a natural number of 2 or more) that are used as input images for the deep learning module.
In one embodiment, the volumetric image of the brain may be a diffusion-weighted image.
In one embodiment, the deep learning module includes an encoder and a decoder that are symmetric to each other and connected by skip connections. The encoder determines context by downsampling the spatial dimensions, and the decoder increases the spatial dimensions by upsampling the encoder output. The encoder includes 2D convolution, ReLU (Rectified Linear Unit), batch normalization, and 2 x 2 max pooling layers, and the decoder includes 2D convolution, ReLU, batch normalization, and 2 x 2 up-convolution.
In one embodiment, the system further includes a volume estimation module for estimating the volume of the brain lesion region detected by the deep learning module. The volume estimation module determines the original volume of one pixel in a slice image using the width and height scale factors relative to the original image and the slice thickness, counts the number of brain lesion pixels across all slices, and estimates the volume of the brain lesion region by multiplying the lesion pixel count by the original volume of one pixel.
An acute ischemic stroke is relatively small compared to the size of the brain and is also sparsely located. Therefore, when a conventional deep learning module is used to detect brain lesions for identifying acute ischemic stroke, problems of class imbalance and data shortage arise.
The brain lesion detection system according to an embodiment of the present invention divides the diffusion-weighted volumetric image of the brain into slice images and uses them as input images, which increases the amount of data while also providing overall cross-sectional information of the brain, thereby resolving the class imbalance and data shortage problems.
In addition, the brain lesion detection system according to an embodiment of the present invention has excellent detection performance and can estimate the volume of a brain lesion using the pixels of the predicted mask.
Meanwhile, it is added that even effects not explicitly mentioned herein, namely the effects described in the following specification that are expected from the technical features of the present invention, and their provisional effects, are treated as if described in this specification.
Figure 1 shows diffusion-weighted images of the brain, illustrating (a) a case with a large brain lesion region and (b) a case with a small one.
Figure 2 is a schematic flow chart of a method for detecting a brain lesion region and estimating its volume using a brain lesion detection system according to an embodiment of the present invention.
Figure 3 is an example of training data augmentation for training the deep learning module of the brain lesion detection system according to an embodiment of the present invention.
Figure 4 is a reference diagram for explaining a method of estimating the volume of a brain lesion region detected using a brain lesion detection system according to an embodiment of the present invention.
Figure 5 shows example images of small AIS for (a) internal validation and (b) external validation.
Figure 6 shows visual results for (1) a large AIS, (2) a small AIS, and (3) a sparsely located small AIS, comparing (a) expert annotation, (b) the detection results of the example, and (c) the detection results of the comparative example. (TP in red, FP in green, FN in yellow)
The attached drawings are presented as reference for understanding the technical idea of the present invention, and the scope of the present invention is not limited by them.
Hereinafter, with reference to the drawings, the configuration of the present invention as guided by its various embodiments, and the effects resulting from that configuration, are described. In describing the present invention, detailed descriptions of related well-known functions are omitted when they are obvious to those skilled in the art and would unnecessarily obscure the gist of the present invention.
The term "module" used in this document may include a unit implemented in hardware, software, or firmware, and may be used interchangeably with terms such as logic, logic block, component, or circuit. A module may be an integrally constructed component, or a minimum unit of such a component, or a part thereof, that performs one or more functions.
In this document, a "module" or "node" performs tasks such as moving, storing, and converting data using a computing device such as a CPU or AP. For example, a "module" or "node" may be implemented as a device such as a server, PC, tablet PC, or smartphone.
The present invention relates to a brain lesion detection system that can detect a brain lesion region from a volumetric image of the brain using a deep learning module and simultaneously estimate its volume. An example of a volumetric image of the brain is a diffusion-weighted image, but the present invention is not limited thereto.
For the treatment of acute ischemic stroke, it is very important to estimate the volume of the brain lesion region from a volumetric image of the brain. Because the lesion region is relatively small compared to the size of the brain and sparsely located, a method that feeds the volumetric brain image directly into a deep learning module to detect the lesion region and estimate its volume (hereinafter, the "direct volume estimation method") suffers from severe class imbalance: when small patches are extracted, many patches contain no lesion region at all.
Accordingly, the brain lesion detection system of the present invention does not use the direct volume estimation method; instead, it converts the volumetric brain image into slice images, from which the deep learning module detects the brain lesion region and then estimates its volume (hereinafter, the "indirect volume estimation method"). This resolves the class imbalance and data shortage, resulting in high detection performance and improved accuracy of volume estimation.
Below, the direct volume estimation method (comparative example) and the indirect volume estimation method (example) are compared.
Diffusion-weighted images (DWI) of acute ischemic stroke (AIS) were acquired at Hallym University Chuncheon Sacred Heart Hospital (HUCSH) and Kangwon National University Hospital (KNUH): at HUCSH from January 2011 to December 2019, and at KNUH from July 2014 to October 2019. In total, 2,159 DWI images of AIS were obtained (for HUCSH, mean ± standard deviation age 69.8 ± 12.7 years; for KNUH, 555 men and 425 women, mean ± standard deviation age 72.4 ± 12.4 years). In addition, 121 DWI images of healthy participants were obtained as controls (for HUCSH, 50 men and 52 women, mean ± standard deviation age 59.2 ± 16.4 years; for KNUH, 10 men and 11 women, mean ± standard deviation age 44.0 ± 17.1 years). In healthy participants, DWI with a b-value of 1,000 s/mm2 shows no lesions, whereas some old lesions may appear at a b-value of 0 s/mm2. This study was approved by the institutional review boards of HUCSH and KNUH, which waived the requirement for informed consent (approval numbers HUCSHH 2021-06-013 and KNUH-A-2021-021-001).
MRI for acquiring the diffusion-weighted images (DWI) was performed on various machines, including 1.5T scanners (Siemens Healthineers, Erlangen, Germany) and 3T scanners (Ingenia CX and Achieva, Philips Healthcare, Best, Netherlands). The parameters of the DWI sequences were as follows.
DWI sequence parameters: repetition time, 3,000-8,000 ms; echo time, 56-103 ms; flip angle, 90°; matrix, 256 x 256 to 512 x 512; field of view, 220 x 220 to 256 x 256 mm; number of excitations, 1-5; number of slices, 20-50; slice thickness, 3 mm; b-value, 1,000 s/mm2
All lesions appearing on DWI were manually segmented by one neurologist (C.K.) at HUCSH and two neurologists (S.H.K. and J.-W.J.) at KNUH using ITK-Snap, an open-source software application. All images with masks were saved in DICOM (Digital Imaging and Communications in Medicine) format (see Figure 1). Figure 2 is a schematic flow chart of a method for detecting a brain lesion region and estimating its volume using a brain lesion detection system according to an embodiment of the present invention.
The brain lesion detection system of the present invention first converts a volumetric image of the brain (for example, DWI) into slice images. The volumetric brain image is cut into cross-sections at a set interval (or thickness) to generate N slice images (where N is a natural number of 2 or more).
The brain lesion detection system of the present invention (indirect volume estimation method) uses the generated slice images as input images to the deep learning module.
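The slicing step above can be sketched as follows. This is an illustrative sketch, not the actual implementation of the invention; the `(N, H, W)` axis ordering and array shapes are assumptions.

```python
import numpy as np

def volume_to_slices(volume):
    """Split a volumetric brain image of shape (N, H, W) into N 2D slice images.

    Each cross-section along the first axis becomes one input image
    for the 2D deep learning module.
    """
    return [volume[n] for n in range(volume.shape[0])]

# Example: a dummy 20-slice volume of 256 x 256 cross-sections.
volume = np.zeros((20, 256, 256), dtype=np.float32)
slices = volume_to_slices(volume)
print(len(slices), slices[0].shape)  # 20 (256, 256)
```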
Meanwhile, training a deep learning module requires a large amount of data, but because brain lesions are not a common disease, the amount of available data is limited. To address this data shortage, the brain lesion detection system of the present invention performs data augmentation as shown in Figure 3. Specifically, random horizontal and vertical flips and random 90-degree rotations were applied to the slice images. Furthermore, the image size was reduced (for example, to 256 x 256) to shorten training time and lower computational cost.
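A minimal NumPy sketch of the augmentation described above (random horizontal/vertical flips and random 90-degree rotation). The resize is shown as nearest-neighbor index sampling purely for illustration; the interpolation actually used by the invention is not specified.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Random horizontal/vertical flip and random 90-degree rotation."""
    if rng.random() < 0.5:
        img = img[:, ::-1]  # horizontal flip
    if rng.random() < 0.5:
        img = img[::-1, :]  # vertical flip
    return np.rot90(img, k=int(rng.integers(0, 4)))  # 0/90/180/270 degrees

def resize_nearest(img, size=256):
    """Nearest-neighbor resize to size x size (illustrative only)."""
    h, w = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[np.ix_(rows, cols)]

slice_img = np.ones((512, 512), dtype=np.float32)
out = resize_nearest(augment(slice_img))
print(out.shape)  # (256, 256)
```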
After slicing the volumetric brain image and performing the augmentation and image size reduction steps, the brain lesion region is detected by segmenting it with the deep learning module.
The deep learning module used in the present invention is based on 3D U-Net, a CNN (Convolutional Neural Networks)-based architecture, but is characterized by using 2D convolutions instead of 3D convolutions.
Specifically, the deep learning module used in the present invention includes an encoder and a decoder that are symmetric to each other and connected by skip connections. The encoder determines context by downsampling the spatial dimensions, and the decoder increases the spatial dimensions by upsampling the encoder output. The encoder includes 2D convolution, ReLU (Rectified Linear Unit), batch normalization, and 2 x 2 max pooling layers, and the decoder includes 2D convolution, ReLU, batch normalization, and 2 x 2 up-convolution. Well-known implementations of convolution, ReLU, batch normalization, and so on may be used, and detailed descriptions are omitted here.
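The encoder/decoder structure described above can be sketched in PyTorch (the framework the specification says was used). The channel widths, single encoder level, and kernel sizes other than the stated 2 x 2 pooling and up-convolution are assumptions for illustration, not the invention's exact configuration.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """2D convolution + batch normalization + ReLU, as in the encoder/decoder."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """One-level U-Net sketch: encoder, 2x2 max pooling, 2x2 up-convolution,
    a skip connection, and a 1-channel output mask."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.enc = conv_block(in_ch, 16)
        self.pool = nn.MaxPool2d(2)
        self.mid = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec = conv_block(32, 16)  # 32 = 16 (skip) + 16 (upsampled)
        self.out = nn.Conv2d(16, 1, kernel_size=1)

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.pool(e))
        u = self.up(m)
        d = self.dec(torch.cat([e, u], dim=1))  # skip connection
        return torch.sigmoid(self.out(d))

model = TinyUNet()
model.eval()
with torch.no_grad():
    y = model(torch.zeros(1, 3, 256, 256))  # one 3-channel 256 x 256 input
print(y.shape)  # torch.Size([1, 1, 256, 256])
```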
By converting the volumetric brain image into slice images for input, the deep learning module of the present invention increases the amount of data and can see overall brain cross-sectional information, thereby solving the problems caused by class imbalance and data shortage.
In addition, the deep learning module of the present invention uses two adjacent slice images together as part of a 3-channel input. When a volumetric brain image is converted into slice images and each slice is fed through a single input channel, the overall contextual information of the volumetric image is lost. By feeding adjacent slice images together as three channels, the deep learning module of the present invention provides additional information such as context and volume information.
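One plausible reading of the 3-channel input described above, stacking each slice with its previous and next neighbors, can be sketched as follows. This neighbor-stacking scheme and the edge handling are assumptions for illustration, since the specification does not spell out the exact channel assignment.

```python
import numpy as np

def three_channel_inputs(volume):
    """For each slice n, stack slices (n-1, n, n+1) as 3 channels.

    Edge slices reuse themselves as the missing neighbor, so every
    slice still yields one 3-channel input.
    """
    n_slices = volume.shape[0]
    idx_prev = np.maximum(np.arange(n_slices) - 1, 0)
    idx_next = np.minimum(np.arange(n_slices) + 1, n_slices - 1)
    return np.stack([volume[idx_prev], volume, volume[idx_next]], axis=1)

volume = np.arange(5 * 4 * 4, dtype=np.float32).reshape(5, 4, 4)
inputs = three_channel_inputs(volume)
print(inputs.shape)  # (5, 3, 4, 4): N inputs, 3 channels each
```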
The volume of the brain lesion region is estimated using the result of the deep learning module detecting, in the input slice images, the regions segmented as brain lesion regions.
Figure 4 is a reference diagram for explaining a method of estimating the volume of a brain lesion region detected using a brain lesion detection system according to an embodiment of the present invention.
Estimating the volume of the detected brain lesion region is performed by the volume estimation module.
As mentioned above, the slice images input to the deep learning module may be resized for performance. Therefore, to estimate the volume of the lesion region, the volume estimation module determines the scale factors sf_w and sf_h for the width and height of the original slice image, and obtains the slice interval (or thickness) information si from the header of the image file (the DICOM header file). The volume estimation module then calculates the volume (V) of one pixel from the two scale factors and the slice interval.
V = sf_w x sf_h x si    (Equation 1)
Next, the volume estimation module accumulates the number of pixels (p) corresponding to the brain lesion region in each slice image output from the deep learning module to obtain the total number of pixels (P) corresponding to the brain lesion region. H, W, and N are the height, the width, and the number of slice images, respectively.
P = Σ(n=1..N) Σ(h=1..H) Σ(w=1..W) p(n,h,w)    (Equation 2)
Finally, the volume estimation module estimates the total volume of the brain lesion region by multiplying the total number of lesion pixels (P) by the volume of one pixel (V).
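Equations 1 and 2 and the final multiplication above amount to the following sketch. The scale-factor and spacing values are made-up example numbers in millimeters; reading them from a real DICOM header is not shown.

```python
import numpy as np

def estimate_lesion_volume(masks, sf_w, sf_h, si):
    """Estimate lesion volume from predicted binary masks.

    masks : (N, H, W) array of 0/1 lesion predictions per slice.
    sf_w, sf_h : width/height scale factors back to the original image (mm per pixel).
    si : slice interval (thickness) in mm.
    """
    v = sf_w * sf_h * si   # Equation 1: volume of one pixel
    p = int(masks.sum())   # Equation 2: total lesion pixels over all slices
    return p * v           # total lesion volume

masks = np.zeros((20, 256, 256), dtype=np.uint8)
masks[10, 100:110, 100:110] = 1  # a 100-pixel lesion on one slice
vol_mm3 = estimate_lesion_volume(masks, sf_w=0.86, sf_h=0.86, si=3.0)
print(round(vol_mm3, 2))  # 100 * 0.86 * 0.86 * 3.0 = 221.88 mm^3
```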
Meanwhile, the deep learning module of the present invention is trained with a Dice loss function, on diffusion-weighted images (DWI) of acute ischemic stroke.
Dice loss = 1 - 2 Σ(y_true · y_pred) / (Σ y_true + Σ y_pred)    (Equation 3)
Here, y_true is the label for a pixel value, and y_pred is the predicted probability for the pixel value.
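A minimal NumPy sketch of the Dice loss in Equation 3. The small epsilon term added to the denominator is a common implementation detail assumed here to avoid division by zero; it is not stated in the specification.

```python
import numpy as np

def dice_loss(y_true, y_pred, eps=1e-7):
    """Dice loss: 1 - 2*sum(y_true*y_pred) / (sum(y_true) + sum(y_pred))."""
    intersection = np.sum(y_true * y_pred)
    return float(1.0 - 2.0 * intersection / (np.sum(y_true) + np.sum(y_pred) + eps))

y_true = np.array([1.0, 1.0, 0.0, 0.0])
y_pred = np.array([1.0, 1.0, 0.0, 0.0])  # perfect prediction -> loss near 0
print(round(dice_loss(y_true, y_pred), 6))
```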
Meanwhile, the direct volume estimation method of the comparative example used 3D U-Net, a CNN (Convolutional Neural Networks)-based architecture, as its deep learning module. 3D U-Net has a U-shaped structure and uses a symmetric encoder and decoder with skip connections. The encoder determines context by downsampling the spatial dimensions; it consists of three kinds of layers: two 3D convolutions, ReLU (Rectified Linear Unit) with batch normalization, and a 2 x 2 max pooling layer. The decoder increases the spatial dimensions by upsampling the encoder output; it consists of three kinds of layers: two 3D convolutions, ReLU with batch normalization, and a 2 x 2 up-convolution. Each decoder block is used together with the skip connection from the corresponding encoder block to compensate for lost context. A skip connection adds the layer input to the output to compensate for information lost during convolution, which improves data recovery performance and helps perform semantic segmentation well. Training a 3D segmentation model requires long computation time and a large amount of memory, so patch-based 3D segmentation was performed to alleviate this problem.
To compare the performance of the example (indirect volume estimation method) and the comparative example (direct volume estimation method), precision (see Equation 4), recall (see Equation 5), F1-score (see Equation 6), and the Jaccard index (see Equation 7) were adopted.
Precision = TP / (TP + FP)    (Equation 4)
Recall = TP / (TP + FN)    (Equation 5)
F1-score = (2 x Precision x Recall) / (Precision + Recall)    (Equation 6)
Jaccard = TP / (TP + FP + FN)    (Equation 7)
Since what matters in a brain lesion detection system is accurately detecting the brain lesion region, the metrics here are mainly defined in terms of true positives (TP).
Here, TP (true positive) is the number of pixels correctly predicted as the brain lesion region (AIS), FP (false positive) is the number of pixels incorrectly predicted as the brain lesion region (AIS), and FN (false negative) is the number of pixels incorrectly predicted as not being the brain lesion region (AIS).
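Equations 4 to 7 over the TP/FP/FN pixel counts can be sketched as follows; the prediction and label masks below are toy values for illustration.

```python
import numpy as np

def pixel_metrics(pred, label):
    """Precision, recall, F1-score, and Jaccard index over lesion pixels."""
    tp = np.sum((pred == 1) & (label == 1))
    fp = np.sum((pred == 1) & (label == 0))
    fn = np.sum((pred == 0) & (label == 1))
    precision = tp / (tp + fp)                          # Equation 4
    recall = tp / (tp + fn)                             # Equation 5
    f1 = 2 * precision * recall / (precision + recall)  # Equation 6
    jaccard = tp / (tp + fp + fn)                       # Equation 7
    return {"precision": precision, "recall": recall, "f1": f1, "jaccard": jaccard}

pred = np.array([1, 1, 1, 0, 0, 0])
label = np.array([1, 1, 0, 1, 0, 0])
m = pixel_metrics(pred, label)  # tp=2, fp=1, fn=1
print({k: round(float(v), 4) for k, v in m.items()})
```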
In addition, volume similarity (VS) (see Equation 8) and mean absolute error (MAE) (see Equation 9) were adopted to evaluate the volume estimation performance for the brain lesion region.
VS = 1 - |measurement - groundtruth| / (measurement + groundtruth)    (Equation 8)
MAE = (1/n) Σ |measurement - groundtruth|    (Equation 9)
VS estimates the volume of the predicted brain lesion region (AIS) to indicate its similarity to the labeled brain lesion region (AIS). MAE is the mean of the absolute error, that is, of the difference between the actual and predicted values; it estimates the error between the predicted brain lesion region (AIS) and the labeled brain lesion region (AIS). In Equations 8 and 9, measurement is the value for the predicted brain lesion region (AIS) (TP), and groundtruth is the value for the labeled brain lesion region (AIS).
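Equations 8 and 9 can be sketched as follows on predicted and ground-truth volumes. Equation 8 is taken here as the standard volume-similarity definition, and the volume values are toy numbers.

```python
import numpy as np

def volume_similarity(measurement, groundtruth):
    """Equation 8: VS = 1 - |measurement - groundtruth| / (measurement + groundtruth)."""
    return 1.0 - abs(measurement - groundtruth) / (measurement + groundtruth)

def mean_absolute_error(measurements, groundtruths):
    """Equation 9: mean of |measurement - groundtruth| over all cases."""
    return float(np.mean(np.abs(measurements - groundtruths)))

vs = volume_similarity(9.0, 10.0)  # predicted 9 cc vs labeled 10 cc
mae = mean_absolute_error(np.array([9.0, 12.0]), np.array([10.0, 10.0]))
print(round(vs, 4), round(mae, 4))  # 0.9474 1.5
```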
Both the example and the comparative example were implemented with the PyTorch 1.10 framework in Python 3.8.10 on Ubuntu using four NVIDIA RTX 3090 GPUs, and were included as the core algorithm of an AI-based medical solution (ZioMed; ZIOVISION, Chuncheon, Korea). Internal validation was performed using the data obtained from HUCSH, with the dataset randomly divided into training, validation, and test sets at a ratio of 8:1:1. The data obtained from KNUH were used for external validation. Because loading the images and slice interval information from DICOM files takes a long time, the images and interval information extracted from the DICOM files were stored in HDF5.
For the example and the comparative example, an experiment was performed to evaluate the segmentation performance on diffusion-weighted images (DWI) of acute ischemic stroke (AIS). Tables 2 and 3 show the performance results of the example and the comparative example for internal and external validation, respectively.
On the internal validation data of acute ischemic stroke patients, the F1-score of the example (indirect) using a model pre-trained on brain glioma was 76.02%. This was significantly higher than the F1-scores of the comparative example (direct) using models pre-trained on automatic implants and on tumors, which were 55.71% and 52.48%, respectively. Meanwhile, the F1-score of the example (indirect) trained from scratch was 73.09%, which is not only higher than the 54.76% of the comparative example (direct) trained from scratch, but also exceeds the F1-scores of the comparative example (direct) models pre-trained on automatic implants and on tumors.
On the external validation data of acute ischemic stroke patients, the F1-score of the example (indirect) using the model pre-trained on brain glioma was 77.23%, showing superior performance compared to the comparative example, as with the internal validation data. The same holds for the example (indirect) trained from scratch.
Likewise, the Jaccard indices of the example (indirect) using the pre-trained model were 62.12% and 63.82% for internal and external validation, respectively, higher than those of the other models.
[Table 2: Segmentation performance on internal validation (original table image not reproduced)]
[Table 3: Segmentation performance on external validation (original table image not reproduced)]
Using VS and MAE, it was confirmed whether the brain lesion region (AIS) volume estimates of the example and the comparative example were reliable. Table 4 shows the volume estimation results for the detected brain lesion region (AIS) for internal and external validation. The VS and MAE of the comparative example (direct) using the model pre-trained on brain glioma were 67.68% and 1.159 cc in internal validation, and 62.59% and 5.706 cc in external validation, respectively. In comparison, the VS and MAE of the example (indirect) were 93.25% and 0.797 cc in internal validation, and 89.17% and 2.468 cc in external validation, respectively, confirming that the example is markedly better than the comparative example.
[Table 4: Volume estimation results (VS and MAE) for internal and external validation (original table image not reproduced)]
In addition, to check for FP errors, the best-mode example was validated on a normal group with no AIS at all. For internal and external validation, the MAE values were 0.028 cc and 0.009 cc, respectively, showing very small errors.
These results show that when the deep learning module of the example is trained on diffusion-weighted images (DWI) of acute ischemic stroke (AIS), the best performance and accuracy are obtained by pre-training on brain glioma data.
Figure 5 shows example images of small AIS for (a) internal validation and (b) external validation, and Figure 6 shows visual results for (1) a large AIS, (2) a small AIS, and (3) a sparsely located small AIS, comparing (a) expert annotation, (b) the detection results of the example, and (c) the detection results of the comparative example (TP in red, FP in green, FN in yellow). As discussed above, Figures 5 and 6 also show that the example has higher accuracy than the comparative example.
Because the Comparative Example derives the lesion volume from the volumetric (3D) brain image without additional processing, it was expected to yield a slower but more accurate volume estimate than the Example, which uses 2D slice images cut from the volumetric image; the results showed otherwise. That is, the Example not only detected the brain lesion region better than the Comparative Example but also estimated the volume of the lesion region more accurately.
The brain lesion detection system described above may be implemented as a program (or application) containing an executable algorithm that runs on a computer. The program may be stored on and provided via a non-transitory computer-readable medium. Here, a non-transitory readable medium is a medium that stores data semi-permanently and can be read by a device, as opposed to media that hold data only momentarily, such as registers, caches, and memory. Specifically, the various applications or programs described above may be stored on and provided via non-transitory readable media such as a CD, DVD, hard disk, Blu-ray disc, USB drive, memory card, or ROM.
The scope of protection of the present invention is not limited to the description and expressions of the embodiments explicitly set out above. It is further noted that the scope of protection of the present invention is not limited by modifications or substitutions that would be obvious in the technical field to which the invention pertains.

Claims (4)

  1. A brain lesion detection system that detects a brain lesion region from a volumetric image of the brain using a deep learning module, wherein the volumetric image is divided into N slice images (where N is a natural number of 2 or more) that are used as input images to the deep learning module.
  2. The brain lesion detection system according to claim 1,
    wherein the volumetric image of the brain is a diffusion-weighted image.
  3. The brain lesion detection system according to claim 1,
    wherein the deep learning module comprises an encoder and a decoder that are symmetric to each other and connected by skip connections, the encoder downsamples the spatial dimensions to determine context, and the decoder upsamples the encoder output image to increase the spatial dimensions, the encoder comprising 2D convolution, ReLU (Rectified Linear Unit), batch normalization, and 2 x 2 max-pooling layers, and the decoder comprising 2D convolution, ReLU, batch normalization, and 2 x 2 up-convolution.
  4. The brain lesion detection system according to claim 1,
    further comprising a volume estimation module for estimating the volume of the brain lesion region detected by the deep learning module,
    wherein the volume estimation module determines the original volume of one pixel of a slice image using the slice thickness and the scale factors relating the width and height of the slice image to the original, counts the number of brain-lesion pixels across all slices, and estimates the volume of the brain lesion region by multiplying that pixel count by the original volume of one pixel.
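The volume estimation of claim 4 (voxel volume times lesion pixel count) can be sketched as follows. This is a minimal illustration, not the application's implementation: the mask format, pixel spacing values, and slice thickness used below are assumptions for the example.

```python
def estimate_lesion_volume(slice_masks, pixel_spacing_mm, slice_thickness_mm):
    """Estimate lesion volume in cc by counting lesion pixels across all
    slices and multiplying by the physical volume of one voxel.

    slice_masks: list of 2D binary masks (lists of lists of 0/1), one per slice
    pixel_spacing_mm: (width_mm, height_mm) of one pixel in the original image
    slice_thickness_mm: thickness of each slice in mm
    """
    # physical volume of a single voxel in mm^3
    voxel_mm3 = pixel_spacing_mm[0] * pixel_spacing_mm[1] * slice_thickness_mm
    # total lesion pixel count over every slice
    n_pixels = sum(sum(row) for mask in slice_masks for row in mask)
    return n_pixels * voxel_mm3 / 1000.0  # mm^3 -> cc

# hypothetical example: two 4x4 slices, 1 mm x 1 mm pixels, 5 mm slice thickness
masks = [
    [[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]],  # 4 lesion pixels
    [[0, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]],  # 1 lesion pixel
]
print(estimate_lesion_volume(masks, (1.0, 1.0), 5.0))  # prints 0.025
```

Here 5 lesion pixels at 5 mm³ per voxel give 25 mm³, i.e. 0.025 cc; in practice the spacing and thickness would come from the image header of the volumetric scan.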
PCT/KR2022/014959 2022-06-14 2022-10-05 Deep learning-based brain lesion detection system using slice image segmentation WO2023243775A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2022-0072214 2022-06-14
KR1020220072214A KR20230171720A (en) 2022-06-14 2022-06-14 Deep learning-based brain lesion detection system using slice image segmentation

Publications (1)

Publication Number Publication Date
WO2023243775A1 true WO2023243775A1 (en) 2023-12-21

Family

ID=89191539

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2022/014959 WO2023243775A1 (en) 2022-06-14 2022-10-05 Deep learning-based brain lesion detection system using slice image segmentation

Country Status (2)

Country Link
KR (1) KR20230171720A (en)
WO (1) WO2023243775A1 (en)


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210065768A (en) * 2019-11-27 2021-06-04 연세대학교 산학협력단 System for Processing Medical Image and Clinical Factor for Individualized Diagnosis of Stroke

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
MARZBAN E.N., ELDEIB A.M., YASSINE I.A., KADAH Y.M.: "Alzheimer's disease diagnosis from diffusion tensor images using convolutional neural networks", PLOS ONE, vol. 15, no. 3, 24 March 2020, e0230409, ISSN: 1932-6203, DOI: 10.1371/journal.pone.0230409 *
SIDDIQUE N., SIDIKE P., ELKIN C., DEVABHAKTUNI V.: "U-Net and its variants for medical image segmentation: theory and applications", arXiv, Cornell University Library, 2 November 2020, XP081929727, DOI: 10.1109/ACCESS.2021.3086020 *
TIAN Q., BILGIC B., FAN Q., LIAO C., NGAMSOMBAT C., HU Y., WITZEL T., SETSOMPOP K., POLIMENI J.: "DeepDTI: High-fidelity six-direction diffusion tensor imaging using deep learning", NeuroImage, vol. 219, 3 June 2020, ISSN: 1053-8119, DOI: 10.1016/j.neuroimage.2020.117017 *
ZHOU T., RUAN S., CANU S.: "A review: Deep learning for medical image segmentation using multi-modality fusion", Array, vol. 3-4, 1 September 2019, 100004, ISSN: 2590-0056, DOI: 10.1016/j.array.2019.100004 *

Also Published As

Publication number Publication date
KR20230171720A (en) 2023-12-21

Similar Documents

Publication Publication Date Title
Ardakani et al. Application of deep learning technique to manage COVID-19 in routine clinical practice using CT images: Results of 10 convolutional neural networks
Arthurs et al. Diagnostic accuracy and limitations of post-mortem MRI for neurological abnormalities in fetuses and children
Schwarz et al. A comparison of partial volume correction techniques for measuring change in serial amyloid PET SUVR
Wu et al. Gray matter deterioration pattern during Alzheimer's disease progression: a regions-of-interest based surface morphometry study
WO2021151275A1 (en) Image segmentation method and apparatus, device, and storage medium
KR102215119B1 (en) Method, program and computing device for predicting alzheimer's disease by quantifying brain features
Teng et al. Artificial intelligence can effectively predict early hematoma expansion of intracerebral hemorrhage analyzing noncontrast computed tomography image
Mak et al. Efficacy of voxel-based morphometry with DARTEL and standard registration as imaging biomarkers in Alzheimer's disease patients and cognitively normal older adults at 3.0 Tesla MR imaging
WO2021248749A1 (en) Diagnosis aid model for acute ischemic stroke, and image processing method
WO2019143179A1 (en) Method for automatically detecting same regions of interest between images of same object taken with temporal interval, and apparatus using same
CN112348785A (en) Epileptic focus positioning method and system
Harston et al. Optimizing image registration and infarct definition in stroke research
EP2194844A1 (en) A method of analysing stroke images
CN112288705B (en) Accurate quantification method for cerebral hypoperfusion area based on artery spin labeling
CN111445443A (en) Method and device for detecting early acute cerebral infarction
WO2015085319A1 (en) Gross feature recognition of anatomical images based on atlas grid
Sweeney et al. Estimation of multiple sclerosis lesion age on magnetic resonance imaging
US11257227B2 (en) Brain image normalization apparatus, brain image normalization method, and brain image normalization program
WO2023243775A1 (en) Deep learning-based brain lesion detection system using slice image segmentation
CN111402218A (en) Cerebral hemorrhage detection method and device
US20200081085A1 (en) Probabilistic atlases of post-treatment multi-parametric mri scans reveal distinct hemispheric distribution of glioblastoma progression versus pseudo-progression
JP7477906B2 (en) Apparatus and method for providing dementia diagnosis support information
Gómez et al. APIS: A paired CT-MRI dataset for ischemic stroke segmentation challenge
Wang et al. Automated ventricle parcellation and evan’s ratio computation in pre-and post-surgical ventriculomegaly
Hu et al. Regional quantification of developing human cortical shape with a three‐dimensional surface‐based magnetic resonance imaging analysis in utero

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22946980

Country of ref document: EP

Kind code of ref document: A1