WO2020138932A1 - Machine learning-based method and system for classifying thrombi using gre image - Google Patents

Machine learning-based method and system for classifying thrombi using gre image Download PDF

Info

Publication number
WO2020138932A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
clot
gre
neural network
patch
Prior art date
Application number
PCT/KR2019/018431
Other languages
French (fr)
Korean (ko)
Inventor
김원태
강신욱
이명재
김동민
장진성
박종혁
Original Assignee
주식회사 제이엘케이인스펙션
사회복지법인 삼성생명공익재단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 제이엘케이인스펙션, 사회복지법인 삼성생명공익재단 filed Critical 주식회사 제이엘케이인스펙션
Priority to JP2021537199A priority Critical patent/JP2022515465A/en
Publication of WO2020138932A1 publication Critical patent/WO2020138932A1/en

Links

Images

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/70 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients

Definitions

  • the present invention relates to a thrombus classification method and system using machine learning-based GRE (gradient echo) images, and in particular to a method and system that detect a thrombus region from a GRE image through an artificial neural network model and automatically classify and provide the type of thrombus.
  • GRE Gradient echo
  • CNN convolutional neural networks
  • GRE Gradient Echo
  • as prior art, Korean Patent Publication No. 10-2018-0021635 discloses a method and system for analyzing lesion feature expressions using depth-direction recursive learning in 3D medical images,
  • but it only discloses a method for extracting lesion feature expressions from 3D medical images using convolutional and recurrent neural networks.
  • the present invention was devised to solve the above-described problems, and its object is to provide a machine learning-based thrombus classification method and system that detect a thrombus region from a GRE (gradient echo) image through an artificial neural network model and automatically classify the type of thrombus.
  • GRE Gradient echo
  • the method according to one aspect of the present invention for solving the above technical problem is a method of classifying thrombi using a machine learning-based gradient echo (GRE) image, comprising: a step in which an image acquisition unit acquires a GRE image; a step in which a lesion detection unit detects a lesion region in the acquired GRE image using an artificial neural network model; a step in which a patch region setting unit sets the detected lesion region as a patch region of a predetermined size and resets the patch region through projection in three-dimensional directions; and a step in which a thrombus classification unit classifies the thrombus in the patch region using the artificial neural network model.
  • GRE machine learning-based gradient echo
  • a method for solving the above technical problem is a method of classifying thrombi using a machine learning-based gradient echo (GRE) image, comprising: (a) an image acquisition unit acquiring a GRE image; (b) a lesion detection unit detecting a lesion region in the acquired GRE image using an artificial neural network model; (c) a patch region setting unit setting the detected lesion region as a patch region of a predetermined size and resetting the patch region through projection in three-dimensional directions; (d) a thrombus classification unit classifying the thrombus in the lesion region including the patch region using the artificial neural network model; and (e) an image generation unit generating an image including projection information of either RED-CLOT or WHITE-CLOT based on the classification result, wherein in step (c) the patch region setting unit compares the shapes of the lesion feature expressions appearing in the patch regions of the predetermined size reset through the three-dimensional projections, and in step (d) the thrombus classification unit classifies the lesion region as either RED-CLOT or WHITE-CLOT according to the comparison result of the patch region setting unit.
  • the thrombus classification unit may classify RED-CLOT and WHITE-CLOT according to an artificial neural network model trained in advance through recognition using a YOLO neural network in the patch region.
  • the YOLO neural network is a kind of object detection algorithm, and after training algorithms that detect each of RED-CLOT and WHITE-CLOT, it is possible to generate a target vector of the training set according to the final output grid cell.
  • a system for classifying thrombi using a machine learning-based gradient echo (GRE) image comprises an image acquisition unit for acquiring a GRE image;
  • a lesion detection unit for detecting a lesion region in a GRE image obtained using an artificial neural network model;
  • a patch area setting unit that sets the detected lesion area as a patch area of a constant size and resets the patch area through projection in 3D direction;
  • a thrombus classification unit for classifying thrombus in a lesion region including a patch region using an artificial neural network model.
  • the patch region setting unit may compare the shape of the lesion feature expression appearing in the patch region of the predetermined size reset through projection in 3D direction.
  • the thrombus classification unit may classify any one of RED-CLOT and WHITE-CLOT in the lesion region according to the comparison result of the patch region setting unit.
  • the thrombus classification unit may classify RED-CLOT and WHITE-CLOT according to an artificial neural network model trained in advance through recognition using a YOLO neural network in the patch region.
  • the YOLO neural network is a kind of an object detection algorithm, and after training an algorithm for detecting each of the RED-CLOT and WHITE-CLOT, a target vector of the training set is generated according to the final output grid cell.
  • the system may further include an image generation unit for generating an image including projection information of either RED-CLOT or WHITE-CLOT based on the classification result of the thrombus classification unit.
  • lesion region detection and the type of thrombus are classified and provided automatically, which is convenient for the user, and the accuracy of diagnosis can be increased by projecting the lesion region reconstructed in three dimensions in various directions and analyzing the projections.
  • FIG. 1 is a block diagram of a thrombus classification system using a machine learning-based GRE image according to an embodiment of the present invention.
  • FIG. 2 is a block diagram of a data processing unit of a thrombus classification system according to an embodiment of the present invention.
  • FIG. 3 is a flowchart illustrating a thrombus classification method using a machine learning-based GRE image according to an embodiment of the present invention.
  • FIG. 4 is an exemplary view of a GRE image showing RED-CLOT and WHITE-CLOT according to an embodiment of the present invention.
  • FIG. 5 is an exemplary diagram of a structure for a convolutional neural network employable in the system and method of the present embodiment.
  • T2-weighted imaging refers to a technique of acquiring images from magnetic resonance imaging (MRI) with a specific pulse sequence, or an image acquired by this technique; the image mainly provides structural information about tissues inside the body.
  • FLAIR Fluid attenuated inversion recovery
  • DWI diffusion weighted imaging: Diffusion-weighted imaging mainly refers to diffusion-weighted images obtained from magnetic resonance imaging, and provides information on the degree and extent of diffusion of water molecules in cell tissues in a specific direction.
  • Perfusion weighted imaging refers to perfusion-weighted images (simply, perfusion images) obtained from magnetic resonance imaging, and informs the change in concentration over time of the injected contrast agent.
  • Penumbra: a partially shaded region in an image caused by an ischemic event or embolism, in which oxygen transport is locally reduced; the affected tissue may undergo hypoxic cell death or remain viable with appropriate treatment within a few hours.
  • ADC Apparent diffusion coefficient
  • AP Arterial phase
  • Capillary phase A period of specific perfusion obtained from magnetic resonance imaging, which indicates the time when the contrast medium injected over time passes through the capillary portion.
  • Venous phase A period of specific perfusion obtained from magnetic resonance imaging, which indicates when the contrast medium injected over time passes through the vein.
  • FIG. 1 is a block diagram of a thrombus classification system using a machine learning-based GRE image according to an embodiment of the present invention.
  • the thrombus classification system using a machine learning-based GRE image includes a control unit 2, a storage unit 4, an image acquisition unit 6, a display unit 8, and a data processing unit 10.
  • a thrombus classification system using a GRE image can automatically detect a lesion area in a GRE (Gradient Echo) image, automatically classify thrombus, and provide an image including projection information according to the classification result.
  • GRE Gradient Echo
  • the control unit 2 implements a method of detecting a lesion area and automatically classifying blood clots by executing a program or a software module stored in the storage unit 4, and can control each component of the system.
  • the storage unit 4 may store a program or a software module for implementing a method of detecting a lesion area and automatically classifying thrombi.
  • the storage unit 4 may store the GRE image transmitted from an external device.
  • the storage unit 4 may store programs or software modules for machine learning, deep learning, or artificial intelligence.
  • Deep learning or artificial intelligence may have an architecture to increase accuracy.
  • deep learning or artificial intelligence architectures may be implemented using a convolutional neural network (CNN) with a pooling structure, a deconvolution structure for upsampling, a skip connection structure for improving learning efficiency, and the like.
  • the image acquisition unit 6 may acquire a GRE image from an external device.
  • the image acquisition unit 6 may be connected to a magnetic resonance imaging (MRI) device, an MRA device, or a CT device to obtain a 3D image of a patient.
  • MRI magnetic resonance images
  • MRA MRA
  • CT computed tomography
  • the display unit 8 may output the data information stored in the storage unit 4, the image information acquired by the image acquisition unit 6, and the lesion region detection results, patch region setting results, thrombus classification results, and generated images processed by the data processing unit 10, in a visual form, an audible form, or a combination of the two.
  • the display unit 8 may include a display device.
  • the data processing unit 10 may detect a lesion region in a GRE image using machine learning, set a patch region and classify a thrombus within the patch region, and generate an image including projection information based on the classification result.
  • the data processing unit 10 includes a lesion area extraction unit 100, a patch area setting unit 200, a blood clot classification unit 300, and an image generation unit 400.
  • the lesion region extracting unit 100 may detect a lesion region in a GRE image using an artificial neural network model.
  • the lesion region extraction unit 100 may extract the lesion from the GRE image using any one of a 2D convolutional neural network (CNN), a 3D convolutional neural network, and a virtual 3D convolutional neural network.
  • the lesion region extraction unit 100 may extract the lesion region through a deep learning structure composed of CNN, pooling, deconvolution, and skip connection layers. That is, the lesion region is detected by an artificial neural network model trained on the GRE image signal using annotations and the class activation mapping (CAM) method.
  • the patch area setting unit 200 may set the detected lesion area as a patch area of a predetermined size. In addition, the patch area setting unit 200 may reset the patch area through projection in a 3D direction.
  • the thrombus classification unit 300 may classify a thrombus in a patch region using an artificial neural network model.
  • the thrombus can be classified as either RED-CLOT or WHITE-CLOT.
  • the thrombus classification unit 300 may perform classification through recognition using the YOLO neural network in the patch region. That is, the thrombus classification unit may classify RED-CLOT and WHITE-CLOT according to an artificial neural network model previously learned through recognition using the YOLO neural network in the patch region.
  • the YOLO neural network is a kind of object detection algorithm, and after training algorithms that detect each of RED-CLOT and WHITE-CLOT, it is possible to generate a target vector of the training set according to the final output grid cell.
  • the size of the target vector can be given by the product of the grid height, the grid width, the number of anchor boxes, and the length of the per-box vector.
  • the resulting vector may include whether an object is present, the coordinates (x, y) of the box center, the height and width of the bounding box, and the class probabilities.
  • if training has gone well, the probability that RED-CLOT or WHITE-CLOT is present will be close to 1 when a classification target exists, and the center coordinates, bounding box values, and class probabilities for the corresponding grid cell can be output.
  • non-maximum suppression may be applied to all grid cells.
  • a thrombus (clot) is a solid mass of coagulated blood, or the process by which such a mass forms. It can block blood vessels and reduce blood flow, and can be classified into WHITE-CLOT, in which the platelet component is predominant, and RED-CLOT, in which the red blood cell component is predominant. WHITE-CLOT can be treated with stenting, a non-surgical treatment, whereas RED-CLOT is not amenable to non-surgical treatment, so distinguishing WHITE-CLOT from RED-CLOT is important.
  • the image generator 400 may generate a 3D image by visualizing projection information of the patch region extracted from the lesion region extraction unit and the thrombus classification unit. According to an embodiment, an image including projection information of RED-CLOT may be generated in a W shape, but is not limited thereto.
  • FIG. 3 is a flowchart illustrating a thrombus classification method using a machine learning-based GRE image according to an embodiment of the present invention.
  • the image acquisition unit acquires a gradient echo (GRE) image (S310 ).
  • the GRE image is an image measured by signaling the magnetization component of a 3D magnetic resonance image (MRI).
  • the lesion area detection unit detects the lesion area in the GRE image using the artificial neural network model (S320).
  • the artificial neural network model may be at least one of a 2D convolutional neural network (CNN), a 3D convolutional neural network, and a virtual 3D convolutional neural network.
  • the patch region setting unit sets the detected lesion region as a patch region of a predetermined size (S330). At this time, the patch region can be set to a size already specified by the user. Subsequently, the patch region setting unit resets the patch region through projections in various 3D directions (S340).
  • the thrombus classification unit uses the artificial neural network model to classify the patch region as either RED-CLOT or WHITE-CLOT (S350). RED-CLOT and WHITE-CLOT can be classified according to a pre-trained artificial neural network model.
  • the thrombus classification unit may perform classification through recognition using the YOLO neural network in the patch region.
  • based on the classification result, the image generation unit generates an image including projection information of either RED-CLOT or WHITE-CLOT (S360).
  • FIG. 4 is an exemplary view of a GRE image showing RED-CLOT and WHITE-CLOT according to an embodiment of the present invention.
  • FIG. 5 is an exemplary diagram of a structure for a convolutional neural network employable in the system and method of the present embodiment.
  • the artificial neural network model can be trained in advance by a learning module consisting of a CNN, a pooling structure for aggregating lesion information, a deconvolution structure for upsampling, and a skip connection structure for making the learning proceed smoothly.
  • the deep learning architecture may have a form including a convolution network, a deconvolution network, and shortcuts. As shown in FIG. 5, to extract local features of the medical image (X), the deep learning architecture stacks 3x3 color convolution layers and activation layers (ReLU), applies a 2x2 filter with stride 1 to connect to the next lower depth level, and repeats this convolution block four times; a 2x2 deconvolution layer and an activation layer (ReLU) are then applied to connect to the next higher depth level, followed by stacked 3x3 convolution and activation layers, and this deconvolution block is repeated four times, with the feature map of each level of the convolution network copied and concatenated to the corresponding level of the deconvolution network before the convolution operations of that block are performed.
  • ReLU activation layer
  • the convolution block in the convolution network and the deconvolution network may be implemented by a combination of conv-ReLU-conv layers. Further, the output of the deep learning architecture may be obtained through a classifier connected to a convolutional network or a deconvolutional network, but is not limited thereto.
  • the classifier may be used to extract local features from an image using a fully connected network (FCN) technique.
  • FCN fully connected network
  • the deep learning architecture may be implemented to additionally use an inception module or a multi-filter pathway in a convolution block, depending on the implementation.
  • Different filters in the inception module or multi-filter path may include 1x1 filters.
  • when the input image has a width of 32, a height of 32, and RGB channels, the size of the input image X corresponding to the medical image for the target vector may be [32x32x3].
  • when applied to the YOLO algorithm, these sizes may correspond, in the order listed, to the height, the width, and the product of the number of anchor boxes and the per-box values including the classes.
  • the last 3 in [32x32x3] may be, for example, the number of anchor boxes (e.g., 1) multiplied by the sum of a predetermined value (e.g., 0) and the number of classes (e.g., 3), that is, 3.
  • the convolutional (CONV) layer is connected to local regions of the input image, and can be designed to compute the dot product between those connected regions and its own weights.
  • the ReLU (rectified linear unit) layer is an activation function applied to each element, such as max(0,x).
  • the ReLU layer does not change the size of the volume.
  • the POOLING layer may output a reduced volume by performing downsampling or subsampling on a dimension represented by (horizontal, vertical).
  • the fully-connected (FC) layer may calculate class scores and output a volume having a size of [1x1x10], for example.
  • 10 numbers correspond to class scores for 10 categories.
  • the fully-connected layer is connected to all elements of the previous volume. Some layers have parameters, while others do not.
  • the activations of the CONV/FC layers are functions not only of the input volume but also of weights and biases. Meanwhile, the ReLU/POOLING layers are fixed functions, and the parameters of the CONV/FC layers can be learned by gradient descent so that the class scores for each image are consistent with the label of the corresponding image.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Pathology (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Signal Processing (AREA)
  • Veterinary Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Radiology & Medical Imaging (AREA)
  • Psychiatry (AREA)
  • Physiology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Fuzzy Systems (AREA)
  • Image Analysis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The present invention relates to a machine learning-based method and system for classifying thrombi using a gradient echo (GRE) image, the method comprising the steps in which: an image acquisition unit acquires a GRE image; a lesion detection unit detects a lesion region from the acquired GRE image by using an artificial neural network model; a patch region configuration unit configures the detected lesion region into a patch region having a predetermined size, and reconfigures the patch region through a three-dimensional directional projection; and a thrombi classification unit classifies thrombi in the patch region by using the artificial neural network model.

Description

Thrombus classification method and system using a machine learning-based GRE image
The present invention relates to a thrombus classification method and system using machine learning-based GRE (gradient echo) images, and in particular to a method and system that detect a thrombus region from a GRE image through an artificial neural network model and automatically classify and provide the type of thrombus.
Research on analyzing and diagnosing medical images with computers has been actively conducted, and in particular, innovative advances in deep learning-based artificial intelligence have driven progress in diagnosis from medical images.
Deep learning-based medical image analysis begins with classification of images; object detection, segmentation of object boundaries, and registration of different images are the other major issues in medical image analysis. Because images are taken as input, convolutional neural networks (CNNs), which are specialized in extracting features from images, are the most widely used models.
Meanwhile, a GRE (gradient echo) image is obtained by converting the magnetization component of magnetic resonance imaging (MRI) into a signal and measuring it, and is widely used as an MRI sequence in which thrombi can be seen with high sensitivity. However, with two-dimensional GRE images a physician must inspect the images directly and judge the type of thrombus, which is inconvenient.
As prior art, Korean Patent Publication No. 10-2018-0021635 (a method and system for analyzing lesion feature expressions using depth-direction recursive learning in 3D medical images) exists, but it only discloses a method for extracting lesion feature expressions from 3D medical images using convolutional and recurrent neural networks.
The present invention was devised to solve the above-described problems, and its object is to provide a thrombus classification method and system using machine learning-based GRE images that detects a thrombus region from a GRE (gradient echo) image through an artificial neural network model and automatically classifies the type of thrombus.
A method according to one aspect of the present invention for solving the above technical problem is a method of classifying thrombi using a machine learning-based gradient echo (GRE) image, comprising: a step in which an image acquisition unit acquires a GRE image; a step in which a lesion detection unit detects a lesion region in the acquired GRE image using an artificial neural network model; a step in which a patch region setting unit sets the detected lesion region as a patch region of a predetermined size and resets the patch region through projection in three-dimensional directions; and a step in which a thrombus classification unit classifies the thrombus in the patch region using the artificial neural network model.
A method according to another aspect of the present invention for solving the above technical problem is a method of classifying thrombi using a machine learning-based gradient echo (GRE) image, comprising the steps of: (a) an image acquisition unit acquiring a GRE image; (b) a lesion detection unit detecting a lesion region in the acquired GRE image using an artificial neural network model; (c) a patch region setting unit setting the detected lesion region as a patch region of a predetermined size and resetting the patch region through projection in three-dimensional directions; (d) a thrombus classification unit classifying the thrombus in the lesion region including the patch region using the artificial neural network model; and (e) an image generation unit generating an image including projection information of either RED-CLOT or WHITE-CLOT based on the classification result, wherein in step (c) the patch region setting unit compares the shapes of the lesion feature expressions appearing in the patch regions of the predetermined size reset through the three-dimensional projections, and in step (d) the thrombus classification unit classifies the lesion region as either RED-CLOT or WHITE-CLOT according to the comparison result of the patch region setting unit.
In one embodiment, the thrombus classification unit may classify RED-CLOT and WHITE-CLOT according to an artificial neural network model trained in advance through recognition using a YOLO neural network in the patch region. The YOLO neural network is a kind of object detection algorithm; after algorithms detecting each of RED-CLOT and WHITE-CLOT are trained, a target vector of the training set can be generated according to the final output grid cells.
A system according to yet another aspect of the present invention for solving the above technical problem is a system for classifying thrombi using a machine learning-based gradient echo (GRE) image, comprising: an image acquisition unit for acquiring a GRE image; a lesion detection unit for detecting a lesion region in the acquired GRE image using an artificial neural network model; a patch region setting unit for setting the detected lesion region as a patch region of a predetermined size and resetting the patch region through projection in three-dimensional directions; and a thrombus classification unit for classifying the thrombus in the lesion region including the patch region using the artificial neural network model.
In one embodiment, the patch region setting unit may compare the shapes of the lesion feature expressions appearing in the patch regions of the predetermined size reset through the three-dimensional projections, and the thrombus classification unit may classify the lesion region as either RED-CLOT or WHITE-CLOT according to the comparison result of the patch region setting unit.
In one embodiment, the thrombus classification unit may classify RED-CLOT and WHITE-CLOT according to an artificial neural network model trained in advance through recognition using a YOLO neural network in the patch region. Here, the YOLO neural network is a kind of object detection algorithm; after an algorithm detecting each of RED-CLOT and WHITE-CLOT is trained, a target vector of the training set is generated according to the final output grid cells.
In one embodiment, the system may further include an image generation unit for generating an image including projection information of either RED-CLOT or WHITE-CLOT based on the classification result of the thrombus classification unit.
When the above-described method and system for classifying thrombi using machine learning-based gradient echo (GRE) images are used, lesion region detection and the type of thrombus are provided automatically, which is convenient for the user, and the accuracy of diagnosis can be increased by projecting the lesion region reconstructed in three dimensions in various directions and analyzing the projections.
FIG. 1 is a block diagram of a thrombus classification system using a machine learning-based GRE image according to an embodiment of the present invention.
FIG. 2 is a block diagram of the data processing unit of the thrombus classification system according to an embodiment of the present invention.
FIG. 3 is a flowchart illustrating a thrombus classification method using a machine learning-based GRE image according to an embodiment of the present invention.
FIG. 4 shows examples of GRE images containing RED-CLOT and WHITE-CLOT according to an embodiment of the present invention.
FIG. 5 is an exemplary diagram of a convolutional neural network structure that can be employed in the system and method of the present embodiment.
Specific structural or functional descriptions of the embodiments according to the concept of the present invention disclosed in this specification are given only for the purpose of explaining those embodiments; the embodiments may be implemented in various forms and are not limited to those described herein.
Since the embodiments according to the concept of the present invention may be modified in various ways and may take various forms, the embodiments are illustrated in the drawings and described in detail herein. However, this is not intended to limit the embodiments to the specific forms disclosed, and the invention covers all modifications, equivalents, and substitutes falling within its spirit and technical scope.
The terms used in this specification are only used to describe specific embodiments and are not intended to limit the present invention. Singular expressions include plural expressions unless the context clearly indicates otherwise. In this specification, terms such as "include" or "have" are intended to indicate the presence of the features, numbers, steps, operations, components, parts, or combinations thereof described herein, and should not be understood to exclude in advance the presence or possible addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof.
The terms used in this specification are as follows.
T2-weighted imaging: refers to a technique of acquiring images from magnetic resonance imaging (MRI) with a specific pulse sequence, or an image acquired by this technique; the image mainly provides structural information about tissues inside the body.
FLAIR (fluid attenuated inversion recovery): a signal acquisition technique using a magnetic resonance device, or an image obtained by this technique, that suppresses the cerebrospinal fluid signal with long inversion and echo times, making it easier to detect lesions that are easily missed in T2-weighted images. FLAIR may be referred to as fluid attenuated inversion recovery.
DWI (diffusion weighted imaging): mainly refers to diffusion-weighted images acquired from magnetic resonance imaging, and provides information on whether and to what extent water molecules in tissue diffuse in a particular direction.
PWI (perfusion weighted imaging): refers to perfusion-weighted images (simply, perfusion images) acquired from magnetic resonance imaging, and shows the change in concentration of the injected contrast agent over time.
Penumbra: a partially shaded region in an image caused by an ischemic event or embolism, in which oxygen transport is locally reduced; the affected tissue may undergo hypoxic cell death or remain viable with appropriate treatment within a few hours.
ADC (apparent diffusion coefficient): the apparent diffusion coefficient obtained from magnetic resonance imaging, which provides information on factors that impede diffusion in body tissues.
Arterial phase (AP): a specific perfusion phase obtained from magnetic resonance imaging, indicating the time when the injected contrast agent passes through the arteries.
Capillary phase (CP): a specific perfusion phase obtained from magnetic resonance imaging, indicating the time when the injected contrast agent passes through the capillaries.
Venous phase (VP): a specific perfusion phase obtained from magnetic resonance imaging, indicating the time when the injected contrast agent passes through the veins.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
FIG. 1 is a block diagram of a thrombus classification system using a machine learning-based GRE image according to an embodiment of the present invention.
Referring to FIG. 1, the thrombus classification system using a machine learning-based GRE image according to this embodiment consists of a control unit 2, a storage unit 4, an image acquisition unit 6, a display unit 8, and a data processing unit 10. The thrombus classification system using GRE images can detect a lesion region in a GRE (gradient echo) image, automatically classify the thrombus, and provide an image including projection information according to the classification result.
The control unit 2 implements the method of detecting a lesion region and automatically classifying thrombi by executing a program or software module stored in the storage unit 4, and can control each component of the system.
The storage unit 4 may store a program or software module implementing the method of detecting a lesion region and automatically classifying thrombi. The storage unit 4 may also store GRE images transmitted from an external device.
In addition, the storage unit 4 may store programs or software modules for machine learning, deep learning, or artificial intelligence. The deep learning or artificial intelligence may have an architecture designed to increase accuracy. For example, the deep learning or artificial intelligence architecture may be implemented using a convolutional neural network (CNN) with a pooling structure, a deconvolution structure for upsampling, a skip connection structure for improving learning efficiency, and the like.
The image acquisition unit 6 may acquire a GRE image from an external device. The image acquisition unit 6 may be connected to a magnetic resonance imaging (MRI) device, an MRA device, a CT device, or the like, to obtain a 3D image of a patient.
The display unit 8 may output the data information stored in the storage unit 4, the image information acquired by the image acquisition unit 6, and the lesion region detection results, patch region setting results, thrombus classification results, and generated images processed by the data processing unit 10, in a visual form, an audible form, or a combination of the two. The display unit 8 may include a display device.
The data processing unit 10 may detect a lesion region in a GRE image using machine learning, set a patch region and classify a thrombus within the patch region, and generate an image including projection information based on the classification result.
FIG. 2 is a block diagram of the data processing unit of the thrombus classification system according to an embodiment of the present invention. Referring to FIG. 2, the data processing unit 10 consists of a lesion region extraction unit 100, a patch region setting unit 200, a thrombus classification unit 300, and an image generation unit 400.
The lesion region extraction unit 100 may detect a lesion region in the GRE image using an artificial neural network model. The lesion region extraction unit 100 may extract the lesion from the GRE image using any one of a 2D convolutional neural network (CNN), a 3D convolutional neural network, and a virtual 3D convolutional neural network. Specifically, the lesion region extraction unit 100 may extract the lesion region through a deep learning structure composed of CNN, pooling, deconvolution, and skip connection layers. That is, the lesion region is detected by an artificial neural network model trained on the GRE image signal using annotations and the class activation mapping (CAM) method.
The patch region setting unit 200 may set the detected lesion region as a patch region of a predetermined size. In addition, the patch region setting unit 200 may reset the patch region through projections in three-dimensional directions.
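The resetting of the patch region through projections in three-dimensional directions is not spelled out in detail here; one plausible reading is sketched below, in which a fixed-size cube is cut out of the GRE volume around the detected lesion and collapsed by maximum-intensity projection along several viewing directions. The patch size and the three axis-aligned directions are assumptions made purely for illustration.

```python
import numpy as np

def extract_patch(volume, center, size=32):
    """Cut a fixed-size cubic patch around a lesion center (z, y, x).
    For simplicity the center is assumed to lie at least size//2 voxels
    away from every border of the volume."""
    half = size // 2
    z, y, x = center
    return volume[z - half:z + half, y - half:y + half, x - half:x + half]

def directional_projections(patch):
    """Collapse the 3D patch by maximum-intensity projection along each of
    the three principal directions; these 2D views could then be compared
    or classified (other directions would require rotation/resampling)."""
    return {
        "axial":    patch.max(axis=0),   # project along z
        "coronal":  patch.max(axis=1),   # project along y
        "sagittal": patch.max(axis=2),   # project along x
    }

# Usage on a dummy GRE volume (values are placeholders):
gre_volume = np.random.rand(64, 256, 256).astype(np.float32)
patch = extract_patch(gre_volume, center=(32, 128, 128), size=32)
views = directional_projections(patch)
print({k: v.shape for k, v in views.items()})   # each view is (32, 32)
```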
The thrombus classification unit 300 may classify the thrombus in the patch region using an artificial neural network model. Here, the thrombus may be classified as either RED-CLOT or WHITE-CLOT. The thrombus classification unit 300 may perform the classification through recognition using a YOLO neural network in the patch region. That is, the thrombus classification unit may classify RED-CLOT and WHITE-CLOT according to an artificial neural network model trained in advance through recognition using the YOLO neural network in the patch region.
The YOLO neural network is a kind of object detection algorithm; after algorithms detecting each of RED-CLOT and WHITE-CLOT are trained, a target vector of the training set can be generated according to the final output grid cells. The size of the target vector can be given by the product of the grid height, the grid width, the number of anchor boxes, and the length of the per-box vector. The resulting vector may include whether an object is present, the coordinates (x, y) of the box center, the height and width of the bounding box, and the class probabilities. When classification is performed with the YOLO neural network, if training has gone well, the probability that RED-CLOT or WHITE-CLOT is present will be close to 1 when a classification target exists, and the center coordinates, bounding box values, and class probabilities for the corresponding grid cell can be output. Non-maximum suppression may then be applied to all grid cells.
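A sketch of how such a YOLO-style output grid might be decoded, including the non-maximum suppression step, is given below. It assumes one anchor box per grid cell and a per-cell vector laid out as [objectness, x, y, w, h, p(RED-CLOT), p(WHITE-CLOT)]; the grid size, layout, and thresholds are illustrative assumptions rather than the exact format used here.

```python
import numpy as np

def decode_yolo_grid(grid, obj_thresh=0.5):
    """grid: (S, S, 7) array; per cell = [objectness, x, y, w, h, p_red, p_white].
    Returns candidate boxes as (score, cx, cy, w, h, class_id)."""
    boxes = []
    S = grid.shape[0]
    for row in range(S):
        for col in range(S):
            obj, x, y, w, h, p_red, p_white = grid[row, col]
            if obj < obj_thresh:
                continue
            cx, cy = (col + x) / S, (row + y) / S     # cell-relative -> image-relative
            cls = 0 if p_red >= p_white else 1        # 0: RED-CLOT, 1: WHITE-CLOT
            boxes.append((obj * max(p_red, p_white), cx, cy, w, h, cls))
    return boxes

def iou(a, b):
    """Intersection-over-union of two (cx, cy, w, h) boxes."""
    ax1, ay1, ax2, ay2 = a[0]-a[2]/2, a[1]-a[3]/2, a[0]+a[2]/2, a[1]+a[3]/2
    bx1, by1, bx2, by2 = b[0]-b[2]/2, b[1]-b[3]/2, b[0]+b[2]/2, b[1]+b[3]/2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2]*a[3] + b[2]*b[3] - inter
    return inter / union if union > 0 else 0.0

def non_max_suppression(boxes, iou_thresh=0.5):
    """Keep the highest-scoring box and drop overlapping lower-scoring ones."""
    boxes = sorted(boxes, key=lambda b: b[0], reverse=True)
    kept = []
    for b in boxes:
        if all(iou(b[1:5], k[1:5]) < iou_thresh for k in kept):
            kept.append(b)
    return kept

# Usage on a random dummy 7x7 output grid:
detections = non_max_suppression(decode_yolo_grid(np.random.rand(7, 7, 7)))
```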
A thrombus (CLOT) is a solid mass of coagulated blood, or the process by which such a mass forms. It can block blood vessels and reduce blood flow, and can be classified into WHITE-CLOT, in which the platelet component is predominant, and RED-CLOT, in which the red blood cell component is predominant. WHITE-CLOT can be treated with stenting, a non-surgical treatment, whereas RED-CLOT is not amenable to non-surgical treatment, so a method of distinguishing WHITE-CLOT from RED-CLOT is important.
The image generation unit 400 may visualize the projection information of the patch region extracted by the lesion region extraction unit and the thrombus classification unit and generate a 3D image. According to an embodiment, an image including the projection information of RED-CLOT may be generated in a W shape, but the invention is not limited thereto.
FIG. 3 is a flowchart illustrating a thrombus classification method using a machine learning-based GRE image according to an embodiment of the present invention.
Referring to FIG. 3, the image acquisition unit acquires a gradient echo (GRE) image (S310). The GRE image is an image measured by converting the magnetization component of a 3D magnetic resonance image (MRI) into a signal. Thereafter, the lesion region detection unit detects the lesion region in the GRE image using an artificial neural network model (S320). Here, the artificial neural network model may be at least one of a 2D convolutional neural network (CNN), a 3D convolutional neural network, and a virtual 3D convolutional neural network.
The patch region setting unit sets the detected lesion region as a patch region of a predetermined size (S330). At this time, the patch region may be set to a size already specified by the user. Subsequently, the patch region setting unit resets the patch region through projections in various three-dimensional directions (S340).
The thrombus classification unit classifies the patch region as either RED-CLOT or WHITE-CLOT using the artificial neural network model (S350). RED-CLOT and WHITE-CLOT can be classified according to a pre-trained artificial neural network model. The thrombus classification unit may perform the classification through recognition using a YOLO neural network in the patch region.
The image generation unit generates an image including projection information of either RED-CLOT or WHITE-CLOT based on the classification result (S360).
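Taken together, steps S310 to S360 can be outlined roughly as follows. Apart from extract_patch and directional_projections sketched earlier, every helper name below (load_gre_volume, detect_lesion_regions, classify_clot, render_projection_image) is a hypothetical placeholder standing in for the corresponding unit, not a function defined in this document.

```python
def classify_thrombi(dicom_dir):
    """Sketch of the S310-S360 flow; every helper below is a hypothetical stub."""
    volume = load_gre_volume(dicom_dir)                 # S310: acquire GRE image
    lesions = detect_lesion_regions(volume)             # S320: ANN lesion detection
    results = []
    for center in lesions:
        patch = extract_patch(volume, center, size=32)  # S330: fixed-size patch
        views = directional_projections(patch)          # S340: reset via 3D projections
        label = classify_clot(views)                    # S350: RED-CLOT / WHITE-CLOT
        results.append((center, label))
    return render_projection_image(volume, results)     # S360: output image
```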
FIG. 4 shows examples of GRE images containing RED-CLOT and WHITE-CLOT according to an embodiment of the present invention. FIG. 5 is an exemplary diagram of a convolutional neural network structure that can be employed in the system and method of the present embodiment.
FIG. 4(a) is a GRE image in which WHITE-CLOT is found, and FIG. 4(b) is a GRE image in which RED-CLOT is found; the artificial neural network model can be trained using such images as training data. That is, the model can be trained in advance by a learning module composed of a CNN, a pooling structure for aggregating lesion information, a deconvolution structure for upsampling, and a skip connection structure for making the learning proceed smoothly.
As an example, the deep learning architecture may have a form that includes a convolution network, a deconvolution network, and shortcuts. As shown in FIG. 5, in order to extract local features of the medical image X, the deep learning architecture stacks 3x3 color convolution layers and activation layers (ReLU), applies a 2x2 filter with stride 1 to connect to the next lower depth level, and repeats this convolution block operation four times; it then applies a 2x2 deconvolution layer and an activation layer (ReLU) to connect to the next higher depth level, stacks 3x3 color convolution layers and activation layers, and repeats this deconvolution block operation four times. At each level, the feature map of the corresponding level of the convolution network is copied and concatenated to the image of the same level of the deconvolution network, and a convolution operation is then performed in each block.
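The encoder/decoder layout with copy-and-concatenate shortcuts described above is essentially a U-Net-style structure. The sketch below shows the idea with only two depth levels and small channel counts for readability; it is a simplified stand-in under those assumptions, not the exact network of FIG. 5.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions, each followed by ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(),
    )

class MiniUNet(nn.Module):
    """Two-level encoder/decoder with a copy-and-concatenate skip connection."""
    def __init__(self, in_ch=1, out_ch=2):
        super().__init__()
        self.enc1 = conv_block(in_ch, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)                        # go to the next lower depth level
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)  # 2x2 deconvolution (upsampling)
        self.dec1 = conv_block(32, 16)                     # 16 (upsampled) + 16 (skip) channels
        self.head = nn.Conv2d(16, out_ch, 1)               # per-pixel lesion scores

    def forward(self, x):
        e1 = self.enc1(x)                             # high-resolution features
        e2 = self.enc2(self.pool(e1))                 # lower depth level
        d1 = self.up(e2)                              # back to the higher level
        d1 = self.dec1(torch.cat([d1, e1], dim=1))    # copy-and-concatenate shortcut
        return self.head(d1)

# Usage on a dummy 1-channel GRE slice:
net = MiniUNet()
out = net(torch.randn(1, 1, 128, 128))    # -> (1, 2, 128, 128) lesion score map
```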
컨볼루션 네트워크와 디컨볼루션 네트워크 내 컨볼루션 블록은 conv-ReLU-conv 레이어들의 조합으로 구현될 수 있다. 그리고, 딥러닝 아키텍처의 출력은 컨볼루션 네트워크이나 디컨볼루션 네트워크에 연결되는 분류기를 통해 얻어질 수 있으나, 이에 한정되지는 않는다. 분류기는 FCN(fully connectivity network) 기법을 이용하여 영상에서 국소적인 특징을 추출하는데 이용될 수 있다.The convolution block in the convolution network and the deconvolution network may be implemented by a combination of conv-ReLU-conv layers. Further, the output of the deep learning architecture may be obtained through a classifier connected to a convolutional network or a deconvolutional network, but is not limited thereto. The classifier may be used to extract local features from an image using a fully connectivity network (FCN) technique.
또한, 딥러닝 아키텍처는 구현에 따라서 컨볼루션 블록 내에 인셉션 모듈(inseption module) 또는 멀티 필터 경로(multi filter pathway)를 추가로 사용하도록 구현될 수 있다. 인셉션 모듈 또는 멀티 필터 경로 내 서로 다른 필터는 1x1 필터를 포함할 수 있다.Further, the deep learning architecture may be implemented to additionally use an insulation module or a multi filter pathway in a convolution block depending on the implementation. Different filters in the inception module or multi-filter path may include 1x1 filters.
참고로, 딥러닝 아키텍처에서 입력(input) 이미지가 가로 32, 세로 32, 그리고 RGB 채널을 가지는 경우, 타겟 벡터에 대응하는 의료 영상에 대응하는 입력 이미지(X)의 크기는 [32x32x3]일 수 있다. 이러한 크기는, YOLO 알고리즘에 적용했을 때, 기재된 순서대로 높이, 넓이 그리고 앵커박스 개수와 클래스와의 곱에 각각 대응될 수 있다. 여기서 [32x32x3]의 마지막 3은 예를 들어 소정의 값(예컨대, 0)에 클래스(예컨대, 3)를 더한 값에 액커박스의 개수(예컨대, 1)을 곱한 값 즉(3)이 될 수 있다.For reference, when the input image in the deep learning architecture has 32 horizontal, 32 vertical, and RGB channels, the size of the input image X corresponding to the medical image corresponding to the target vector may be [32x32x3]. . When applied to the YOLO algorithm, these sizes may correspond to the product of height, width, and number of anchor boxes and classes, respectively, in the order described. Here, the last 3 of [32x32x3] may be, for example, a value obtained by adding a class (eg, 3) to a predetermined value (eg, 0) multiplied by the number of actor boxes (eg, 1), that is, (3). .
딥러닝 아키텍처의 CNN(convloultional neural network)에서 콘볼루션(convolutional, CONV) 레이어는 입력 이미지의 일부 영역과 연결되며, 이 연결된 영역과 자신의 가중치의 내적 연산(dot product)을 계산하도록 설계될 수 있다.In the deep learning architecture's convolutional neural network (CNN), the convolutional (CONV) layer is connected to some areas of the input image, and can be designed to calculate the dot product of the connected areas and their weights. .
여기서, ReLU(rectified linear unit) 레이어는 max(0,x)와 같이 각 요소에 적용되는 액티베이션 함수(activation function)이다. ReLU 레이어는 볼륨의 크기를 변화시키지 않는다. POOLING 레이어는 (가로, 세로)로 표현되는 차원에 대해 다운샘플링(downsampling) 또는 서브샘블링(subsampling)을 수행하여 감소된 볼륨을 출력할 수 있다.Here, the ReLU (rectified linear unit) layer is an activation function applied to each element, such as max(0,x). The ReLU layer does not change the size of the volume. The POOLING layer may output a reduced volume by performing downsampling or subsampling on a dimension represented by (horizontal, vertical).
The fully-connected (FC) layer computes class scores and may output a volume of size [1x1x10], for example; in that case, the 10 numbers correspond to the class scores for 10 categories. The fully-connected layer is connected to every element of the previous volume. Some of these layers have parameters and some do not: the CONV/FC layers compute functions not only of the input volume but also of weights and biases, whereas the ReLU/POOLING layers are fixed functions. The parameters of the CONV/FC layers may be trained by gradient descent so that the class score computed for each image matches the label of that image.
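A minimal sketch of this CONV, ReLU, POOL, FC pipeline, producing 10 class scores for a [32x32x3] input, is shown below assuming PyTorch; the layer widths are illustrative assumptions and the network is untrained.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # CONV: dot products over local regions
    nn.ReLU(),                                   # ReLU: max(0, x), volume size unchanged
    nn.MaxPool2d(2),                             # POOL: spatial downsampling 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 10),                 # FC: connected to every element, 10 class scores
)

x = torch.randn(1, 3, 32, 32)                    # one [32x32x3] input image
scores = model(x)
print(scores.shape)                              # torch.Size([1, 10]), i.e. a [1x1x10] volume

# Only the CONV/FC layers carry parameters; they would be fit by gradient descent
# against the image labels (e.g., with nn.CrossEntropyLoss and torch.optim.SGD).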
The present invention has been described with reference to the embodiments shown in the drawings, but these embodiments are merely illustrative, and those of ordinary skill in the art will understand that various modifications and other equivalent embodiments are possible. Therefore, the true scope of technical protection of the present invention should be determined by the technical spirit of the appended claims.

Claims (5)

  1. A method for classifying a thrombus using a machine learning-based GRE image, the method comprising:
    (a) acquiring, by an image acquisition unit, a gradient echo (GRE) image;
    (b) detecting, by a lesion detection unit, a lesion region in the acquired GRE image using an artificial neural network model;
    (c) setting, by a patch region setting unit, the detected lesion region as a patch region of a predetermined size, and resetting the patch region through projection in three-dimensional directions;
    (d) classifying, by a thrombus classification unit, a thrombus in the lesion region including the patch region using an artificial neural network model; and
    (e) generating, by an image generation unit, an image including projection information of either RED-CLOT or WHITE-CLOT based on the classification result,
    wherein, in step (c), the patch region setting unit compares the shapes of the lesion feature representations appearing in the patch regions of the predetermined size reset through the projection in the three-dimensional directions, and
    wherein, in step (d), the thrombus classification unit classifies the lesion region as either RED-CLOT or WHITE-CLOT according to the comparison result of the patch region setting unit.
  2. The method according to claim 1,
    wherein the classifying step classifies RED-CLOT and WHITE-CLOT in the patch region according to an artificial neural network model trained in advance through recognition using a YOLO neural network, the YOLO neural network being a type of object detection algorithm in which an algorithm for detecting each of RED-CLOT and WHITE-CLOT is trained and a target vector of the training set is then generated to match the final output grid cells.
  3. A system for classifying a thrombus using a machine learning-based gradient echo (GRE) image, the system comprising:
    an image acquisition unit configured to acquire a GRE image;
    a lesion detection unit configured to detect a lesion region in the acquired GRE image using an artificial neural network model;
    a patch region setting unit configured to set the detected lesion region as a patch region of a predetermined size and to reset the patch region through projection in three-dimensional directions; and
    a thrombus classification unit configured to classify a thrombus in the lesion region including the patch region using an artificial neural network model,
    wherein the patch region setting unit compares the shapes of the lesion feature representations appearing in the patch regions of the predetermined size reset through the projection in the three-dimensional directions, and
    wherein the thrombus classification unit classifies the lesion region as either RED-CLOT or WHITE-CLOT according to the comparison result of the patch region setting unit.
  4. The system according to claim 3,
    wherein the thrombus classification unit classifies RED-CLOT and WHITE-CLOT in the patch region according to an artificial neural network model trained in advance through recognition using a YOLO neural network, the YOLO neural network being a type of object detection algorithm in which an algorithm for detecting each of RED-CLOT and WHITE-CLOT is trained and a target vector of the training set is then generated to match the final output grid cells.
  5. The system according to claim 3,
    further comprising an image generation unit configured to generate an image including projection information of either RED-CLOT or WHITE-CLOT based on the classification result of the thrombus classification unit.
PCT/KR2019/018431 2018-12-24 2019-12-24 Machine learning-based method and system for classifying thrombi using gre image WO2020138932A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2021537199A JP2022515465A (en) 2018-12-24 2019-12-24 Thrombus classification method and system using GRE images based on machine learning

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2018-0168341 2018-12-24
KR1020180168341A KR102056989B1 (en) 2018-12-24 2018-12-24 Method and system for classifying blood clot in gradient echo images based on machine learning

Publications (1)

Publication Number Publication Date
WO2020138932A1 true WO2020138932A1 (en) 2020-07-02

Family

ID=69568514

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/018431 WO2020138932A1 (en) 2018-12-24 2019-12-24 Machine learning-based method and system for classifying thrombi using gre image

Country Status (3)

Country Link
JP (1) JP2022515465A (en)
KR (1) KR102056989B1 (en)
WO (1) WO2020138932A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210147155A (en) * 2020-05-27 2021-12-07 현대모비스 주식회사 Apparatus of daignosing noise quality of motor
KR102336058B1 (en) 2020-07-14 2021-12-07 주식회사 휴런 Device and Method for Detecting Cerebral Microbleeds Using Magnetic Resonance Images

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120022360A1 (en) * 2008-03-28 2012-01-26 Volcano Corporation Methods for intravascular imaging and flushing
US10163040B2 (en) * 2016-07-21 2018-12-25 Toshiba Medical Systems Corporation Classification method and apparatus
KR101740464B1 (en) * 2016-10-20 2017-06-08 (주)제이엘케이인스펙션 Method and system for diagnosis and prognosis of stroke and systme therefor
CN109937012B (en) * 2016-11-10 2023-02-17 皇家飞利浦有限公司 Selecting acquisition parameters for an imaging system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120056312A (en) * 2010-11-01 2012-06-04 전남대학교산학협력단 System for detecting pulmonary embolism and method therefor
KR20150056866A (en) * 2012-10-19 2015-05-27 하트플로우, 인크. Systems and methods for numerically evaluating vasculature
KR20170096088A (en) * 2016-02-15 2017-08-23 삼성전자주식회사 Image processing apparatus, image processing method thereof and recording medium
KR20180021635A (en) * 2016-08-22 2018-03-05 한국과학기술원 Method and system for analyzing feature representation of lesions with depth directional long-term recurrent learning in 3d medical images
KR20180040287A (en) * 2016-10-12 2018-04-20 (주)헬스허브 System for interpreting medical images through machine learnings

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112863649A (en) * 2020-12-31 2021-05-28 四川大学华西医院 System and method for outputting intravitreal tumor image result
CN112767350A (en) * 2021-01-19 2021-05-07 深圳麦科田生物医疗技术股份有限公司 Method, device, equipment and storage medium for predicting maximum interval of thromboelastogram
CN112767350B (en) * 2021-01-19 2024-04-26 深圳麦科田生物医疗技术股份有限公司 Method, device, equipment and storage medium for predicting maximum section of thromboelastography
CN112754511A (en) * 2021-01-20 2021-05-07 武汉大学 CT image intracranial thrombus detection and property classification method based on deep learning

Also Published As

Publication number Publication date
KR102056989B1 (en) 2020-02-11
JP2022515465A (en) 2022-02-18
KR102056989B9 (en) 2020-02-11

Similar Documents

Publication Publication Date Title
WO2020138932A1 (en) Machine learning-based method and system for classifying thrombi using gre image
Weng et al. INet: convolutional networks for biomedical image segmentation
US20210272681A1 (en) Image recognition model training method and apparatus, and image recognition method, apparatus, and system
Chen et al. Self-supervised learning for medical image analysis using image context restoration
WO2020138925A1 (en) Artificial intelligence-based method and system for classification of blood flow section
KR101992057B1 (en) Method and system for diagnosing brain diseases using vascular projection images
Saha et al. Topomorphologic separation of fused isointensity objects via multiscale opening: Separating arteries and veins in 3-D pulmonary CT
Niemann et al. A knowledge based system for analysis of gated blood pool studies
WO2020050635A1 (en) Method and system for automatically segmenting blood vessels in medical image by using machine learning and image processing algorithm
CN110036408B (en) Automatic ct detection and visualization of active bleeding and blood extravasation
WO2019132589A1 (en) Image processing device and method for detecting multiple objects
CN106777953A (en) The analysis method and system of medical image data
CN109815919A (en) A kind of people counting method, network, system and electronic equipment
Yamamoto et al. Image processing for computer‐aided diagnosis of lung cancer by CT (LSCT)
WO2017135635A1 (en) Method for analyzing blood flow by using medical image
WO2019231104A1 (en) Method for classifying images by using deep neural network and apparatus using same
KR102020157B1 (en) Method and system for detecting lesion and diagnosing lesion status based on fluid attenuation inverted recovery
CN109461163A (en) A kind of edge detection extraction algorithm for magnetic resonance standard water mould
CN103945755A (en) Image processing device, image processing method, and image processing program
WO2019098415A1 (en) Method for determining whether subject has developed cervical cancer, and device using same
KR102015223B1 (en) Method and apparatus for diagnosing brain diseases using 3d magnetic resonance imaging and 2d magnetic resonance angiography
Sofian et al. Calcification detection using convolutional neural network architectures in intravascular ultrasound images
CN115115575A (en) Image detection method and device, computer equipment and storage medium
KR102336003B1 (en) Apparatus and method for increasing learning data using patch matching
WO2020139011A1 (en) Brain lesion information provision device and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19903029

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021537199

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01/10/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19903029

Country of ref document: EP

Kind code of ref document: A1