WO2018155765A1 - Method and device analyzing plaque from computed tomography image - Google Patents

Method and device analyzing plaque from computed tomography image Download PDF

Info

Publication number
WO2018155765A1
WO2018155765A1 (PCT/KR2017/005764)
Authority
WO
WIPO (PCT)
Prior art keywords
image
coronary
plaque
images
channel data
Prior art date
Application number
PCT/KR2017/005764
Other languages
French (fr)
Korean (ko)
Inventor
장혁재
홍영택
Original Assignee
연세대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 연세대학교 산학협력단
Publication of WO2018155765A1 publication Critical patent/WO2018155765A1/en

Links

Images

Classifications

    • A61B 6/5211: Devices using data or image processing specially adapted for radiation diagnosis, involving processing of medical diagnostic data
    • A61B 6/032: Transmission computed tomography [CT]
    • A61B 6/504: Clinical applications involving diagnosis of blood vessels, e.g. by angiography
    • G06N 3/02: Neural networks (computing arrangements based on biological models)
    • G06T 7/0012: Biomedical image inspection
    • G16H 50/30: ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for calculating health indices or for individual health risk assessment

Definitions

  • the present invention relates to a method and apparatus for analyzing plaque in a computed tomography image. More specifically, it relates to a plaque analysis method, and an image processing apparatus, that use deep learning techniques to automatically generate masks of the inner and outer walls of the coronary arteries from computed tomography images, thereby enabling easier and more accurate analysis of plaque in the coronary arteries.
  • Coronary artery disease, such as stenosis (a disease in which blood vessels become abnormally narrowed), can produce coronary plaque in the blood vessels that supply blood to the heart. Coronary plaque may limit blood flow to the heart, and patients suffering from coronary artery disease may experience chest pain, such as unstable angina at rest or chronic stable angina during vigorous physical exercise.
  • Non-invasive tests may include electrocardiogram recording, biomarker evaluation from blood tests, treadmill tests, single photon emission computed tomography (SPECT), and positron emission tomography (PET).
  • Anatomical data can be obtained non-invasively using coronary computed tomography angiography (CCTA).
  • CCTA can be used for imaging of patients with chest pain and involves the use of computed tomography (CT) technology to image the heart and coronary arteries following intravenous infusion of contrast agents.
  • CT computed tomography
  • CCTA has become a reliable test method for the diagnosis of anatomically obstructed coronary artery disease, and CCTA can use analytical software to automatically or semi-automatically analyze atherosclerotic plaques in CCTA images.
  • the present invention has been made to solve the above-mentioned problems. It is an object of the present invention to provide a method, and an image processing apparatus, that use a new CNN structure for image learning and a new medical-image format suited to that structure to automatically generate masks of the inner and outer walls of the coronary arteries, thereby enabling easier and more accurate analysis of plaque in the coronary arteries.
  • a method for analyzing plaque in a medical image includes: receiving a medical image including a computed tomography (CT) image; generating n-channel data by adjusting the window width (WW) and window level (WL) of the CT image, where n is a natural number of two or more; orthogonally reconstructing images from each of the n-channel data into an axial (horizontal) image, a sagittal image, and a coronal image; machine-learning the reconstructed images based on a convolutional neural network (CNN); and generating a mask image from an image including at least one of the machine-learned orthogonally reconstructed images and acquiring a cross-sectional image of the coronary vessel based on the generated mask images. These steps are performed individually or integrally for the coronary inner wall and the coronary outer wall, respectively.
  • the method may further include analyzing a plaque in the coronary artery using the obtained cross-sectional image of the coronary inner wall and the acquired cross-sectional image of the coronary outer wall.
  • the n-channel data are generated by adjusting the window width (WW) and window level (WL) of the CT image: WW1 and WL1 for observation of the coronary lumen, WW2 and WL2 for calcium analysis, and WW3 and WL3 for lipid plaques, where n is 3.
  • CNN convolutional neural network
  • BCN brief convolutional network
  • pre-training may be performed using an auto-encoder in the first of the two brief convolutional networks (BCNs).
  • an image processing apparatus for solving the above technical problem includes: an image receiver configured to receive a CT image; an image processor configured to process the cardiac image received by the image receiver; a display configured to display at least one of a coronary inner-wall image, an outer-wall image, and a plaque image output from the image processor; and a controller configured to control the image receiver, the image processor, and the display.
  • the image processor may include: an n-channel data generation unit configured to generate n-channel data by adjusting the window width (WW) and window level (WL) of the CT image, where n is a natural number of two or more; an image reconstruction unit configured to orthogonally reconstruct images from each of the n-channel data into an axial (horizontal) plane image, a sagittal plane image, and a coronal plane image; a machine learning unit configured to perform machine learning on the reconstructed images based on a convolutional neural network (CNN); and an image acquisition unit configured to generate a mask image from each image including any one of the machine-learned orthogonally reconstructed images and to acquire a cross-sectional image of the coronary vessel based on the generated mask images.
  • the image processor may further include a plaque analysis unit configured to analyze plaque in the coronary artery using the acquired cross-sectional image of the coronary inner wall and the acquired cross-sectional image of the coronary outer wall.
  • the n-channel data generation unit generates the n-channel data by setting WW1 and WL1 for coronary lumen observation, WW2 and WL2 for calcium analysis, and WW3 and WL3 for lipid plaques, where n is 3.
  • pre-training may be performed using an auto-encoder in the first of the two brief convolutional networks (BCNs).
  • according to the method for analyzing plaque in a computed tomography (CT) image and the image processing apparatus according to an embodiment of the present invention, coronary plaque in a CT image can be analyzed more easily and accurately using deep learning.
  • FIG. 1 is an exemplary diagram for describing a format of a general DICOM standard medical image.
  • FIG. 2 is a flowchart of a method S200 for analyzing plaque in a computed tomography (CT) image according to an embodiment of the present invention.
  • FIG. 3 is an exemplary diagram for describing a process of generating 3-channel data according to an embodiment of the present invention.
  • FIG. 4 is an exemplary diagram for describing a process of reconstructing three-channel data generated in FIG. 3 into horizontal, coronal, and sagittal images according to an embodiment of the present invention.
  • FIG. 5A is a schematic block diagram of a brief convolutional network (BCN) according to an embodiment of the present invention.
  • FIG. 5B is a schematic block diagram for explaining machine learning logic according to an embodiment of the present invention.
  • FIG. 6 is a flowchart schematically illustrating the overall flow of a method for analyzing plaque in a computed tomography (CT) image according to an embodiment of the present invention.
  • FIG. 7 is a schematic block diagram of an image processing apparatus 700 configured to perform a method for analyzing plaque in a computed tomography (CT) image according to an embodiment of the present invention.
  • when a part of the present specification is said to "include" a component, this means that it may further include other components rather than excluding them, unless otherwise stated.
  • the terms "...unit", "module", etc. described in the specification mean a unit for processing at least one function or operation, which may be implemented in hardware, in software, or in a combination of hardware and software.
  • FIG. 1 is an exemplary diagram for describing a format of a general DICOM standard medical image.
  • DICOM Digital Imaging and Communications in Medicine
  • ACR American College of Radiology
  • NEMA National Electrical Manufacturers Association
  • FIG. 1 schematically illustrates the format of such DICOM standard medical images. As shown, DICOM images have a size of 2 bytes, i.e. 16 bits, per pixel, and to view them the user directly adjusts the window width (WW) and window level (WL) controls.
  • WW window width
  • WL window level
  • CT images can represent different shades of gray using CT numbers, the relative linear attenuation coefficients of pixels set with reference to water, bone, and air.
  • the window width (WW) means the range of CT numbers that can be expressed in gray scale, i.e. the various levels between black and white, and the window level (WL) means the median value of that gray scale.
  • for example, when the window width (WW) of an abdominal image is set to +300 and the window level (WL) to 0, the range of Hounsfield units (HU) appearing in the image is -150 to +150. A material with an attenuation below -150 appears black, a material above +150 appears white, and a material with an HU between -149 and +149 is represented by a gray level between black and white.
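As a concrete illustration of the windowing described above, the following sketch maps Hounsfield units to an 8-bit grayscale image. The function name and the linear in-between mapping are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def apply_window(hu_image, ww, wl):
    """Map Hounsfield units to 8-bit grayscale.

    Pixels below WL - WW/2 become black (0), pixels above WL + WW/2
    become white (255), and values in between are scaled linearly.
    """
    lo = wl - ww / 2.0
    hi = wl + ww / 2.0
    out = (np.clip(hu_image, lo, hi) - lo) / (hi - lo) * 255.0
    return out.astype(np.uint8)

# Abdominal example from the text: WW = +300, WL = 0 -> visible range -150..+150 HU
img = np.array([[-300.0, -150.0, 0.0, 150.0, 300.0]])
print(apply_window(img, ww=300, wl=0))  # values: 0, 0, 127, 255, 255
```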
  • FIG. 2 is a flowchart of a method S200 for analyzing plaque in a computed tomography (CT) image according to an embodiment of the present invention.
  • the method (S200) for analyzing plaque in a computed tomography (CT) image includes: receiving a medical image including a CT image (S210); generating n-channel data by adjusting the window width (WW) and window level (WL) of the CT image (S220), where n is a natural number of two or more; orthogonally reconstructing images from each of the n-channel data into an axial (horizontal) image, a sagittal image, and a coronal image (S230); machine-learning the reconstructed images based on a convolutional neural network (CNN) (S240); generating a mask image from each image including any one of the machine-learned orthogonally reconstructed images and acquiring a cross-sectional image of the coronary vessel based on the generated mask images (S250); and analyzing the plaque in the coronary artery using the acquired cross-sectional images of the coronary inner wall and the coronary outer wall (S260).
  • the steps S210 to S260 shown in FIG. 2 are merely an example for easy understanding of the present invention; it will be apparent that additional steps not shown in FIG. 2 may also be performed.
  • S210 corresponds to receiving a CT image, such as a coronary computed tomography angiography (CCTA) image.
  • CT is a medical imaging method that uses tomographic images produced by computer processing; here, the CCTA image is any coronary-artery CT image that can be obtained with any computed tomography (CT) equipment, device, or instrument.
  • S220 corresponds to generating n-channel data by adjusting the window width (WW) and window level (WL) of the received CT image.
  • here, n is a natural number of two or more; in the following description, three-channel data is used as an example for ease of understanding, but it will be apparent that other numbers of channels may additionally or alternatively be implemented.
  • the three-channel data according to an embodiment of the present invention consist of three data sets, each obtained by adjusting the window width (WW) and window level (WL) of the CT image: data for coronary lumen observation, data for calcium analysis, and data for lipid plaques.
  • the image for coronary lumen observation can be set to window width WW1 and window level WL1, the image for calcium analysis to WW2 and WL2, and the image for lipid plaques to WW3 and WL3.
  • FIG. 3 is an exemplary diagram for describing a process of generating 3-channel data according to an embodiment of the present invention.
  • FIG. 3 exemplarily shows an image of a coronary lumen.
  • three channel data (ch1, ch2, and ch3) can be generated by setting the window width / window level of the same coronary artery lumen image to WW1/WL1, WW2/WL2, and WW3/WL3, respectively.
  • the three-channel data ch1, ch2, and ch3 thus generated can be rescaled to a [0, 255] scale; for this, the different crop windows [-150 HU, 590 HU], [-200 HU, 1300 HU], and [-150 HU, 350 HU] can be used.
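The channel construction above can be sketched as follows. The three crop windows are those given in the text; the function name and the use of NumPy are assumptions for illustration.

```python
import numpy as np

# Crop windows from the text, as (low HU, high HU) per channel:
# ch1 (lumen observation), ch2 (calcium analysis), ch3 (lipid plaque).
CROP_WINDOWS = [(-150, 590), (-200, 1300), (-150, 350)]

def to_three_channel(hu_image):
    """Rescale one HU image into 3-channel data on a [0, 255] scale."""
    channels = []
    for lo, hi in CROP_WINDOWS:
        ch = (np.clip(hu_image, lo, hi) - lo) / (hi - lo) * 255.0
        channels.append(ch.astype(np.uint8))
    return np.stack(channels, axis=-1)  # shape (H, W, 3)

out = to_three_channel(np.full((2, 2), 590.0))
print(out.shape)  # (2, 2, 3)
```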
  • the reason for constructing such three-channel data is to simulate the technique used in conventional CT image analysis, for which the above-described WW1/WL1, WW2/WL2, and WW3/WL3 settings are presented.
  • the present invention therefore attaches importance to having the deep learning model learn in a simulation of the actual working environment.
  • the purpose of the new three-channel data according to the present invention is to simulate the process by which a human expert, when analyzing plaque by hand, adjusts the width and level of the data display window to suit the characteristics of each target.
  • S230 corresponds to orthogonally reconstructing images from each of the three-channel data generated in this manner into an axial (horizontal) image, a coronal image, and a sagittal image.
  • the present invention is characterized in that, in addition to the horizontal image, a coronal image and a sagittal image are additionally generated by rotating the image and reconstructing it orthogonally, and all of these are used for image machine learning.
  • FIG. 4 is an exemplary diagram for describing a process of reconstructing three-channel data generated in FIG. 3 into horizontal, coronal, and sagittal images according to an embodiment of the present invention. As shown in Fig. 4, each of the three-channel data ch1, ch2 and ch3 is reconstructed into a horizontal plane image, a coronal plane image and a sagittal plane image.
  • the size of the horizontal plane image is fixed to 64 × 64, but since vessel length varies from person to person and with the degree of disease, a 64 × 64 × 64 cube can be defined as the sampling unit, and any remaining part may be excluded from the learning set.
  • FIG. 4 exemplarily shows a 'horizontal plane image', 'coronal plane image', and 'sagittal plane image' for easy understanding of various embodiments of the present invention; it will be apparent to those skilled in the art that other reconstructions can also be used.
  • in summary, 3-channel data are generated by adjusting the window width (WW) and window level (WL) of the CT image (S220), and horizontal, sagittal, and coronal images are reconstructed for each of the generated 3-channel data (S230).
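A minimal sketch of the orthogonal reconstruction step: given a (Z, Y, X) CT volume, the three mutually perpendicular slices are simple index selections. The axis order and function name are assumptions for illustration.

```python
import numpy as np

def orthogonal_planes(volume, z, y, x):
    """Extract axial (horizontal), coronal and sagittal slices from a
    (Z, Y, X) CT volume at the given voxel indices."""
    axial    = volume[z, :, :]   # horizontal plane
    coronal  = volume[:, y, :]
    sagittal = volume[:, :, x]
    return axial, coronal, sagittal

# 64 x 64 x 64 sampling cube as described in the text
cube = np.zeros((64, 64, 64))
a, c, s = orthogonal_planes(cube, 32, 32, 32)
assert a.shape == c.shape == s.shape == (64, 64)
```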
  • S240 corresponds to machine-learning the reconstructed images using a convolutional neural network (CNN).
  • a convolutional neural network is a neural network composed of one or several convolutional layers, with a general artificial neural network layer mounted on top, in which preprocessing is performed within the convolutional layers.
  • FIG. 5A shows a schematic block diagram of this brief convolutional network (BCN).
  • a brief convolutional network (BCN) according to an embodiment of the present invention may consist of two compression blocks, each made up of three convolutional layers and one max-pooling layer, and two decompression blocks, each made up of three convolutional layers and one up-sampling layer.
  • the max-pooling layer and the up-sampling layer may use 2-pixel striding.
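The block structure of FIG. 5A might be sketched as follows in PyTorch. The channel widths, kernel size, and activation are assumptions, since the patent specifies only the block layout (three convolutions per block, pooling/up-sampling with 2-pixel striding).

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Three 'same'-padded 3x3 convolutions (channel widths are assumed).
    layers = []
    for i in range(3):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

class BCN(nn.Module):
    """Brief convolutional network: two compression blocks (3 conv +
    max-pool with 2-pixel striding) and two decompression blocks
    (3 conv + 2x up-sampling)."""
    def __init__(self, in_ch=3, out_ch=1):
        super().__init__()
        self.down1, self.pool1 = conv_block(in_ch, 32), nn.MaxPool2d(2)
        self.down2, self.pool2 = conv_block(32, 64), nn.MaxPool2d(2)
        self.up1, self.dec1 = nn.Upsample(scale_factor=2), conv_block(64, 32)
        self.up2, self.dec2 = nn.Upsample(scale_factor=2), conv_block(32, out_ch)

    def forward(self, x):
        x = self.pool1(self.down1(x))
        x = self.pool2(self.down2(x))
        x = self.dec1(self.up1(x))
        x = self.dec2(self.up2(x))
        return x

# A 3-channel 64x64 input comes back at its original spatial size.
y = BCN()(torch.zeros(1, 3, 64, 64))
assert tuple(y.shape) == (1, 1, 64, 64)
```

The symmetric pool/up-sample pairing is what lets the output mask match the 64 × 64 input size (64 → 32 → 16 → 32 → 64).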
  • FIG. 5B is a schematic block diagram for explaining machine learning logic according to an embodiment of the present invention.
  • machine learning for segmenting the coronary inner wall, outer wall, and plaque regions according to an embodiment of the present invention is performed using a network of stacked brief convolutional networks (BCNs).
  • the convolutional neural network (CNN) used in the image machine learning according to the present invention may be configured by stacking two brief convolutional networks (BCNs) in succession.
  • pre-training is performed using an auto-encoder in the preceding first BCN (the left BCN in FIG. 5B) of the two BCNs.
  • pre-training with the brief convolutional network (BCN) structure is carried out in order to obtain better initial values prior to coronary inner- and outer-wall prediction; it is characterized by using an auto-encoder so that the output computed from the input becomes similar to the input itself.
  • through the auto-encoder process in the first BCN, not only can image noise be reduced, but the boundary between the inner and outer walls of the coronary artery also becomes clearer and the characteristics of calcified plaques are enhanced, which facilitates learning.
  • in this way, a predictive model of the coronary inner and outer walls may be completed.
  • the prediction model is trained individually for the horizontal plane image, the sagittal plane image, and the coronal plane image, and the learning and prediction for the coronary inner wall and the coronary outer wall may be performed separately or collectively.
  • in S250, mask images are generated from each of the machine-learned horizontal, sagittal, and coronal plane images, and a cross-sectional image of the coronary vessel is obtained from them.
  • the input of the first BCN is an image, and the output is also an image.
  • the multiple convolutional layers in the BCN structure learn features of the image during training: meaningless content (e.g. noise) naturally disappears in the learning process, while only meaningful content (e.g. plaque) remains.
  • the output may therefore appear to be the same image, but in reality an image is acquired in which noise is reduced while the plaque remains as it is.
  • the pre-trained BCN and an untrained BCN are then stacked; in the stacked structure the input is an image and the output is a mask, and the stacked layers are trained in this state.
  • during this training, a partial weight update occurs in the pre-trained first BCN, while the weights of all convolutional layers in the second BCN are learned from randomly initialized values. Therefore, when a new image not used for training is input, a mask can be output through the learned weights.
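The two-stage procedure described above (auto-encoder pre-training of the first BCN, then supervised training of the stacked pair) might be sketched as below. The tiny stand-in network, the loss functions (MSE for reconstruction, BCE for the mask), and the optimizer are assumptions; the patent's partial weight update of the first BCN is simplified here to ordinary fine-tuning of the whole stack.

```python
import torch
import torch.nn as nn

def tiny_bcn(in_ch, out_ch):
    # Small stand-in for the BCN of FIG. 5A (real architecture omitted for brevity).
    return nn.Sequential(nn.Conv2d(in_ch, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, out_ch, 3, padding=1))

def pretrain_autoencoder(bcn, images, epochs=2, lr=1e-3):
    """Stage 1: train the first BCN to reproduce its input (auto-encoder),
    which yields better initial weights and suppresses image noise."""
    opt = torch.optim.Adam(bcn.parameters(), lr=lr)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(bcn(images), images)
        opt.zero_grad(); loss.backward(); opt.step()
    return bcn

def train_stacked(bcn1, bcn2, images, masks, epochs=2, lr=1e-3):
    """Stage 2: stack the pre-trained BCN with a randomly initialised one;
    the input is an image and the output a mask, and the stack is trained
    end to end."""
    model = nn.Sequential(bcn1, bcn2)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        loss = nn.functional.binary_cross_entropy_with_logits(model(images), masks)
        opt.zero_grad(); loss.backward(); opt.step()
    return model

imgs = torch.rand(2, 3, 8, 8)                       # toy image batch
msks = torch.randint(0, 2, (2, 1, 8, 8)).float()    # toy ground-truth masks
bcn1 = pretrain_autoencoder(tiny_bcn(3, 3), imgs)
model = train_stacked(bcn1, tiny_bcn(3, 1), imgs, msks)
assert model(imgs).shape == (2, 1, 8, 8)
```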
  • FIG. 6 is a flowchart schematically illustrating the overall flow of a method for analyzing plaque in a computed tomography (CT) image according to an embodiment of the present invention.
  • CT computed tomography
  • as a result of the machine learning, mask images (61, 62, and 63) may be generated for each channel (ch1, ch2, and ch3), and a cross-sectional image (70) of the coronary vessel may be obtained based on the generated mask image (60).
  • the entire flow chart of FIG. 6 illustrates the regionalization of the coronary lumen.
  • probability maps f_a = p_a(y|x), f_c = p_c(y|x), and f_s = p_s(y|x) are obtained from the machine-learned axial (horizontal), coronal, and sagittal plane models, respectively, where p(y|x) denotes the probability of label y for each pixel given the input x.
  • from these maps, an amplified feature f_m that amplifies the probability for each pixel is defined.
  • a feature vector {f_a, f_c, f_s, f_m} for determining the label of each pixel is defined, and label determination for each pixel of the output image uses a gradient boosting technique.
  • the gradient boosting model is trained on the extracted feature vectors and ground-truth labels.
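A sketch of the per-pixel label determination with gradient boosting, using scikit-learn. The toy data, and the use of the product of the three maps as a stand-in for the amplified feature f_m (whose exact definition the text does not give), are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Per-pixel probability maps from the three plane models (toy values).
rng = np.random.default_rng(0)
f_a, f_c, f_s = rng.random((3, 32, 32))
f_m = f_a * f_c * f_s          # stand-in for the amplified feature

# Feature vector {f_a, f_c, f_s, f_m} for every pixel.
X = np.stack([f_a, f_c, f_s, f_m], axis=-1).reshape(-1, 4)
y = (f_a.reshape(-1) > 0.5).astype(int)   # toy ground-truth labels

gb = GradientBoostingClassifier(n_estimators=20, random_state=0).fit(X, y)
labels = gb.predict(X).reshape(32, 32)    # per-pixel label image
assert labels.shape == (32, 32)
```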
  • S260 corresponds to the step of analyzing the plaque in the coronary artery using the acquired cross-sectional image of the coronary inner wall and the acquired cross-sectional image of the coronary outer wall.
  • when the lumen region bounded by the inner wall is removed from the region bounded by the outer wall, the remaining region may correspond to the plaque region.
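The set operation implied above, removing the lumen (inner-wall) region from the outer-wall region, can be written directly on binary masks:

```python
import numpy as np

def plaque_region(inner_mask, outer_mask):
    """Plaque candidate region: pixels inside the coronary outer wall
    but outside the inner wall (lumen)."""
    return np.logical_and(outer_mask, np.logical_not(inner_mask))

outer = np.array([[0, 1, 1, 1, 0]], dtype=bool)
inner = np.array([[0, 0, 1, 0, 0]], dtype=bool)
print(plaque_region(inner, outer).astype(int))  # [[0 1 0 1 0]]
```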
  • the illustrated series of steps S210 to S260 of FIG. 2 may be performed separately or integrally with respect to the coronary inner wall and the coronary outer wall, respectively.
  • FIG. 7 is a schematic block diagram of an image processing apparatus 700 configured to perform a method for analyzing plaque in a computed tomography (CT) image according to an embodiment of the present invention.
  • CT computed tomography
  • an image processing apparatus 700 configured to execute the method for analyzing plaque in a computed tomography (CT) image according to an embodiment of the present invention may include: an image receiver 720 configured to receive a medical image including a CT image; an image processor 730 configured to process the cardiac image received by the image receiver 720; a display 740 configured to display at least one of a coronary inner-wall image, an outer-wall image, and a plaque image output from the image processor 730; and a controller 710 configured to control the image receiver 720, the image processor 730, and the display 740.
  • the elements of the image processing apparatus 700 in the block diagram of FIG. 7 are merely an example for easy understanding of the present disclosure, and elements other than those illustrated in FIG. 7 may also be included in the image processing apparatus 700.
  • the image processing unit 730 may include: an n-channel data generation unit 731 configured to generate n-channel data by adjusting the window width (WW) and window level (WL) of the CT image, where n is a natural number of two or more;
  • an image reconstruction unit 732 configured to orthogonally reconstruct images from each of the n-channel data into an axial (horizontal) plane image, a sagittal plane image, and a coronal plane image;
  • a machine learning unit 733 configured to perform machine learning on the reconstructed images based on a convolutional neural network (CNN);
  • an image acquisition unit 734 configured to generate a mask image from each of the horizontal, sagittal, and coronal plane images, each including at least one of the machine-learned orthogonally reconstructed images, and to acquire a cross-sectional image of the coronary vessel based on the generated mask images;
  • and a plaque analysis unit 735 configured to analyze plaque in the coronary artery.
  • controller 710 may be configured to collectively control the image receiver 720, the image processor 730, and the display 740.
  • the controller 710 may be implemented as a single controller, or may be implemented as a plurality of micro-controllers.
  • the present inventors tested the performance of the machine-learned model using 10 coronary artery data sets; to confirm the effect of the 3-channel data, the results obtained with 1-channel images and with 3-channel images were compared.
  • the experiment was performed by reconstructing the input data into three-channel data and reconstructing the images into horizontal, sagittal, and coronal images.
  • DSC Dice Similarity Coefficient
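The Dice similarity coefficient (DSC), a standard overlap metric for comparing a predicted segmentation against a reference, can be computed as:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient: 2|A n B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

pred = np.array([1, 1, 0, 0])
ref  = np.array([1, 0, 1, 0])
print(dice(pred, ref))  # 0.5
```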
  • deep learning can thus be used to more easily and accurately analyze coronary plaques in computed tomography (CT) images.
  • Computer readable media can be any available media that can be accessed by a computer and includes both volatile and nonvolatile media, removable and non-removable media.
  • computer readable media may include both computer storage media and communication media.
  • Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Communication media typically includes computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave, or other transmission mechanism, and includes any information delivery media.
  • as described above, a deep learning technique is used to automatically generate masks of the inner and outer walls of the coronary artery from a computed tomography image, thereby enabling easier and more accurate analysis of plaque in the coronary artery.

Abstract

Disclosed is a method for analyzing plaque from a medical image. The method comprises the steps of: receiving a medical image including a computed tomography (CT) image; generating n-channel data by adjusting the window width (WW) and the window level (WL) of the CT image, n being a natural number equal to or larger than 2; orthogonally reconstructing images from each of the n-channel data into an axial image, a sagittal image, and a coronal image; machine-learning the reconstructed images on the basis of a convolutional neural network (CNN); and generating a mask image from an image including at least one of the machine-learned, orthogonally reconstructed images and acquiring a sectional image of a coronary artery vessel on the basis of the generated mask images. The above steps are performed in an individual or integrated manner with regard to the coronary artery inner wall and the coronary artery outer wall.

Description

컴퓨터 단층촬영 영상에서 플라크를 분석하기 위한 방법 및 장치Method and apparatus for analyzing plaque in computed tomography images
본 발명은 컴퓨터 단층촬영 영상에서 플라크를 분석하기 위한 방법 및 장치에 관한 발명이다. 보다 구체적으로, 딥러닝(deep learning) 기법을 이용하여 컴퓨터 단층촬영 영상에서 자동으로 관상동맥의 내벽과 외벽의 마스크를 생성하고 그에 따라 관상동맥 내 플라크의 보다 용이하고 정확한 분석을 가능하게 하는 플라크 분석 방법 및 그 영상 처리 장치에 관한 발명이다.The present invention relates to a method and apparatus for analyzing plaque in a computed tomography image. More specifically, plaque analysis using deep learning techniques to automatically generate masks of the inner and outer walls of coronary arteries from computed tomography images, thereby enabling easier and more accurate analysis of plaques in coronary arteries. The present invention relates to a method and an image processing apparatus.
Coronary artery disease can produce coronary plaque in the blood vessels that supply the heart, for example through stenosis (abnormal narrowing of a vessel). Coronary plaque can restrict blood flow to the heart, and patients suffering from coronary artery disease may experience chest pain such as unstable angina at rest or chronic stable angina during vigorous physical exercise.
Patients suffering from such pain or exhibiting symptoms of coronary artery disease may undergo one or more tests that provide indirect evidence of coronary plaque. Non-invasive tests include, for example, electrocardiography, biomarker evaluation from blood tests, treadmill testing, electrocardiogram recording, single-photon emission computed tomography (SPECT), and positron emission tomography (PET). Anatomical data can be obtained non-invasively using coronary computed tomography angiography (CCTA). CCTA can be used to image patients with chest pain and involves the use of computed tomography (CT) to image the heart and coronary arteries following intravenous injection of a contrast agent.
Accordingly, quantitative analysis of atherosclerotic plaque in the coronary arteries is very important in the treatment of coronary artery disease. CCTA has become established as a reliable examination method for the diagnosis of anatomically obstructive coronary artery disease, and analysis software makes it possible to analyze atherosclerotic plaque in CCTA images automatically or semi-automatically.
However, analyzing the coronary arteries with such software requires meticulous correction by a skilled expert, because the software cannot precisely delineate the boundaries of the inner and outer walls of the coronary arteries. Producing clinically useful results therefore requires a great deal of manual work.
Moreover, studies comparing plaque measured by CCTA with actual plaque have found that plaque volume measured by CCTA tends to be overestimated relative to intravascular ultrasound (IVUS), the reference standard.
Accordingly, there is a growing need for a new plaque analysis method and apparatus that can produce results accurate enough for intracoronary plaque analysis to be used in a real clinical environment.
The present invention has been devised to solve the conventional problems described above. Its object is to provide a method, and an image processing apparatus, that automatically generate masks of the inner and outer walls of the coronary arteries using a new CNN architecture for image learning and a new medical image format suited to that architecture, thereby enabling easier and more accurate analysis of intracoronary plaque.
To solve the above technical problem, a method for analyzing plaque in a medical image according to an embodiment of the present invention comprises: receiving a medical image including a computed tomography (CT) image; generating n-channel data by adjusting the window width (WW) and window level (WL) of the CT image, where n is a natural number of two or more; orthogonally reconstructing the image in each of the n-channel data into an axial image, a sagittal image, and a coronal image; machine-learning the reconstructed images based on a convolutional neural network (CNN); and generating mask images from images including at least one of the machine-learned orthogonally reconstructed images and obtaining cross-sectional images of the coronary vessel based on the generated mask images, wherein the above steps are performed individually or jointly for the coronary inner wall and the coronary outer wall.
The method may further comprise analyzing plaque in the coronary artery using the obtained cross-sectional image of the coronary inner wall and the obtained cross-sectional image of the coronary outer wall.
Generating the n-channel data by adjusting the window width (WW) and window level (WL) of the CT image may comprise generating the n-channel data by setting WW1 and WL1 for coronary lumen observation, WW2 and WL2 for calcium analysis, and WW3 and WL3 for lipid plaque, where n is 3.
The convolutional neural network (CNN) may be constructed by stacking two brief convolutional networks (BCNs) in series.
Pre-training may be performed using an auto-encoder in the first (preceding) of the two BCNs.
In addition, to solve the above technical problem, an image processing apparatus according to an embodiment of the present invention comprises: an image receiver configured to receive a computed tomography (CT) image; an image processor configured to process the cardiac image received by the image receiver; a display configured to display at least one of a coronary inner wall image, an outer wall image, and a plaque image output by the image processor; and a controller configured to control the image receiver, the image processor, and the display, wherein the image processor comprises: an n-channel data generation unit configured to generate n-channel data by adjusting the window width (WW) and window level (WL) of the CT image, where n is a natural number of two or more; an image reconstruction unit configured to orthogonally reconstruct the image in each of the n-channel data into an axial image, a sagittal image, and a coronal image; a machine learning unit configured to machine-learn the reconstructed images based on a convolutional neural network (CNN); and an image acquisition unit configured to generate mask images from images including any one of the machine-learned orthogonally reconstructed images and to obtain cross-sectional images of the coronary vessel based on the generated mask images.
The image processor may further comprise a plaque analysis unit configured to analyze plaque in the coronary artery using the obtained cross-sectional image of the coronary inner wall and the obtained cross-sectional image of the coronary outer wall.
The n-channel data generation unit may be configured to generate the n-channel data by setting WW1 and WL1 for coronary lumen observation, WW2 and WL2 for calcium analysis, and WW3 and WL3 for lipid plaque, where n is 3.
The convolutional neural network (CNN) may be constructed by stacking two brief convolutional networks (BCNs) in series.
Pre-training may be performed using an auto-encoder in the first (preceding) of the two BCNs.
According to the method for analyzing plaque in a computed tomography (CT) image and the image processing apparatus of an embodiment of the present invention, deep learning makes it possible to analyze intracoronary plaque in CT images more easily and more accurately.
Furthermore, according to the method and image processing apparatus of an embodiment of the present invention, it is possible to produce results accurate enough for coronary plaque analysis to be used in a real clinical environment.
FIG. 1 is an exemplary diagram illustrating the format of a typical DICOM standard medical image.
FIG. 2 is a flowchart of a method (S200) for analyzing plaque in a computed tomography (CT) image according to an embodiment of the present invention.
FIG. 3 is an exemplary diagram illustrating a process of generating 3-channel data according to an embodiment of the present invention.
FIG. 4 is an exemplary diagram illustrating a process of reconstructing the 3-channel data generated in FIG. 3 into axial, coronal, and sagittal images according to an embodiment of the present invention.
FIG. 5A is a schematic block diagram of a brief convolutional network (BCN) according to an embodiment of the present invention, and FIG. 5B is a schematic block diagram illustrating machine learning logic according to an embodiment of the present invention.
FIG. 6 is a flowchart schematically illustrating the overall flow of a method for analyzing plaque in a computed tomography (CT) image according to an embodiment of the present invention.
FIG. 7 is a schematic block diagram of an image processing apparatus (700) configured to perform a method for analyzing plaque in a computed tomography (CT) image according to an embodiment of the present invention.
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. Note that, wherever possible, identical components in the drawings are denoted by identical reference numerals. Specific details appear in the following description; these are provided to aid a more thorough understanding of the present invention. In describing the present invention, detailed descriptions of well-known functions or configurations are omitted where they would unnecessarily obscure the subject matter of the invention.
The terms used in this specification are, as far as possible, general terms currently in wide use, selected in view of their function in the present invention; however, they may vary depending on the intent of those skilled in the art to which the invention pertains, precedent, or the emergence of new technology. In certain cases, terms have been arbitrarily chosen by the applicant, and in such cases their meaning is described in detail in the corresponding part of the description. Accordingly, the terms used herein should be defined based on their meaning and the content of this specification as a whole, not simply on their names.
Throughout this specification, when a part is said to "include" a component, this means that it may further include other components, not that it excludes other components, unless specifically stated otherwise. In addition, terms such as "...unit" and "module" used in the specification denote a unit that processes at least one function or operation, which may be implemented in hardware, in software, or in a combination of hardware and software.
FIG. 1 is an exemplary diagram illustrating the format of a typical DICOM standard medical image.
The Digital Imaging and Communications in Medicine (DICOM) standard refers to the family of standards used for digital image representation and communication in medical devices; it is published by a joint committee formed by the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA).
FIG. 1 schematically illustrates the format of such a DICOM standard medical image. As shown, a DICOM image pixel occupies 2 bytes (16 bits), and for image viewing the user directly adjusts the window width (WW) and window level (WL).
A CT image can fundamentally represent many shades of gray using the CT number, the relative linear attenuation coefficient of each pixel calibrated against water, bone, and air. Here, the window width (WW) is the range of CT numbers that can be expressed on the gray scale, and the window level (WL) is the center value of that gray scale.
For example, if the window width (WW) of an abdominal image is set to +300 and the window level (WL) to 0, the range of Hounsfield units (HU) shown in the image is -150 to +150. Material with attenuation below -150 therefore appears black, material above +150 appears white, and material with HU between -149 and +149 is displayed at a gray level between black and white.
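For illustration only (not part of the claimed invention), the WW/WL arithmetic described above can be sketched in a few lines; the function names are hypothetical:

```python
def window_range(ww, wl):
    # The displayed HU range spans the window width, centered on the window level.
    lo = wl - ww / 2
    hi = wl + ww / 2
    return lo, hi

def to_gray(hu, ww, wl):
    # Map an HU value to an 8-bit gray level; values outside the window saturate
    # to pure black (0) or pure white (255).
    lo, hi = window_range(ww, wl)
    hu = max(lo, min(hi, hu))
    return round((hu - lo) / (hi - lo) * 255)

# Abdominal example from the text: WW = +300, WL = 0 -> displayed range [-150, +150].
lo, hi = window_range(300, 0)
```

With these settings, an HU value of -500 saturates to black and +500 to white, matching the behavior described above.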
However, when such a generic DICOM-format image is used to analyze the coronary arteries, and in particular the inner wall, outer wall, and plaque of the coronary arteries, meticulous correction by a skilled expert is usually required, and because of the performance limits of the analysis algorithms the software cannot precisely delineate the inner and outer walls, so a great deal of inconvenient manual work is unavoidable.
FIG. 2 is a flowchart of a method (S200) for analyzing plaque in a computed tomography (CT) image according to an embodiment of the present invention.
For reference, the following specification describes coronary computed tomography angiography (CCTA) images by way of example to aid understanding of the present invention; this is merely one example, and the invention is not limited to CCTA images but can be extended to other medical images such as MRI and X-ray images.
The method (S200) for analyzing plaque in a computed tomography (CT) image according to an embodiment of the present invention may comprise: receiving a medical image including a computed tomography (CT) image (S210); generating n-channel data by adjusting the window width (WW) and window level (WL) of the CT image (S220), where n is a natural number of two or more; orthogonally reconstructing the image in each of the n-channel data into an axial image, a sagittal image, and a coronal image (S230); machine-learning the reconstructed images based on a convolutional neural network (CNN) (S240); generating mask images from images including any one of the machine-learned orthogonally reconstructed images and obtaining cross-sectional images of the coronary vessel based on the generated mask images (S250); and analyzing plaque in the coronary artery using the obtained cross-sectional image of the coronary inner wall and the obtained cross-sectional image of the coronary outer wall (S260).
For reference, each of steps S210 to S260 shown in FIG. 2 is an example provided for ease of understanding, and it will be apparent that additional steps not shown in FIG. 2 may also be performed.
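For illustration only, a toy sketch of how the inner-wall and outer-wall cross-sections might combine in a plaque-analysis step such as S260 (the set arithmetic below is an assumption for illustration, as the specification leaves the analysis details open): the vessel-wall region, where plaque is measured, is the area inside the outer wall but outside the lumen.

```python
import numpy as np

def wall_region(inner_mask, outer_mask):
    # Pixels inside the outer wall but outside the inner wall (lumen)
    # belong to the vessel wall, where plaque is measured.
    return np.logical_and(outer_mask, np.logical_not(inner_mask))

# Toy cross-section: concentric squares standing in for lumen and vessel.
outer = np.zeros((7, 7), dtype=bool); outer[1:6, 1:6] = True   # 25 px vessel
inner = np.zeros((7, 7), dtype=bool); inner[2:5, 2:5] = True   # 9 px lumen
wall = wall_region(inner, outer)                               # 16 px wall
```

Summing such wall masks over consecutive cross-sections would yield a wall (plaque-bearing) volume.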
S210 corresponds to receiving a CT image, for example a coronary computed tomography angiography (CCTA) image. Computed tomography (CT) is a medical imaging technique that uses computer-processed tomographic reconstruction, and the CCTA image may be any coronary CT image that can be acquired by any CT equipment, device, or instrument.
S220 corresponds to generating n-channel data by adjusting the window width (WW) and window level (WL) of the received CT image. Here, n is a natural number of two or more; the following description uses 3-channel data as an example for ease of understanding, but it will be apparent that other numbers of channels may additionally or alternatively be implemented.
An object of the present invention is to provide a method and apparatus capable of analyzing coronary plaque more easily and accurately than conventional approaches. Implementing such a method/apparatus involves machine learning on images (e.g., with a CNN), and the invention proposes 3-channel data in a new medical image format optimized for such machine learning, in clear contrast to the conventional DICOM medical image with a single 2-byte channel.
More specifically, 3-channel data according to an embodiment of the present invention consists of three versions of the CT image with individually adjusted window width (WW) and window level (WL): data for coronary lumen observation, data for calcium analysis, and data for lipid plaque.
For example, the image for coronary lumen observation may be set to window width WW1 and window level WL1, the image for calcium analysis to window width WW2 and window level WL2, and the image for lipid plaque to window width WW3 and window level WL3. For instance, the window width/window level can be set to 740/220 for coronary lumen observation, 1500/550 for calcium analysis, and 500/100 for lipid plaque. These specific values correspond to experimentally derived optima in the art for coronary lumen observation, calcium analysis, and lipid plaque, respectively, but it will be apparent that values other than those illustrated may additionally or alternatively be used.
For reference, the window width/window level of 740/220 for the first (coronary lumen) channel corresponds to the clinically meaningful value reported in: Eur Radiol. 2016 Sep;26(9):3190-8. doi: 10.1007/s00330-015-4121-5. Epub 2015 Dec 2. "Optimal boundary detection method and window settings for coronary atherosclerotic plaque volume analysis in coronary computed tomography angiography: comparison with intravascular ultrasound."
FIG. 3 is an exemplary diagram illustrating a process of generating 3-channel data according to an embodiment of the present invention. For reference, FIG. 3 shows an image of the coronary lumen by way of example.
As shown in FIG. 3, according to an embodiment of the present invention, three channels of data (ch1, ch2, and ch3) can be generated from the same coronary lumen image by adjusting the window width/window level to WW1/WL1, WW2/WL2, and WW3/WL3, respectively. According to a further embodiment of the invention, the 3-channel data (ch1, ch2, and ch3) thus generated can be rescaled to the range [0, 255], for which the different clipping windows [-150 HU, 590 HU], [-200 HU, 1300 HU], and [-150 HU, 350 HU] can be used.
The reason for constructing 3-channel data in this way is to emulate the technique used in conventional CT image analysis: in practice, the window width/window level is set to WW1/WL1, WW2/WL2, and WW3/WL3 as presented above when analyzing intracoronary plaque. An important point of the present invention is therefore that it reproduces the actual working environment and lets the deep learning model learn from it.
In other words, the purpose of the new 3-channel data according to the present invention is to emulate the workflow in which an expert analyzing plaque by hand adjusts the width and level of the data display window to suit the tissue characteristics of interest.
After the 3-channel data has been generated in S220 through the process described above, S230 corresponds to orthogonally reconstructing the image in each of the generated 3-channel data into an axial image, a coronal image, and a sagittal image.
For reference, because calcified plaque occupies a three-dimensional position within the coronary artery, learning from axial images alone is insufficient to capture the overall characteristics of the plaque. The present invention therefore additionally generates coronal and sagittal images by rotating the axial image and reconstructing it orthogonally, and uses these, together with the axial image, for image machine learning.
FIG. 4 is an exemplary diagram illustrating a process of reconstructing the 3-channel data generated in FIG. 3 into axial, coronal, and sagittal images according to an embodiment of the present invention. As shown in FIG. 4, each of the three channels of data (ch1, ch2, and ch3) is reconstructed into an axial image, a coronal image, and a sagittal image.
Here, the size of the axial image is fixed at 64 × 64, but because vessel length varies from person to person and with the severity of disease, a cube of size 64 × 64 × 64 can be defined as the sampling unit, and any remainder left after sampling can be excluded from learning.
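For illustration only, the orthogonal reconstruction of a sampling cube can be sketched as follows, assuming the 64 × 64 × 64 cube is a NumPy volume indexed (z, y, x); this axis convention is an assumption for illustration, not specified in the text:

```python
import numpy as np

def orthogonal_stacks(volume):
    # volume: a (z, y, x) sampling cube. Re-slicing the same cube along each
    # axis yields the three orthogonal stacks used for learning.
    axial = volume                               # slices perpendicular to z
    coronal = np.transpose(volume, (1, 0, 2))    # slices perpendicular to y
    sagittal = np.transpose(volume, (2, 0, 1))   # slices perpendicular to x
    return axial, coronal, sagittal

cube = np.arange(64 ** 3, dtype=np.int64).reshape(64, 64, 64)
axial, coronal, sagittal = orthogonal_stacks(cube)
```

No data is created or discarded by this step; the same voxels are simply presented to the network from three perpendicular viewing directions.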
For reference, FIG. 4 illustrates an "axial image", a "coronal image", and a "sagittal image" by way of example for ease of understanding of the various embodiments; it will be apparent to those skilled in the art that other types of vessel images may additionally or alternatively be used.
In this way, 3-channel data is generated by adjusting the window width (WW) and window level (WL) of the CT image (S220) and each of the generated channels is reconstructed into axial, sagittal, and coronal images (S230); S240 then corresponds to machine-learning the reconstructed images using a convolutional neural network (CNN).
For reference, a convolutional neural network (CNN) is a neural network consisting of one or more convolutional layers topped by conventional artificial neural network layers, in which preprocessing is performed in the convolutional layers; depending on the application it is also known by several other names in the literature.
A generic CNN structure, however, is not suitable for fine-grained, precise segmentation tasks such as the coronary arteries. Therefore, unlike the ordinary CNN structures used for segmentation of conventional 3-channel color images, the present invention uses the structure of a so-called "brief convolutional network (BCN)", composed of compression layers and decompression layers.
FIG. 5A shows a schematic block diagram of such a brief convolutional network (BCN). As shown in FIG. 5A, a BCN according to an embodiment of the present invention may consist of two compression blocks, each composed of three convolutional layers and one max-pooling layer, and two decompression blocks, each composed of three convolutional layers and one up-sampling layer. Here, the max-pooling and up-sampling layers may use a 2-pixel stride.
For reference, an L2 regularization penalty (weight) of 0.0005 was applied to all convolutional layers, which were activated using the Rectified Linear Unit (ReLU). In addition, batch normalization was applied before each activation to keep the gradients from weakening or failing to converge, and to prevent overfitting of the neural network a dropout layer with a rate of 0.5 was used at the end of each convolutional block.
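For illustration only, the spatial bookkeeping of this encoder–decoder layout can be checked in a few lines (an illustrative calculation assuming same-padded convolutions, which is an assumption not stated in the text): two stride-2 max-pools followed by two 2× up-samplings return the feature maps to the input resolution, which is what allows per-pixel mask labeling.

```python
def bcn_spatial_size(n):
    # Compression path: 2 blocks of (3 same-padded convs + stride-2 max-pool).
    # Same-padded convolutions preserve spatial size; each max-pool halves it.
    for _ in range(2):
        n //= 2
    # Decompression path: 2 blocks of (3 same-padded convs + 2x up-sampling).
    # Each up-sampling doubles the spatial size again.
    for _ in range(2):
        n *= 2
    return n
```

For a 64 × 64 input slice, the output mask is again 64 × 64, matching the fixed axial image size used above.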
Therefore, assuming an input image of size n × m denoted X and a corresponding output image of the same size denoted Y, the proposed BCN makes it possible to label the output image using the probability distribution p(y|x) for x ∈ X and y ∈ Y.
FIG. 5B is a schematic block diagram for explaining machine-learning logic according to an embodiment of the present invention. As shown in FIG. 5B, machine learning for coronary inner-wall/outer-wall/plaque segmentation according to an embodiment of the present invention is characterized in that it is performed using a network in which BCNs are stacked in series.
That is, the convolutional neural network (CNN) used for image machine learning according to the present invention may be constructed by stacking two BCNs in series. Here, in the preceding first BCN of the two BCNs (the left BCN in FIG. 5B), pre-training is performed using an auto-encoder.
More specifically, pre-training is carried out with the BCN structure in order to obtain better initial learning values prior to predicting the coronary inner and outer walls, and the pre-training is characterized by using an auto-encoder, which drives the output computed from an input to resemble the input itself.
Therefore, the auto-encoder stage of the first BCN not only reduces image noise but also sharpens the boundary between the inner and outer walls of the coronary artery and enhances the features of calcified plaque, thereby facilitating learning.
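A toy numerical sketch of this pre-training idea — a single linear encoder/decoder pair trained by plain gradient descent so that the output reproduces the input. The patent's BCN is convolutional; the matrices and sizes below are illustrative assumptions only:

```python
import numpy as np

# Toy auto-encoder pre-training (illustrative only): the network is
# trained to reproduce its input, and the learned encoder weights would
# then serve as initial values for the first BCN of the stacked pair.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))               # stand-in for image patches

W_enc = rng.normal(scale=0.1, size=(16, 8))  # "compression" weights
W_dec = rng.normal(scale=0.1, size=(8, 16))  # "decompression" weights

def reconstruct(X, W_enc, W_dec):
    return X @ W_enc @ W_dec

err0 = np.mean((reconstruct(X, W_enc, W_dec) - X) ** 2)

lr = 0.01
for _ in range(500):                         # plain gradient descent
    H = X @ W_enc
    G = 2.0 * (H @ W_dec - X) / X.shape[0]   # gradient of reconstruction error
    g_dec = H.T @ G
    g_enc = X.T @ (G @ W_dec.T)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

err1 = np.mean((reconstruct(X, W_enc, W_dec) - X) ** 2)
print(err1 < err0)   # reconstruction error decreased during pre-training
```

After such pre-training, the encoder weights would replace a random initialization for the supervised stage described next.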
As shown in FIG. 5B, when image learning proceeds through the second BCN following the first BCN used for pre-training, prediction models for the coronary inner and outer walls can be completed. For reference, a prediction model is trained separately for each of the axial, sagittal and coronal images, and the learning and prediction for the coronary inner wall and the coronary outer wall may also be performed individually or in an integrated manner.
When machine learning based on the dual BCN has thus been performed on the reconstructed axial/sagittal/coronal images (S240), step S250 generates a mask image from each of the machine-learned axial, sagittal and coronal images and obtains a cross-sectional image of the coronary vessel based on the generated mask images.
In other words, since both the input and the output of the first BCN are images, the network learns to reproduce itself, and the plurality of convolutional layers inside the BCN structure learn the features of the images during training. That is, meaningless content (e.g., noise) naturally disappears in the course of learning, and only meaningful content (e.g., plaque) remains.
When an image is fed into layers trained in this self-encoding manner, the output may appear to be the same image, but in fact it is possible to obtain an image in which noise is reduced while the plaque is preserved.
The pre-trained BCN and an untrained BCN are then stacked; viewed as a stack, the input is an image and the output is a mask, and the stacked layers are trained in this state. When training is completed through this process, a partial weight update also occurs in the pre-trained first BCN, while the weights of all convolutional layers in the second BCN are updated from random weights to learned weights. Therefore, when a new image not used in training is input, a mask can be output after passing through all the learned weights.
FIG. 6 is a flowchart schematically showing the overall flow of a method for analyzing plaque in a computed tomography (CT) image according to an embodiment of the present invention.
When machine learning based on the dual BCN 50 has been performed on the reconstructed axial/sagittal/coronal images, mask images 61, 62 and 63 can be generated for the respective channels (ch1, ch2 and ch3) as a result of the machine learning, and a cross-sectional image 70 of the coronary vessel can be obtained based on the mask images 60 thus generated. For reference, the overall flowchart of FIG. 6 illustrates segmentation of the coronary lumen as an example.
Here, the probability of each pixel of the output image given each pixel of the input image can be obtained through the trained BCN structure, so that the probability maps f_a = p_a(y|x), f_c = p_c(y|x) and f_s = p_s(y|x) can be obtained for the axial, coronal and sagittal images. Furthermore, in order to obtain a more robust prediction result, an amplified feature that amplifies the probability of each pixel is defined as follows.
f_m := exp(p_a(y|x)) + exp(p_c(y|x)) + exp(p_s(y|x))
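Interpreted per pixel, the amplified feature simply exponentiates and sums the three directional probability maps. A small NumPy sketch with randomly generated stand-in maps (the 4×4 sizes are illustrative, not from the patent):

```python
import numpy as np

# Per-pixel amplified feature f_m = exp(f_a) + exp(f_c) + exp(f_s),
# computed on toy probability maps standing in for the axial, coronal
# and sagittal model outputs.
rng = np.random.default_rng(1)
f_a = rng.uniform(size=(4, 4))   # p_a(y|x), axial probability map
f_c = rng.uniform(size=(4, 4))   # p_c(y|x), coronal
f_s = rng.uniform(size=(4, 4))   # p_s(y|x), sagittal

f_m = np.exp(f_a) + np.exp(f_c) + np.exp(f_s)

# Per-pixel feature vector {f_a, f_c, f_s, f_m}, stacked on a new axis
features = np.stack([f_a, f_c, f_s, f_m], axis=-1)
print(features.shape)   # (4, 4, 4)
```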
In this way, a feature vector {f_a, f_c, f_s, f_m} for determining the label of each pixel is defined; a gradient-boosting technique was used to determine the label of each pixel of the output image, and the gradient-boosting model is trained on the extracted feature vectors and the ground-truth labels.
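The per-pixel label decision can be pictured with a minimal hand-rolled gradient booster over decision stumps, fitted to residuals of the squared loss. The patent does not specify the booster's implementation, so everything below — the stump learner, the learning rate, and the synthetic 1-D data standing in for the per-pixel feature vectors — is illustrative:

```python
import numpy as np

# Minimal gradient boosting with decision stumps (illustrative sketch).

def fit_stump(x, r):
    # pick the threshold whose two-constant fit best matches residuals r
    best = None
    for t in np.unique(x):
        left, right = r[x <= t], r[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        pred = np.where(x <= t, left.mean(), right.mean())
        sse = np.sum((r - pred) ** 2)
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    return best[1], best[2], best[3]

def boost(x, y, n_rounds=20, lr=0.3):
    pred = np.zeros_like(y, dtype=float)
    for _ in range(n_rounds):
        r = y - pred                  # residual = negative gradient of squared loss
        t, lv, rv = fit_stump(x, r)
        pred += lr * np.where(x <= t, lv, rv)
    return pred

x = np.array([0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9])   # toy feature
y = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)       # toy pixel labels
pred = boost(x, y)
labels = (pred > 0.5).astype(int)
print(labels)   # recovers the labels: [0 0 0 0 1 1 1 1]
```

In the method above, x would be replaced by the four-dimensional feature vectors and y by the ground-truth mask labels.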
When the mask image 60 has been generated as a result of the machine learning and the cross-sectional image 70 of the vessel has been obtained on that basis (S250), step S260 analyzes the plaque in the coronary artery using the obtained cross-sectional image of the coronary inner wall and the obtained cross-sectional image of the coronary outer wall. In general, the region that remains after excluding the coronary inner-wall region from the coronary outer-wall region may correspond to the plaque region.
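The stated rule — the plaque region is what remains of the outer-wall region after the inner-wall (lumen) region is excluded — is a simple mask subtraction. A toy sketch on 5×5 masks (not real segmentations):

```python
import numpy as np

# Plaque mask = outer-wall mask minus inner-wall (lumen) mask.
outer = np.zeros((5, 5), dtype=bool)
outer[1:4, 1:4] = True           # toy vessel outer-wall cross-section
inner = np.zeros((5, 5), dtype=bool)
inner[2, 2] = True               # toy lumen

plaque = outer & ~inner          # exclude the lumen from the outer region
print(plaque.sum())              # 9 outer pixels - 1 lumen pixel = 8
```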
For reference, the series of steps S210 to S260 shown in FIG. 2 may be performed individually or in an integrated manner for the coronary inner wall and the coronary outer wall, respectively.
FIG. 7 is a schematic block diagram of an image processing apparatus 700 configured to perform a method for analyzing plaque in a computed tomography (CT) image according to an embodiment of the present invention.
As shown in FIG. 7, the image processing apparatus 700 configured to execute a method for analyzing plaque in a computed tomography (CT) image according to an embodiment of the present invention may include: an image receiver 720 configured to receive a medical image including a computed tomography (CT) image; an image processor 730 configured to process the cardiac image received by the image receiver 720; a display 740 configured to display at least one of the coronary inner-wall image, the outer-wall image and the plaque image output from the image processor 730; and a controller 710 configured to control the image receiver 720, the image processor 730 and the display 740.
For reference, the elements of the image processing apparatus 700 in the block diagram of FIG. 7 are merely an example for easy understanding of the present invention, and it will be apparent that elements other than those shown in FIG. 7 may additionally be included in the image processing apparatus 700.
Here, the image processor 730 may include: an n-channel data generation unit 731 configured to generate n-channel data by adjusting the window width (WW) and window level (WL) of the CT image, where n is a natural number of 2 or more; an image reconstruction unit 732 configured to orthogonally reconstruct the image from each of the n-channel data into an axial image, a sagittal image and a coronal image; a machine learning unit 733 configured to perform machine learning on the reconstructed images based on a convolutional neural network (CNN); an image acquisition unit 734 configured to generate mask images from an image including at least one of the machine-learned orthogonally reconstructed axial, sagittal and coronal images, and to obtain a cross-sectional image of the coronary vessel based on the generated mask images; and a plaque analysis unit 735 configured to analyze the plaque in the coronary artery using the obtained cross-sectional image of the coronary inner wall and the obtained cross-sectional image of the coronary outer wall.
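The n-channel generation step can be pictured as standard CT windowing: each (WW, WL) pair linearly maps Hounsfield units into [0, 1], and n pairs yield n channels. The window values below are assumptions for illustration only, not values disclosed in the patent:

```python
import numpy as np

# CT windowing sketch: one (window width, window level) pair per channel.

def apply_window(hu, ww, wl):
    lo, hi = wl - ww / 2.0, wl + ww / 2.0
    return np.clip((hu - lo) / (hi - lo), 0.0, 1.0)

hu = np.array([-100.0, 0.0, 150.0, 400.0, 900.0])   # sample HU values
channels = np.stack([
    apply_window(hu, 740, 220),    # hypothetical lumen-oriented window
    apply_window(hu, 1000, 400),   # hypothetical calcium-oriented window
    apply_window(hu, 300, 50),     # hypothetical lipid-plaque window
])
print(channels.shape)   # (3, 5): three channels from one HU profile
```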
Since the specific functions and operations of the respective units 731 to 735 of the image processor 730 have already been described above, their description is omitted in this paragraph.
In addition, the controller 710 may be configured to control the image receiver 720, the image processor 730 and the display 740 as a whole. For example, the controller 710 may be implemented as a single controller or as a plurality of micro-controllers.
Experimental Results and Analysis
For reference, the present inventors tested the performance of the machine-learned model on coronary data from ten subjects and, in order to verify the effect of the 3-channel data, compared the results obtained with 1-channel images against those obtained with 3-channel images. In addition, as in the model-training stage, the experiment was performed by reconstructing the input data into 3-channel data and then reconstructing it again into axial/sagittal/coronal images. To evaluate model performance, volumetric agreement with the ground truth was compared using the Dice similarity coefficient (DSC), and the results are shown in the table below.
Figure PCTKR2017005764-appb-I000001
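The DSC metric used in the table above can be sketched as follows, on toy binary masks rather than the study's data: DSC = 2|A ∩ B| / (|A| + |B|).

```python
import numpy as np

# Dice similarity coefficient between two binary segmentation masks.

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0                       # both empty: perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

pred = np.array([[1, 1, 0], [0, 1, 0]])   # toy predicted mask
truth = np.array([[1, 1, 0], [0, 0, 1]])  # toy ground-truth mask
print(dice(pred, truth))   # 2*2 / (3 + 3) ≈ 0.667
```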
To verify the clinical validity of the proposed model, the plaque positions were designated in the two modality images in order to register the plaque volume obtained from the BCN model with the plaque in IVUS, the clinical ground truth, and the plaque region in the CT image. The table below shows the result of comparing the plaque volume measured by the BCN with the plaque volume measured by IVUS, the clinical ground truth.
Figure PCTKR2017005764-appb-I000002
As can be seen from the foregoing descriptions in connection with the present invention, the method for analyzing plaque in a computed tomography (CT) image and the image processing apparatus according to an embodiment of the present invention make it possible to analyze intracoronary plaque in CT images more easily and more accurately by using deep learning.
Furthermore, with the method for analyzing plaque in a computed tomography (CT) image and the image processing apparatus according to an embodiment of the present invention, it is possible, when analyzing coronary plaque, to derive results accurate enough to be used in an actual clinical environment.
The above-described embodiments of the present invention can be written as a program executable on a computer and can be implemented on a general-purpose digital computer that runs the program using a computer-readable recording medium.
Computer-readable media can be any available media that can be accessed by a computer and include both volatile and non-volatile media and removable and non-removable media. Computer-readable media may also include both computer storage media and communication media. Computer storage media include volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules or other data. Communication media typically embody computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media.
The foregoing description of the present invention is for illustration, and those of ordinary skill in the art to which the present invention pertains will understand that it can easily be modified into other specific forms without changing the technical idea or essential features of the present invention. The embodiments described above should therefore be understood as illustrative in all respects and not restrictive. For example, each component described as a single unit may be implemented in a distributed manner, and likewise components described as distributed may be implemented in a combined form.
The scope of the present invention is indicated by the claims set forth below rather than by the detailed description above, and all changes or modifications derived from the meaning and scope of the claims and their equivalents should be construed as falling within the scope of the present invention.
The present invention automatically generates masks of the inner and outer walls of a coronary artery from computed tomography images using a deep-learning technique, thereby enabling easier and more accurate analysis of intracoronary plaque, and is therefore widely applicable in the medical field.

Claims (10)

  1. A method for analyzing plaque in a medical image, the method comprising:
    receiving a medical image including a computed tomography (CT) image;
    generating n-channel data by adjusting a window width (WW) and a window level (WL) of the CT image, where n is a natural number of 2 or more;
    orthogonally reconstructing the image from each of the n-channel data into an axial image, a sagittal image and a coronal image;
    performing machine learning on the reconstructed images based on a convolutional neural network (CNN); and
    generating a mask image from an image including at least one of the machine-learned orthogonally reconstructed images, and obtaining a cross-sectional image of a coronary vessel based on the generated mask images,
    wherein the steps are performed individually or in an integrated manner for a coronary inner wall and a coronary outer wall, respectively.
  2. The method of claim 1, further comprising:
    analyzing the plaque in the coronary artery using the obtained cross-sectional image of the coronary inner wall and the obtained cross-sectional image of the coronary outer wall.
  3. The method of claim 1, wherein generating the n-channel data by adjusting the window width (WW) and the window level (WL) of the CT image comprises:
    generating the n-channel data by setting WW1 and WL1 for observing the coronary lumen, WW2 and WL2 for calcium analysis, and WW3 and WL3 for lipid plaque, where n is 3.
  4. The method of claim 1, wherein the convolutional neural network (CNN) is constructed by stacking two brief convolutional networks (BCNs) in series.
  5. The method of claim 4, wherein pre-training is performed using an auto-encoder in the preceding first BCN of the two BCNs.
  6. An image processing apparatus configured to perform the method according to any one of claims 1 to 5, comprising:
    an image receiver configured to receive a computed tomography (CT) image;
    an image processor configured to process the cardiac image received by the image receiver;
    a display configured to display at least one of a coronary inner-wall image, an outer-wall image and a plaque image output from the image processor; and
    a controller configured to control the image receiver, the image processor and the display,
    wherein the image processor comprises:
    an n-channel data generation unit configured to generate n-channel data by adjusting a window width (WW) and a window level (WL) of the CT image, where n is a natural number of 2 or more;
    an image reconstruction unit configured to orthogonally reconstruct the image from each of the n-channel data into an axial image, a sagittal image and a coronal image;
    a machine learning unit configured to perform machine learning on the reconstructed images based on a convolutional neural network (CNN); and
    an image acquisition unit configured to generate mask images from an image including at least one of the machine-learned orthogonally reconstructed images, and to obtain a cross-sectional image of a coronary vessel based on the generated mask images.
  7. The image processing apparatus of claim 6, wherein the image processor further comprises a plaque analysis unit configured to analyze the plaque in the coronary artery using the obtained cross-sectional image of the coronary inner wall and the obtained cross-sectional image of the coronary outer wall.
  8. The image processing apparatus of claim 6, wherein the n-channel data generation unit is configured to generate the n-channel data by setting WW1 and WL1 for observing the coronary lumen, WW2 and WL2 for calcium analysis, and WW3 and WL3 for lipid plaque, where n is 3.
  9. The image processing apparatus of claim 6, wherein the convolutional neural network (CNN) is constructed by stacking two brief convolutional networks (BCNs) in series.
  10. The image processing apparatus of claim 9, wherein pre-training is performed using an auto-encoder in the preceding first BCN of the two BCNs.
PCT/KR2017/005764 2017-02-22 2017-06-02 Method and device analyzing plaque from computed tomography image WO2018155765A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020170023605A KR101902883B1 (en) 2017-02-22 2017-02-22 A method for analyzing plaque in a computed tomography image and an apparatus thereof
KR10-2017-0023605 2017-02-22

Publications (1)

Publication Number Publication Date
WO2018155765A1 true WO2018155765A1 (en) 2018-08-30

Family

ID=63254360

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/005764 WO2018155765A1 (en) 2017-02-22 2017-06-02 Method and device analyzing plaque from computed tomography image

Country Status (2)

Country Link
KR (1) KR101902883B1 (en)
WO (1) WO2018155765A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109813276A (en) * 2018-12-19 2019-05-28 五邑大学 A kind of antenna for base station has a down dip angle measuring method and its system
CN109859201A (en) * 2019-02-15 2019-06-07 数坤(北京)网络科技有限公司 A kind of noncalcified plaques method for detecting and its equipment
CN112700445A (en) * 2021-03-23 2021-04-23 上海市东方医院(同济大学附属东方医院) Image processing method, device and system
WO2022247168A1 (en) * 2021-05-24 2022-12-01 山东省人工智能研究院 Positional convolutional attention network-based vascular plaque ct image segmentation method

Families Citing this family (6)

Publication number Priority date Publication date Assignee Title
KR102250164B1 (en) 2018-09-05 2021-05-10 에이아이메딕(주) Method and system for automatic segmentation of vessels in medical images using machine learning and image processing algorithm
CN109447969B (en) * 2018-10-29 2021-08-10 北京青燕祥云科技有限公司 Liver occupation lesion identification method and device and implementation device
KR102219378B1 (en) * 2018-10-31 2021-02-24 주식회사 휴이노 Method, system and non-transitory computer-readable recording medium for recognizing arrhythmia by using artificial neural network
US10387752B1 (en) * 2019-01-22 2019-08-20 StradVision, Inc. Learning method and learning device for object detector with hardware optimization based on CNN for detection at distance or military purpose using image concatenation, and testing method and testing device using the same
KR102206621B1 (en) * 2019-03-11 2021-01-22 가천대학교 산학협력단 Programs and applications for sarcopenia analysis using deep learning algorithms
CN111583260A (en) * 2020-06-10 2020-08-25 中国医学科学院阜外医院 Plaque vulnerability prediction method, device, equipment and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
US20150112182A1 (en) * 2013-10-17 2015-04-23 Siemens Aktiengesellschaft Method and System for Machine Learning Based Assessment of Fractional Flow Reserve
KR101529211B1 (en) * 2014-02-11 2015-06-16 연세대학교 산학협력단 Apparatus and method for analysing plaque change
JP5926728B2 (en) * 2010-07-26 2016-05-25 ケイジャヤ、エルエルシー Visualization adapted for direct use by physicians
JP5980486B2 (en) * 2010-09-27 2016-08-31 ゼネラル・エレクトリック・カンパニイ System and method for visualization and quantification of vascular stenosis using spectral CT analysis
US20170046616A1 (en) * 2015-08-15 2017-02-16 Salesforce.Com, Inc. Three-dimensional (3d) convolution with 3d batch normalization

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9155512B2 (en) 2013-12-18 2015-10-13 Heartflow, Inc. Systems and methods for predicting coronary plaque vulnerability from patient-specific anatomic image data

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
JP5926728B2 (en) * 2010-07-26 2016-05-25 ケイジャヤ、エルエルシー Visualization adapted for direct use by physicians
JP5980486B2 (en) * 2010-09-27 2016-08-31 ゼネラル・エレクトリック・カンパニイ System and method for visualization and quantification of vascular stenosis using spectral CT analysis
US20150112182A1 (en) * 2013-10-17 2015-04-23 Siemens Aktiengesellschaft Method and System for Machine Learning Based Assessment of Fractional Flow Reserve
KR101529211B1 (en) * 2014-02-11 2015-06-16 연세대학교 산학협력단 Apparatus and method for analysing plaque change
US20170046616A1 (en) * 2015-08-15 2017-02-16 Salesforce.Com, Inc. Three-dimensional (3d) convolution with 3d batch normalization

Cited By (5)

Publication number Priority date Publication date Assignee Title
CN109813276A (en) * 2018-12-19 2019-05-28 五邑大学 A kind of antenna for base station has a down dip angle measuring method and its system
CN109859201A (en) * 2019-02-15 2019-06-07 数坤(北京)网络科技有限公司 A kind of noncalcified plaques method for detecting and its equipment
CN112700445A (en) * 2021-03-23 2021-04-23 上海市东方医院(同济大学附属东方医院) Image processing method, device and system
CN112700445B (en) * 2021-03-23 2021-06-29 上海市东方医院(同济大学附属东方医院) Image processing method, device and system
WO2022247168A1 (en) * 2021-05-24 2022-12-01 山东省人工智能研究院 Positional convolutional attention network-based vascular plaque ct image segmentation method

Also Published As

Publication number Publication date
KR20180097035A (en) 2018-08-30
KR101902883B1 (en) 2018-10-01

Similar Documents

Publication Publication Date Title
WO2018155765A1 (en) Method and device analyzing plaque from computed tomography image
US20230104045A1 (en) System and method for ultrasound analysis
WO2020122357A1 (en) Method and device for reconstructing medical image
Barnard et al. Machine learning for automatic paraspinous muscle area and attenuation measures on low-dose chest CT scans
Masoudi et al. Adipose tissue segmentation in unlabeled abdomen MRI using cross modality domain adaptation
Almeida et al. Lung ultrasound for point-of-care COVID-19 pneumonia stratification: computer-aided diagnostics in a smartphone. First experiences classifying semiology from public datasets
Sharma et al. Heart disease prediction using convolutional neural network
WO2020179950A1 (en) Deep learning-based method and device for prediction of progression of brain disease
Lucassen et al. Deep learning for detection and localization of B-lines in lung ultrasound
Hou et al. Exploring effective DNN models for forensic age estimation based on panoramic radiograph images
CN114241187A (en) Muscle disease diagnosis system, device and medium based on ultrasonic bimodal images
Sharifrazi et al. Hypertrophic cardiomyopathy diagnosis based on cardiovascular magnetic resonance using deep learning techniques
CN110827275B (en) Liver nuclear magnetic artery image quality grading method based on raspberry pie and deep learning
WO2014163334A1 (en) Method for modeling and analyzing computational fluid dynamics on basis of material properties
WO2023027248A1 (en) Data generation method, and training method and apparatus using same
CN108280832A (en) Medical image analysis method, medical image analysis system and storage medium
CN111798427B (en) System for detecting karyokiness in gastrointestinal stromal tumor based on migration learning
US9123260B2 (en) Receiver operating characteristic-based training
CN107256544A (en) A kind of prostate cancer image diagnosing method and system based on VCG16
Suyuti et al. Pneumonia Classification of Thorax Images using Convolutional Neural Networks
WO2023085910A1 (en) Image learning method, device, program, and recording medium using generative adversarial network
WO2023210893A1 (en) Apparatus and method for analyzing ultrasound images
WO2024049208A1 (en) Device and method for measuring air distribution in abdomen
CN112132790B (en) DAC-GAN model construction method and application thereof in mammary gland MR image
EP4312229A1 (en) Information processing apparatus, information processing method, program, trained model, and learning model generation method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17897931

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17897931

Country of ref document: EP

Kind code of ref document: A1