WO2019168310A1 - Device for spatial normalization of medical image using deep learning and method therefor - Google Patents

Device for spatial normalization of medical image using deep learning and method therefor

Info

Publication number
WO2019168310A1
WO2019168310A1 (PCT/KR2019/002264)
Authority
WO
WIPO (PCT)
Prior art keywords
image
spatial
learning
mri
normalized
Prior art date
Application number
PCT/KR2019/002264
Other languages
French (fr)
Korean (ko)
Inventor
이동영
김유경
이재성
변민수
신성아
강승관
Original Assignee
서울대학교산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020180123300A (KR102219890B1)
Application filed by 서울대학교산학협력단
Priority to US16/965,815 (US11475612B2)
Publication of WO2019168310A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis

Definitions

  • a spatial normalization apparatus and method for medical images using deep learning are provided.
  • SN spatial or quantitative standardization
  • PET brain positron emission tomography
  • SPECT single photon emission computed tomography
  • fMRI functional magnetic resonance imaging
  • EEG electroencephalogram
  • MEG magnetoencephalography
  • MRI magnetic resonance imaging
  • CT computed tomography
  • This method has the advantage of producing accurate data, but it is constrained in time and cost because it requires the use of expensive additional imaging equipment.
  • spatial normalization (SN) of brain positron emission tomography (PET) images can also be performed using an average template obtained from various samples, but because such a template does not accurately reflect the varied characteristics of individual images, such as those of patients versus normal subjects, accurate analysis is difficult.
  • One embodiment of the present invention is to create an individually adaptable template using deep learning, and to spatially normalize individual medical functional images based on the generated template.
  • the apparatus for spatial normalization of medical images includes an adaptive template generation unit that, when a plurality of functional medical images are input into a deep learning architecture, generates an adaptive template for spatially normalizing functional medical images based on pre-stored learning data; a learning unit that, for an input functional medical image of a user, generates an image based on the adaptive template through a generative adversarial network (GAN) and learns by repeating the process of determining whether the generated image is genuine; and a spatial normalization unit that provides a spatially normalized functional medical image of the user based on the learning result.
  • the adaptive template generator measures the difference between the adaptive template result derived from the deep learning architecture and the MRI-based spatially normalized image, which is the pre-stored learning data, and generates an individually adaptable adaptive template through deep learning so that the measured difference is minimized.
  • the learning unit spatially normalizes the functional medical image of the user based on the adaptive template to generate a spatially normalized image, compares the spatially normalized image with the MRI-based spatially normalized image, which is the pre-stored training data, to determine whether it is genuine, and repeats the generation and judgment of spatially normalized images each time a determination result is produced; when the determination can no longer distinguish the spatially normalized image from the MRI-based spatially normalized image, the learning can be completed.
  • the learning unit may learn iteratively so that the Jensen-Shannon divergence between the data probability distribution of the spatially normalized images and the probability distribution of the normalized MRI data is minimized.
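  • For reference, the Jensen-Shannon divergence between two probability distributions P and Q (the quantity minimized above) is defined as

        \mathrm{JSD}(P \,\|\, Q) = \tfrac{1}{2} D_{KL}(P \,\|\, M) + \tfrac{1}{2} D_{KL}(Q \,\|\, M), \qquad M = \tfrac{1}{2}(P + Q)

    and it is zero exactly when the two distributions coincide.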
  • the learning unit may repeatedly learn through a process of solving the min-max problem by using the following equation.
  • G() is the generator that produces an image
  • D() is the discriminator that judges the generated image
  • z is a functional medical image in the native space
  • x is an MRI-based SN result
  • θG and θD are the parameters of the generator and the discriminator, respectively
  • E is the expected value for a given probability distribution
  • the fidelity loss term is the loss value between the generated image and the MRI-based SN result
  • m is the batch size
  • I_i^MNI is the image representing the MRI-based SN result in the MNI space
  • I_i^Native is the functional medical image in the native space.
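  • The equation referred to above does not render in this extraction. Assuming the standard GAN min-max objective augmented with the fidelity term implied by the symbols listed (an assumption, since the original equation image is unavailable), it would take the form

        \min_{\theta_G}\max_{\theta_D}\; \mathbb{E}_{x}\left[\log D(x)\right] + \mathbb{E}_{z}\left[\log\left(1 - D(G(z))\right)\right] + \lambda\, L_{fid},
        \qquad L_{fid} = \frac{1}{m}\sum_{i=1}^{m}\left\| G\left(I_i^{Native}\right) - I_i^{MNI} \right\|^{2}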
  • a method according to one embodiment includes generating an adaptive template by applying a plurality of functional medical images corresponding to various environments to a deep learning architecture; for an input functional medical image of a user, generating an image based on the adaptive template through a generative adversarial network (GAN) and repeating the process of determining whether the generated image is genuine; and, when the learning is completed, providing a spatially normalized functional medical image of the user based on the learning result.
  • by creating an optimized template from images obtained through deep learning and performing spatial normalization with it, the resulting error can be minimized and highly accurate results can be obtained.
  • by performing spatial normalization with an individually adaptable optimal template rather than a template built from average data, the various characteristics of each individual PET image can be accurately reflected.
  • FIG. 1 is a flowchart illustrating a process of spatial normalization of a medical image according to an exemplary embodiment of the present invention.
  • FIG. 2 is a block diagram illustrating a spatial normalization apparatus for medical images using deep learning according to an exemplary embodiment of the present invention.
  • FIG. 3 is an exemplary view for explaining an adaptive template unit and a learning unit according to an embodiment of the present invention.
  • FIG. 4 is a flowchart illustrating a process of spatial normalization of a medical image according to an exemplary embodiment of the present invention.
  • FIG. 5 is an exemplary diagram for describing a deep learning architecture and a generative adversarial network according to an embodiment of the present invention.
  • FIG. 6 is a diagram for comparing spatially normalized images obtained by the method according to an embodiment of the present invention, the average template method, and the method using magnetic resonance image information.
  • FIG. 7 is a graph showing the ratio of standardized uptake values between the method according to an embodiment of the present invention and the method using magnetic resonance image information, and between the average template method and the method using magnetic resonance image information.
  • FIG. 1 is a flowchart illustrating a process of spatial normalization of a medical image according to an exemplary embodiment of the present invention.
  • the spatial normalization apparatus 100 performs deep learning using, as a training target, the spatially normalized PET generated from the functional medical image by applying the extracted deformation field.
  • the spatial normalization apparatus 100 may perform spatial normalization of a functional medical image (PET) without MRI (3D magnetic resonance imaging) or CT through deep learning.
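  • As an illustration of the preprocessing described above, the following minimal sketch (Python with NumPy/SciPy; the array layout, the displacement convention, and the function name are assumptions, not taken from the patent) shows how a deformation field obtained from MRI-based spatial normalization could be applied to a native-space PET volume to produce the training target:

        import numpy as np
        from scipy.ndimage import map_coordinates

        def apply_deformation_field(pet_native, displacement):
            # pet_native: (X, Y, Z) PET volume in native space
            # displacement: (3, X, Y, Z) voxel displacements from MRI-based spatial normalization
            identity = np.indices(pet_native.shape).astype(np.float64)
            coords = identity + displacement          # where each output voxel samples from
            # trilinear interpolation of the PET volume at the displaced coordinates
            return map_coordinates(pet_native, coords, order=1, mode="nearest")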
  • The functional medical image used hereinafter refers to positron emission tomography (PET), which can represent physiological, chemical, and functional information about the human body in three dimensions using positron-emitting radiopharmaceuticals, but it also includes all other functional medical images that are difficult to normalize spatially from their own information alone, such as SPECT, fMRI, EEG, and MEG.
  • the spatial normalization apparatus 100 that performs spatial normalization on the functional medical image PET through the learned artificial neural network will be described in detail with reference to FIG. 2.
  • FIG. 2 is a block diagram illustrating a spatial normalization apparatus for medical images using deep learning according to an embodiment of the present invention, and FIG. 3 is an exemplary diagram for explaining the adaptive template unit and the learning unit according to an embodiment of the present invention.
  • the apparatus 100 for normalizing a medical image includes an adaptive template generator 110, a learner 120, and a spatial normalizer 130.
  • the adaptive template generator 110 generates an adaptive template 200 for spatial normalization of a functional medical image.
  • when the adaptive template generator 110 receives a plurality of functional medical images, it inputs them into a convolutional auto-encoder (CAE) and generates, through deep learning, an adaptive template 200 that can be individually adapted.
  • the adaptive template generator 110 may perform this deep learning using the previously stored learning data 300.
  • the prestored learning data 300 includes an image obtained by performing spatial normalization based on magnetic resonance imaging (MRI).
  • the deep learning architecture includes a convolutional neural network, which represents an artificial neural network that understands an input image through calculation, extracts features to obtain information, or generates a new image.
  • when the learning unit 120 receives a functional medical image of the user, it generates a spatially normalized image using the adaptive template 200, and then determines authenticity based on the generated spatially normalized image and the previously stored learning data.
  • in other words, when the learning unit 120 receives the functional medical image of the user, it performs iterative learning on the received image through a generative adversarial network (GAN).
  • a GAN learns and derives its results through competition between two neural network models, a generator (G) and a discriminator (D).
  • the generator (G) learns from real data and generates data with the goal of producing data close to the real data, while the discriminator (D) is trained to judge whether the data generated by the generator (G) is real or fake.
  • using this generative adversarial network, the learning unit 120 generates a spatially normalized image from the input functional medical image of the user and repeats the process of determining whether the generated image is genuine; through such iterative learning, the learning unit 120 can derive a result very close to the result obtained by spatially normalizing the functional medical image with images spatially normalized based on magnetic resonance imaging (MRI).
  • the learning unit 120 repeats, in turn, the process of spatially normalizing the functional medical image of the user to generate an image and judging whether the generated image is genuine, until the generated spatially normalized image can no longer be distinguished from the previously stored learning data.
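  • A minimal sketch of such an adversarial training step is shown below (PyTorch, illustrative only; the network definitions, the sigmoid-output discriminator, and the fidelity weight lambda_fid are assumptions rather than the patent's actual implementation):

        import torch
        import torch.nn.functional as F

        def train_step(generator, discriminator, opt_g, opt_d, pet_native, mri_sn, lambda_fid=1.0):
            # Discriminator update: MRI-based SN images should score high, generated images low
            d_real = discriminator(mri_sn)
            d_fake = discriminator(generator(pet_native).detach())
            d_loss = (F.binary_cross_entropy(d_real, torch.ones_like(d_real))
                      + F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake)))
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()

            # Generator update: fool the discriminator while staying close to the MRI-based label
            generated = generator(pet_native)
            score = discriminator(generated)
            adv_loss = F.binary_cross_entropy(score, torch.ones_like(score))
            fid_loss = F.mse_loss(generated, mri_sn)      # fidelity term between G(z) and the label
            g_loss = adv_loss + lambda_fid * fid_loss
            opt_g.zero_grad(); g_loss.backward(); opt_g.step()
            return d_loss.item(), g_loss.item()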
  • the spatial normalization unit 130 spatially normalizes the user's functional medical image through the fully trained generative adversarial network, and provides the spatially normalized functional medical image of the user.
  • the spatial normalization apparatus 100 of the medical image may be a server, a terminal, or a combination thereof.
  • Terminal collectively refers to any device that has arithmetic processing capability by virtue of having a memory and a processor, for example, a personal computer, a handheld computer, a personal digital assistant (PDA), a mobile phone, a smart device, a tablet, and the like.
  • the server may include a memory in which a plurality of modules are stored; a processor that is connected to the memory, responds to the plurality of modules, and processes service information to be provided to the terminal or action information for controlling the service information; a communication means; and a user interface (UI) display means.
  • the memory is a device for storing information and may include various kinds of memory, such as high-speed random access memory and non-volatile memory, including magnetic disk storage devices, flash memory devices, and other non-volatile solid-state memory devices.
  • the communication means transmits and receives service information or action information with the terminal in real time.
  • the UI display means outputs service information or action information of the device in real time.
  • the UI display means may be a separate device that directly or indirectly outputs or displays the UI, or may be part of the device.
  • FIG. 4 is a flowchart illustrating a process of spatially normalizing a medical image according to an exemplary embodiment of the present invention, and FIG. 5 is an exemplary diagram for describing the deep learning architecture and the generative adversarial network according to an exemplary embodiment of the present invention.
  • the spatial normalization apparatus 100 generates an adaptive template by inputting a plurality of functional medical images corresponding to various environments into the deep learning architecture (S410).
  • the deep learning architecture (CAE) is formed of a plurality of layers, and all of its convolutional layers extract features in a 3D manner.
  • the deep learning architecture (CAE) performs its operations using strided convolutions, and applies an exponential linear unit (ELU) activation function after each convolution.
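  • A minimal sketch of the kind of building block this describes is shown below (PyTorch; the channel counts, kernel size, and network depth are assumptions, since the patent only specifies 3D strided convolutions followed by ELU activations and batch normalization):

        import torch.nn as nn

        def conv_block(in_ch, out_ch):
            # 3D strided convolution -> batch normalization -> ELU
            return nn.Sequential(
                nn.Conv3d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
                nn.BatchNorm3d(out_ch),
                nn.ELU(),
            )

        # encoder half of a convolutional auto-encoder built from such blocks
        encoder = nn.Sequential(conv_block(1, 16), conv_block(16, 32), conv_block(32, 64))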
  • the spatial normalization apparatus 100 may measure a difference between the adaptive template result derived through the deep learning architecture and the spatial normalization result based on the pre-stored MRI.
  • in other words, the spatial normalization apparatus 100 may perform deep learning so that the loss function (L_CAE) of Equation 1 is minimized while measuring the difference between the output of the deep learning architecture (CAE) and the pre-stored MRI-based spatial normalization result.
  • m is the batch size, I_i^MNI is the image of the MRI-based spatial normalization (label) in the MNI space, I_i^Native is the input user's functional medical image in the native space, and N is the number of voxels in the MNI space.
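  • Equation 1 itself does not render in this extraction. Assuming the common mean-squared voxel-wise error (an assumption that is consistent with the symbols above), it would take the form

        L_{CAE} = \frac{1}{m}\sum_{i=1}^{m}\frac{1}{N}\left\| \mathrm{CAE}\left(I_i^{Native}\right) - I_i^{MNI} \right\|^{2}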
  • in this way, the spatial normalization apparatus 100 can generate an adaptive template capable of performing spatial normalization individually, through deep learning of the deep learning architecture (CAE).
  • the spatial normalization apparatus 100 receives a functional medical image of the user (S420).
  • the spatial normalization apparatus 100 may receive the functional medical image of the user from an associated user terminal (not shown) or server (not shown) over a communication network, which may be any type of network that carries data, such as a wired network, a short-range or long-range wireless network, or a combination thereof.
  • the spatial normalization apparatus 100 then generates an image by applying the functional medical image of the user to the adaptive template, determines whether the generated image is genuine, and performs iterative learning (S430).
  • the spatial normalization apparatus 100 may perform training while updating the generation model and the discrimination model in turn through the generative adversarial network (GAN).
  • first, the spatial normalization apparatus 100 spatially normalizes the functional medical image of the user based on the adaptive template through the generator (G) of the generative adversarial network (GAN), and generates a spatially normalized image.
  • the spatial normalization apparatus 100 may then determine authenticity by comparing the spatially normalized image with the MRI-based spatially normalized image, which is the pre-stored training data, through the discriminator (D) of the GAN.
  • FIG. 5(a) is an exemplary diagram illustrating the network structure of the generator (G) of the generative adversarial network (GAN), and FIG. 5(b) is an exemplary diagram showing the network structure of its discriminator (D).
  • in FIG. 5, each red-orange box represents a 3D strided convolutional kernel, where s is the stride and k is the kernel size.
  • each blue box represents batch normalization combined with an activation function such as Leaky ReLU or ELU.
  • the green box shows a fully connected layer together with its number of units, and the purple box shows a 3D transposed convolutional (deconvolution) layer with stride 2 and kernel size 3.
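  • A minimal sketch of the decoder-side block this describes, a 3D transposed convolution with stride 2 and kernel size 3 followed by normalization and an activation, is shown below (PyTorch; the channel counts and the Leaky ReLU slope are assumptions):

        import torch.nn as nn

        def deconv_block(in_ch, out_ch):
            return nn.Sequential(
                nn.ConvTranspose3d(in_ch, out_ch, kernel_size=3, stride=2,
                                   padding=1, output_padding=1),  # doubles each spatial dimension
                nn.BatchNorm3d(out_ch),
                nn.LeakyReLU(0.2),
            )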
  • the generator (G) of the GAN may use the same neural network as the deep learning architecture (CAE), but is not limited thereto.
  • using the generator (G) and the discriminator (D) of the generative adversarial network (GAN), the spatial normalization apparatus 100 repeats, in turn, the process of generating a spatially normalized image and judging it whenever a determination result of the discriminator (D) is produced.
  • the generative adversarial network (GAN) of the spatial normalization apparatus 100 is trained by solving a min-max problem such as Equation 2.
  • in Equation 2, z is the PET image in the native space (the input), x is the MRI-based spatial normalization result (the label), θG and θD denote the parameters of the generator and the discriminator, respectively, and E denotes the expectation over the given probability distribution.
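  • Equation 2 does not render in this extraction; under the standard GAN formulation that the surrounding discussion assumes, it would read

        \min_{\theta_G}\max_{\theta_D} V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_{z}(z)}\left[\log\left(1 - D(G(z))\right)\right]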
  • max D represents finding the discriminator D that maximizes the objective function.
  • in Equation 2, the first term E[log D(x)] is evaluated on the MRI-based spatial normalization result x and contributes to the value of the objective function.
  • the second term E[log(1 - D(G(z)))] involves the image G(z) created by the generator; because the inside of the term is 1 - D(G(z)), maximizing the second term with respect to D corresponds to minimizing D(G(z)).
  • as a result, using this two-term objective function, the spatial normalization apparatus 100 trains the discriminator (D) to output a large value when a real image is input and a small value when a fake image is input.
  • next, min_G represents finding the generator network G that minimizes the objective function.
  • G appears only in the second term, so the G that minimizes the whole objective is the G that minimizes the second term, which ultimately is the G that maximizes D(G(z)).
  • assuming an optimal discriminator D, the objective function for the generator G is equivalent to minimizing the Jensen-Shannon divergence between the distribution of the real data x and the distribution of the generated data G(z).
  • the spatial normalization apparatus 100 may therefore iteratively train and update the generator (G) and the discriminator (D) so that the Jensen-Shannon divergence between the data probability distribution of the spatially normalized images and the probability distribution of the normalized MRI data is minimized.
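  • This equivalence is the standard GAN result (not spelled out in the patent): for a fixed generator the optimal discriminator is

        D^{*}(x) = \frac{p_{data}(x)}{p_{data}(x) + p_{g}(x)}

    and substituting it back reduces the generator objective to 2\,\mathrm{JSD}(p_{data} \,\|\, p_{g}) - \log 4, so minimizing it drives the distribution of generated spatially normalized images toward the distribution of the MRI-based results.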
  • in addition, as in Equation 3, the spatial normalization apparatus 100 may add a fidelity loss between the image generated by the generator (G) and the MRI-based spatial normalization result (label) to the min-max problem.
  • in Equation 3, G() is the generator that produces an image and D() is the discriminator that judges the generated image
  • z is the functional medical image in the native space
  • x is the MRI-based SN result
  • θG and θD are the parameters of the generator and the discriminator, respectively
  • E is the expected value over the given probability distribution
  • the fidelity loss term is the loss between the generated image and the MRI-based SN result
  • m is the batch size
  • I_i^MNI is the image representing the MRI-based SN result in the MNI space
  • I_i^Native is the functional medical image in the native space.
  • the repeatedly trained generator (G) can thus produce a spatially normalized version of the user's functional medical image that has high similarity to the MRI-based spatial normalization result; when the discriminator can no longer distinguish the generated image from the MRI-based result, the spatial normalization apparatus 100 may complete the corresponding learning.
  • the spatial normalization apparatus 100 provides a spatially normalized medical image by spatially normalizing the functional medical image of the user through the trained algorithm (S440).
  • in other words, the spatial normalization apparatus 100 may spatially normalize and provide the functional medical image of the user through the fully trained generator (G) of the generative adversarial network (GAN).
  • the spatial normalization apparatus 100 may provide a spatial normalized functional medical image to which the corresponding adaptive template is applied using the adaptive template generated in step S410.
  • alternatively, the spatial normalization apparatus 100 may use only the deep learning architecture (CAE) to generate the adaptive template 200, and provide the functional medical image spatially normalized using only the adaptive template 200.
  • FIG. 6 is a diagram comparing spatially normalized images obtained by the method according to an embodiment of the present invention, the average template method, and the method using magnetic resonance image information.
  • FIG. 7 is a graph comparing the standardized uptake value ratios of the method according to an embodiment of the present invention against the method using magnetic resonance image information, and of the average template method against the method using magnetic resonance image information.
  • FIG. 6(a) is a spatially normalized PET image obtained using the average template
  • FIG. 6(b) is a spatially normalized PET image obtained using the method proposed in the present invention
  • FIG. 6(c) is a spatially normalized PET image obtained using magnetic resonance image information.
  • the image in (a) shows large differences in the areas indicated by the arrows and circles, so its accuracy is very low, whereas the spatial normalization of the PET image through the deep learning proposed in the present invention shows very high accuracy.
  • FIG. 7 illustrates a quantitative comparison of the spatial normalization method according to an embodiment of the present invention and the spatial normalization method using an average template.
  • FIG. 7 is a graph of the standardized uptake value ratio (SUVR), showing the error relative to the result obtained with magnetic resonance image information for selected brain regions.
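  • The standardized uptake value ratio is conventionally computed as the mean uptake in a target region divided by the mean uptake in a reference region; the specific target and reference regions used for FIG. 7 are not stated in this excerpt:

        \mathrm{SUVR} = \frac{\overline{C}_{target}}{\overline{C}_{reference}}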
  • the error of the deep learning architecture (convolutional auto-encoder) model (hatched blocks) is less than 5%.
  • the GAN proposed in the present invention shows the smallest error.
  • accordingly, the method of generating an adaptive template through the proposed deep learning architecture and spatially normalizing the functional medical image through the generative adversarial network using the generated adaptive template can be seen to be the most similar to the normalization method that uses the actual magnetic resonance image information.
  • the program for executing the method according to one embodiment of the present invention can be recorded on a computer readable recording medium.
  • The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination.
  • the media may be those specially designed and constructed or those known and available to those skilled in the computer software arts.
  • Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • the medium may be a transmission medium such as an optical or metal wire, a waveguide, or the like including a carrier wave for transmitting a signal specifying a program command, a data structure, and the like.
  • Examples of program instructions include not only machine code generated by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

A device for spatial normalization of a medical image comprises: an adaptive template generation unit for generating an adaptive template for spatially normalizing a functional medical image on the basis of pre-stored learning data, if a plurality of functional medical images are inputted to a deep learning architecture; a learning unit which generates, for an inputted functional medical image of a user, an image on the basis of the adaptive template by means of a generative adversarial network (GAN), and which learns by repeating the process of determining whether the generated image is genuine; and a spatial normalization unit for providing a spatially normalized functional medical image of the user on the basis of a learning result.

Description

Spatial Normalization Device and Method for Medical Images Using Deep Learning
Provided are a spatial normalization apparatus and method for medical images using deep learning.
For the statistical analysis of medical images, it is reasonable to normalize images obtained from individual subjects into a single space and compare them as three-dimensional voxel values.
In particular, spatial normalization (SN, also called anatomical standardization) is an essential procedure for the statistical comparison or objective evaluation of brain positron emission tomography (PET) and single photon emission computed tomography (SPECT) images.
Many errors occur when spatial normalization (SN) is performed using only brain positron emission tomography (PET) images. More specifically, positron emission tomography (PET), single photon emission computed tomography (SPECT), functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and magnetoencephalography (MEG) are difficult to normalize spatially from their own information alone, because they are functional images with limited spatial resolution.
Therefore, magnetic resonance imaging (MRI) or computed tomography (CT) is generally acquired together with the functional image and spatially normalized first, and the resulting deformation vector field is then applied to the functional image to spatially normalize it.
This approach has the advantage of producing accurate data, but it is constrained in time and cost because it requires the use of expensive equipment.
There is also a method of spatially normalizing brain positron emission tomography (PET) images using an average template obtained from various samples, but because such a template does not accurately reflect the varied characteristics of individual images, such as those of patients versus normal subjects, accurate analysis is difficult.
Meanwhile, in recent years, research using machine learning with big-data-based deep learning has been conducted in many fields to successfully solve complex, high-dimensional problems.
Accordingly, there is a need for a technique that uses such machine learning to perform accurate spatial normalization at low cost, without separate magnetic resonance imaging (MRI) or computed tomography (CT) scans.
One embodiment of the present invention creates an individually adaptable template using deep learning and spatially normalizes individual functional medical images based on the generated template.
In addition to the above object, the invention may be used to achieve other objects not specifically mentioned.
An apparatus for spatial normalization of medical images according to one embodiment of the present invention includes: an adaptive template generation unit that, when a plurality of functional medical images are input into a deep learning architecture, generates an adaptive template for spatially normalizing functional medical images based on pre-stored learning data; a learning unit that, for an input functional medical image of a user, generates an image based on the adaptive template through a generative adversarial network (GAN) and learns by repeating the process of determining whether the generated image is genuine; and a spatial normalization unit that provides a spatially normalized functional medical image of the user based on the learning result.
The adaptive template generation unit may measure the difference between the adaptive template result derived from the deep learning architecture and the MRI-based spatially normalized image, which is the pre-stored learning data, and may generate an individually adaptable adaptive template through deep learning so that the measured difference is minimized.
The learning unit may spatially normalize the functional medical image of the user based on the adaptive template to generate a spatially normalized image, compare the spatially normalized image with the MRI-based spatially normalized image, which is the pre-stored training data, to determine whether it is genuine, and repeat the generation and judgment of spatially normalized images each time a determination result is produced; when the determination can no longer distinguish the spatially normalized image from the MRI-based spatially normalized image, the learning may be completed.
The learning unit may learn iteratively so that the Jensen-Shannon divergence between the data probability distribution of the spatially normalized images and the probability distribution of the normalized MRI data is minimized.
The learning unit may learn iteratively through a process of solving the min-max problem using the following equation.
\min_{\theta_G}\max_{\theta_D}\; \mathbb{E}_{x}\left[\log D(x)\right] + \mathbb{E}_{z}\left[\log\left(1 - D(G(z))\right)\right] + \lambda\, L_{fid}, \qquad L_{fid} = \frac{1}{m}\sum_{i=1}^{m}\left\| G\left(I_i^{Native}\right) - I_i^{MNI} \right\|^{2}
Here, G() is the generator that produces an image, D() is the discriminator that judges the image, z is the functional medical image in the native space, x is the MRI-based SN result, θG and θD denote the parameters of the generator and the discriminator, respectively, E is the expected value over the given probability distribution, L_fid is the fidelity loss between the generated image and the MRI-based SN result, m is the batch size, I_i^MNI is the image representing the MRI-based SN result in the MNI space, and I_i^Native is the functional medical image in the native space.
A method according to one embodiment of the present invention includes: generating an adaptive template by applying a plurality of functional medical images corresponding to various environments to a deep learning architecture; for an input functional medical image of a user, generating an image based on the adaptive template through a generative adversarial network (GAN) and repeating the process of determining whether the generated image is genuine; and, when the learning is completed, providing a spatially normalized functional medical image of the user based on the learning result.
According to one embodiment of the present invention, an optimized template is created from images obtained through deep learning and spatial normalization is performed with it, so that the resulting error is minimized and highly accurate results can be obtained.
In addition, because the image is normalized using only the positron emission tomography image, quantitative and statistical analyses are possible, so that many experiments can be carried out at low cost for patient diagnosis and research.
Furthermore, by performing spatial normalization with an individually adaptable optimal template rather than a template built from average data, the various characteristics of each individual PET image can be accurately reflected.
FIG. 1 is a flowchart illustrating a process of spatially normalizing a medical image according to an exemplary embodiment of the present invention.
FIG. 2 is a block diagram illustrating a spatial normalization apparatus for medical images using deep learning according to an exemplary embodiment of the present invention.
FIG. 3 is an exemplary diagram for explaining an adaptive template unit and a learning unit according to an embodiment of the present invention.
FIG. 4 is a flowchart illustrating a process of spatially normalizing a medical image according to an exemplary embodiment of the present invention.
FIG. 5 is an exemplary diagram for describing a deep learning architecture and a generative adversarial network according to an embodiment of the present invention.
FIG. 6 is a diagram for comparing spatially normalized images obtained by the method according to an embodiment of the present invention, the average template method, and the method using magnetic resonance image information.
FIG. 7 is a graph showing the ratio of standardized uptake values between the method according to an embodiment of the present invention and the method using magnetic resonance image information, and between the average template method and the method using magnetic resonance image information.
Embodiments of the present invention are described in detail below with reference to the accompanying drawings so that those of ordinary skill in the art to which the present invention pertains can easily practice them. The present invention may be implemented in many different forms and is not limited to the embodiments described herein. In the drawings, parts irrelevant to the description are omitted in order to clearly describe the present invention, and the same reference numerals are used for the same or similar components throughout the specification. Detailed descriptions of well-known technologies are omitted.
Throughout the specification, when a part is said to "include" a certain component, this means that it may further include other components rather than excluding them, unless specifically stated otherwise.
Hereinafter, the overall flow of spatially normalizing a medical image using deep learning according to one embodiment of the present invention is described in detail with reference to FIG. 1.
FIG. 1 is a flowchart illustrating a process of spatially normalizing a medical image according to an exemplary embodiment of the present invention.
As shown in FIG. 1, a deformation field is generally extracted in the process of spatially normalizing an MRI image (3D MRI in native space) to generate a spatially normalized MRI.
The spatial normalization apparatus 100, which is one embodiment of the present invention, performs deep learning using as a training target the spatially normalized PET produced by applying the extracted deformation field to the functional medical image (PET).
The spatial normalization apparatus 100 can then spatially normalize a functional medical image (PET) through deep learning, without a 3D magnetic resonance image (MRI) or CT.
The functional medical image used hereinafter refers to positron emission tomography (PET), which can represent physiological, chemical, and functional information about the human body in three dimensions using positron-emitting radiopharmaceuticals, but is not limited thereto and includes all functional medical images that are difficult to normalize spatially from their own information alone, such as SPECT, fMRI, EEG, and MEG.
Hereinafter, the spatial normalization apparatus 100, which spatially normalizes the functional medical image (PET) through the trained artificial neural network, is described in detail with reference to FIG. 2.
FIG. 2 is a block diagram illustrating a spatial normalization apparatus for medical images using deep learning according to one embodiment of the present invention, and FIG. 3 is an exemplary diagram for explaining the adaptive template unit and the learning unit according to one embodiment of the present invention.
As shown in FIG. 2, the apparatus 100 for spatial normalization of a medical image includes an adaptive template generator 110, a learning unit 120, and a spatial normalization unit 130.
First, the adaptive template generator 110 generates an adaptive template 200 for spatially normalizing a functional medical image.
When the adaptive template generator 110 receives a plurality of functional medical images, it inputs them into a deep learning architecture (a convolutional auto-encoder, CAE) and generates, through deep learning, an adaptive template 200 that can be individually adapted.
As shown in FIG. 3(a), the adaptive template generator 110 may perform deep learning using the pre-stored learning data 300.
Here, the pre-stored learning data 300 includes images obtained by performing spatial normalization based on magnetic resonance imaging (MRI).
The deep learning architecture includes a convolutional neural network, an artificial neural network that understands an input image through computation and either extracts features to obtain information or generates a new image.
Next, as shown in FIG. 3(b), when the learning unit 120 receives a functional medical image of the user, it generates a spatially normalized image using the adaptive template 200. The learning unit 120 then determines authenticity based on the generated spatially normalized image and the pre-stored learning data.
In other words, when the learning unit 120 receives the functional medical image of the user, it performs iterative learning on the received functional medical image through a generative adversarial network (GAN).
Here, a generative adversarial network (GAN) is a machine learning technique for generating images similar to the original data distribution, and is used as a technology for easily and quickly producing realistic fakes. A GAN learns and derives its results through competition between two neural network models, a generator (G) and a discriminator (D). The generator (G) learns from real data and generates data with the goal of producing data close to the real data, while the discriminator (D) is trained to judge whether the data generated by the generator (G) is real or fake.
Using such a generative adversarial network, the learning unit 120 generates a spatially normalized image based on the input functional medical image of the user and repeats the process of determining whether the generated image is genuine. Through such iterative learning, the learning unit 120 can derive a result very close to the result of spatially normalizing the functional medical image using images spatially normalized based on magnetic resonance imaging (MRI).
The learning unit 120 repeats, in turn, the process of spatially normalizing the functional medical image of the user to generate an image and judging whether the generated image is genuine, until the generated spatially normalized image can no longer be distinguished from the pre-stored learning data.
Next, the spatial normalization unit 130 spatially normalizes the user's functional medical image through the fully trained generative adversarial network, and provides the spatially normalized functional medical image of the user.
Meanwhile, the spatial normalization apparatus 100 for medical images may be a server, a terminal, or a combination thereof.
Terminal collectively refers to any device that has arithmetic processing capability by virtue of having a memory and a processor, for example, a personal computer, a handheld computer, a personal digital assistant (PDA), a mobile phone, a smart device, a tablet, and the like.
The server may include a memory in which a plurality of modules are stored; a processor that is connected to the memory, responds to the plurality of modules, and processes service information to be provided to the terminal or action information for controlling the service information; a communication means; and a user interface (UI) display means.
The memory is a device for storing information and may include various kinds of memory, such as high-speed random access memory and non-volatile memory, including magnetic disk storage devices, flash memory devices, and other non-volatile solid-state memory devices.
The communication means transmits and receives service information or action information to and from the terminal in real time.
The UI display means outputs service information or action information of the apparatus in real time. The UI display means may be an independent device that directly or indirectly outputs or displays the UI, or may be part of the apparatus.
Hereinafter, the process by which the spatial normalization apparatus 100 generates an adaptive template and generates a spatially normalized functional medical image using the generative adversarial network is described in detail with reference to FIGS. 4 and 5.
FIG. 4 is a flowchart illustrating a process of spatially normalizing a medical image according to one embodiment of the present invention, and FIG. 5 is an exemplary diagram for describing the deep learning architecture and the generative adversarial network according to one embodiment of the present invention.
As shown in FIG. 4, the spatial normalization apparatus 100 generates an adaptive template by inputting a plurality of functional medical images corresponding to various environments into the deep learning architecture (S410).
Here, the deep learning architecture (CAE) is formed of a plurality of layers, and all convolutional layers extract features in a 3D manner. The deep learning architecture (CAE) performs its operations using strided convolutions and applies an exponential linear unit (ELU) activation function after each convolution.
In addition, the deep learning architecture (CAE) applies batch normalization to all layers except the final output layer.
The spatial normalization apparatus 100 may measure the difference between the adaptive template result derived through this deep learning architecture and the pre-stored MRI-based spatial normalization result.
In other words, the spatial normalization apparatus 100 may perform deep learning so that the loss function L_CAE of Equation 1 below is minimized while measuring the difference between the output of the deep learning architecture (CAE) and the pre-stored MRI-based spatial normalization result.
[Equation 1]
L_{CAE} = \frac{1}{m}\sum_{i=1}^{m}\frac{1}{N}\left\| \mathrm{CAE}\left(I_i^{Native}\right) - I_i^{MNI} \right\|^{2}
where m is the batch size, I_i^MNI is the image of the MRI-based spatial normalization (label) in the MNI space, I_i^Native is the input user's functional medical image in the native space, and N is the number of voxels in the MNI space.
In this way, the spatial normalization apparatus 100 can generate an adaptive template capable of performing spatial normalization individually, through deep learning of the deep learning architecture (CAE).
Next, the spatial normalization apparatus 100 receives a functional medical image of the user (S420).
The spatial normalization apparatus 100 may receive the functional medical image of the user from an associated user terminal (not shown) or server (not shown) over a communication network, which may be any type of network that carries data, such as a wired network, a short-range or long-range wireless network, or a combination thereof.
The spatial normalization apparatus 100 then generates an image by applying the functional medical image of the user to the adaptive template, determines whether the generated image is genuine, and performs iterative learning (S430).
The spatial normalization apparatus 100 may perform training while updating the generation model and the discrimination model in turn through the generative adversarial network (GAN).
First, the spatial normalization apparatus 100 spatially normalizes the functional medical image of the user based on the adaptive template through the generator (G) of the generative adversarial network (GAN), and generates a spatially normalized image.
The spatial normalization apparatus 100 may then determine authenticity by comparing the spatially normalized image with the MRI-based spatially normalized image, which is the pre-stored training data, through the discriminator (D) of the GAN.
FIG. 5(a) is an exemplary diagram illustrating the network structure of the generator (G) of the generative adversarial network (GAN), and FIG. 5(b) is an exemplary diagram showing the network structure of its discriminator (D).
Referring to FIG. 5, each red-orange box represents a 3D strided convolutional kernel, where s is the stride and k is the kernel size. Each blue box represents batch normalization combined with an activation function such as Leaky ReLU or ELU. The green box shows a fully connected layer together with its number of units, and the purple box shows a 3D transposed convolutional (deconvolution) layer with stride 2 and kernel size 3.
Meanwhile, as shown in FIG. 5(a), the generator (G) of the generative adversarial network (GAN) may use the same neural network as the deep learning architecture (CAE), but is not limited thereto.
Using the generator (G) and the discriminator (D) of this generative adversarial network (GAN), the spatial normalization apparatus 100 repeats, in turn, the process of generating a spatially normalized image and judging it whenever a determination result of the discriminator (D) is produced.
The generative adversarial network (GAN) of the spatial normalization apparatus 100 is trained by solving a min-max problem such as Equation 2 below.
[Equation 2]
\min_{\theta_G}\max_{\theta_D} V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_{z}(z)}\left[\log\left(1 - D(G(z))\right)\right]
where z is the PET image in the native space (the input), x is the MRI-based spatial normalization result (the label), θG and θD denote the parameters of the generator and the discriminator, respectively, and E denotes the expectation over the given probability distribution.
First, max_D denotes finding the discriminator D that maximizes the objective function.
In Equation 2, the first term, E[log D(x)], is the value of the objective function evaluated on the MRI-based spatial normalization result (x). The second term, E[log(1 - D(G(z)))], contains the image G(z) produced by the generator; since the expression inside this term is 1 - D(G(z)), maximizing the second term with respect to D corresponds to minimizing D(G(z)).
As a result, using the objective function of these two terms, the spatial normalization apparatus 100 trains the discriminator (D) to output a large value when given a real image and a small value when given a fake image.
Next, min_G denotes finding the generator network G that minimizes the objective function.
Looking at Equation 2, G appears only in the second term; therefore the G that minimizes the whole function is the G that minimizes the second term, which in turn is the G that maximizes D(G(z)).
Consequently, assuming an optimal discriminator D, the objective for the generator (G) is equivalent to minimizing the Jensen-Shannon divergence between the distribution of the real data x and the distribution of the generated data G(z).
Therefore, the spatial normalization apparatus 100 repeatedly trains and updates the generator (G) and the discriminator (D) so that the Jensen-Shannon divergence between the data probability distribution of the spatially normalized images and the probability distribution of the normalized MRI data is minimized.
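The alternating update of the discriminator (D) and the generator (G) described above can be sketched as follows; this is a minimal illustration of the min-max game of Equation 2, assuming a PyTorch implementation in which the discriminator outputs a probability in (0, 1). The optimizer choice and the small epsilon added for numerical stability are assumptions, not part of the specification.

```python
# A minimal sketch of one alternating training step, assuming PyTorch and a
# discriminator D whose output is a probability in (0, 1).
import torch

def train_step(G, D, opt_G, opt_D, z, x, eps=1e-8):
    """One iteration of the min-max game of Equation 2.

    z : PET volume in native space (input to the generator)
    x : MRI-based spatial normalization result (label)
    """
    # D step: ascend E[log D(x)] + E[log(1 - D(G(z)))]
    opt_D.zero_grad()
    d_real = D(x)
    d_fake = D(G(z).detach())
    loss_D = -(torch.log(d_real + eps).mean()
               + torch.log(1.0 - d_fake + eps).mean())
    loss_D.backward()
    opt_D.step()

    # G step: descend E[log(1 - D(G(z)))]
    # (the non-saturating form -log D(G(z)) is a common practical substitute)
    opt_G.zero_grad()
    d_fake = D(G(z))
    loss_G = torch.log(1.0 - d_fake + eps).mean()
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()
```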
In addition, as shown in Equation 3 below, the spatial normalization apparatus 100 may add a fidelity loss between the image generated by the generator (G) and the MRI-based spatial normalization result (the label) to the min-max problem.
[Equation 3]

$$\min_{\theta_G}\max_{\theta_D}\; \mathbb{E}_{x}\!\left[\log D(x)\right] + \mathbb{E}_{z}\!\left[\log\!\left(1 - D(G(z))\right)\right] + L_{\mathrm{fid}}$$

where

$$L_{\mathrm{fid}} = \frac{1}{m}\sum_{i=1}^{m}\left\lVert G\!\left(I_i^{\mathrm{Native}}\right) - I_i^{\mathrm{MNI}}\right\rVert$$

Here, G() is the generator that produces the image, D() is the discriminator that judges the image, z is the functional medical image in native space, x is the MRI-based SN result, $\theta_G$ and $\theta_D$ denote the parameters of the generator and the discriminator, respectively, E denotes the expectation with respect to the given probability distribution, $L_{\mathrm{fid}}$ is the fidelity loss between the generated image and the MRI-based SN result, m is the batch size, $I_i^{\mathrm{MNI}}$ is the image representing the MRI-based SN result in MNI space, and $I_i^{\mathrm{Native}}$ is the functional medical image in native space.
The generator (G), trained iteratively in this way, can produce a spatially normalized image of the user's functional medical image with very high similarity to the MRI-based spatial normalization result.
In this process, when the discriminator (D) of the generative adversarial network (GAN) can no longer distinguish the MRI-based spatial normalization result from the spatially normalized image produced by the generator (G), the spatial normalization apparatus 100 may conclude the training.
Next, the spatial normalization apparatus 100 spatially normalizes the functional medical image of the user through the trained algorithm and provides the spatially normalized medical image (S440).
The spatial normalization apparatus 100 may spatially normalize and provide the functional medical image of the user through the generator (G) whose training in the generative adversarial network (GAN) has been completed.
Meanwhile, the spatial normalization apparatus 100 may provide a spatially normalized functional medical image by applying the adaptive template generated in step S410. In other words, the spatial normalization apparatus 100 may use the deep learning architecture (CAE) only to generate the adaptive template 200 and may spatially normalize and provide the functional medical image using only the adaptive template 200.
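A minimal sketch of the inference step S440 is given below: once training has converged, only the trained generator is required to spatially normalize a new subject's functional image. The input shape and any preprocessing are assumptions.

```python
# A minimal inference sketch, assuming PyTorch and a trained CAE-style
# generator G; the (1, 1, D, H, W) input layout is an assumption.
import torch

@torch.no_grad()
def spatially_normalize(G, pet_native):
    """pet_native: a (1, 1, D, H, W) PET volume in native space."""
    G.eval()
    return G(pet_native)  # spatially normalized image in template (MNI) space
```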
Hereinafter, with reference to FIGS. 6 and 7, the spatial normalization method of a medical image according to an embodiment of the present invention and the spatial normalization method using an average template are compared, and their accuracy relative to the result obtained using magnetic resonance image information is described in detail.
FIG. 6 is a diagram comparing images spatially normalized by the method according to an embodiment of the present invention, by the average-template method, and by the method using magnetic resonance image information, and FIG. 7 is a graph showing the ratio of standardized uptake values (SUVr) of the method according to an embodiment of the present invention relative to the method using magnetic resonance image information, and of the average-template method relative to the method using magnetic resonance image information.
FIG. 6(a) is a PET image spatially normalized using the average template, FIG. 6(b) is a PET image spatially normalized by the method proposed in the present invention, and FIG. 6(c) is a PET image spatially normalized using magnetic resonance image information.
Referring to FIG. 6, the images of (b) and (c) are nearly identical, whereas the image of (a) shows large differences in the regions indicated by the arrows and circles. In other words, when the PET image is spatially normalized using the average template the accuracy is low, whereas when the PET image is spatially normalized through the deep learning proposed in the present invention the accuracy is very high.
FIG. 7 shows the quantitative analysis results of the spatial normalization method according to an embodiment of the present invention and the spatial normalization method using an average template.
The graph of FIG. 7 shows the ratio of standardized uptake values (SUVr), obtained by selecting certain regions of the brain and comparing the error against the result based on magnetic resonance image information.
Referring to FIG. 7, the error of the result spatially normalized through the average template (white blocks) reaches nearly 20%, whereas the error of the deep learning architecture (CAE: convolutional auto-encoder) used in the present invention (hatched blocks) is less than 5%. In particular, the generative adversarial network (GAN) proposed in the present invention shows the smallest error.
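For illustration, an SUVr comparison such as the one in FIG. 7 could be computed as sketched below; the choice of target regions and of a reference region (for example, the cerebellum, commonly used in amyloid PET) is an assumption and is not specified in the document.

```python
# A minimal sketch of an SUVr percent-error comparison, assuming NumPy arrays
# and boolean region masks; region choices are illustrative assumptions.
import numpy as np

def suvr(volume, region_mask, reference_mask):
    """Ratio of mean uptake in a target region to mean uptake in a reference region."""
    return volume[region_mask].mean() / volume[reference_mask].mean()

def percent_error(volume_test, volume_mri_sn, region_mask, reference_mask):
    """Percent SUVr error of a spatially normalized volume against the MRI-based result."""
    suvr_test = suvr(volume_test, region_mask, reference_mask)
    suvr_ref = suvr(volume_mri_sn, region_mask, reference_mask)
    return 100.0 * abs(suvr_test - suvr_ref) / suvr_ref
```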
Therefore, it can be seen that the method of generating an adaptive template through the deep learning architecture proposed in the present invention and spatially normalizing a functional medical image through the generative adversarial network using the generated adaptive template is the closest to the method of spatial normalization that actually uses magnetic resonance image information.
A program for executing the method according to an embodiment of the present invention may be recorded on a computer-readable recording medium.
The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The medium may be one specially designed and constructed, or one known and available to those skilled in the art of computer software. Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. The medium may also be a transmission medium, such as an optical or metal line or a waveguide, including a carrier wave that transmits a signal specifying program instructions, data structures, and the like. Examples of program instructions include not only machine code produced by a compiler but also high-level language code that can be executed by a computer using an interpreter or the like.
Although one preferred embodiment of the present invention has been described in detail above, the scope of the present invention is not limited thereto, and various modifications and improvements made by those skilled in the art using the basic concept of the present invention defined in the following claims also fall within the scope of the present invention.
100: apparatus for spatial normalization of medical images    110: adaptive template generator
120: learning unit    130: spatial normalization unit
200: adaptive template    300: training data

Claims (10)

  1. An apparatus for spatial normalization of a medical image, the apparatus comprising:
    an adaptive template generator configured to generate, when a plurality of functional medical images are input into a deep learning architecture, an adaptive template for spatially normalizing the functional medical images based on pre-stored training data;
    a learning unit configured to repeatedly perform learning, for an input functional medical image of a user, by generating an image based on the adaptive template through a generative adversarial network (GAN) and determining whether the generated image is authentic; and
    a spatial normalization unit configured to provide a spatially normalized functional medical image of the user based on a result of the learning.
  2. The apparatus of claim 1, wherein the adaptive template generator measures a difference between an adaptive template result derived from the deep learning architecture and an MRI-based spatially normalized image, which is the pre-stored training data, and generates the individually adaptable adaptive template through deep learning so that the measured difference is minimized.
  3. The apparatus of claim 1, wherein the learning unit:
    spatially normalizes the functional medical image of the user based on the adaptive template to generate a spatially normalized image;
    determines authenticity by comparing the spatially normalized image with the MRI-based spatially normalized image, which is the pre-stored training data, and, whenever a determination result is produced, repeats in turn the process of generating and judging a spatially normalized image; and
    completes the learning when, as a result of the determination, the spatially normalized image can no longer be distinguished from the MRI-based spatially normalized image, which is the pre-stored training data.
  4. The apparatus of claim 3, wherein the learning unit repeats the learning so that the Jensen-Shannon divergence between the data probability distribution of the spatially normalized images and the probability distribution of the normalized MRI data is minimized.
  5. The apparatus of claim 1, wherein the learning unit learns repeatedly through a process of solving a min-max problem using the following equation:

    $$\min_{\theta_G}\max_{\theta_D}\; \mathbb{E}_{x}\!\left[\log D(x)\right] + \mathbb{E}_{z}\!\left[\log\!\left(1 - D(G(z))\right)\right] + L_{\mathrm{fid}}$$

    where

    $$L_{\mathrm{fid}} = \frac{1}{m}\sum_{i=1}^{m}\left\lVert G\!\left(I_i^{\mathrm{Native}}\right) - I_i^{\mathrm{MNI}}\right\rVert$$

    and where G() is the generator that produces the image, D() is the discriminator that judges the image, z is the functional medical image in native space, x is the MRI-based SN result, $\theta_G$ and $\theta_D$ denote the parameters of the generator and the discriminator, respectively, E denotes the expectation with respect to the given probability distribution, $L_{\mathrm{fid}}$ is the fidelity loss between the generated image and the MRI-based SN result, m is the batch size, $I_i^{\mathrm{MNI}}$ is the image representing the MRI-based SN result in MNI space, and $I_i^{\mathrm{Native}}$ is the functional medical image in native space.
  6. A method of an apparatus for spatial normalization of a medical image, the method comprising:
    generating an adaptive template by applying a plurality of functional medical images corresponding to various environments to a deep learning architecture;
    repeatedly learning, for an input functional medical image of a user, by generating an image based on the adaptive template through a generative adversarial network (GAN) and determining whether the generated image is authentic; and
    providing, when the learning is completed, a spatially normalized functional medical image of the user based on a result of the learning.
  7. The method of claim 6, wherein the generating of the adaptive template comprises measuring a difference between an adaptive template result derived from the deep learning architecture and a pre-stored MRI-based spatial normalization result, and generating the individually adaptable adaptive template through deep learning so that the measured difference is minimized.
  8. The method of claim 6, wherein the learning comprises:
    spatially normalizing the functional medical image of the user based on the adaptive template and generating a spatially normalized image;
    determining authenticity by comparing the spatially normalized image with the MRI-based spatially normalized image, which is the pre-stored training data;
    repeating in turn, whenever a determination result is produced, the process of generating and judging a spatially normalized image; and
    completing the learning when, as a result of the determination, the spatially normalized image can no longer be distinguished from the MRI-based spatially normalized image, which is the pre-stored training data.
  9. The method of claim 8, wherein the repeating comprises learning repeatedly so that the Jensen-Shannon divergence between the data probability distribution of the spatially normalized images and the probability distribution of the normalized MRI data is minimized.
  10. The method of claim 6, wherein the learning comprises learning repeatedly through a process of solving a min-max problem using the following equation:

    $$\min_{\theta_G}\max_{\theta_D}\; \mathbb{E}_{x}\!\left[\log D(x)\right] + \mathbb{E}_{z}\!\left[\log\!\left(1 - D(G(z))\right)\right] + L_{\mathrm{fid}}$$

    where

    $$L_{\mathrm{fid}} = \frac{1}{m}\sum_{i=1}^{m}\left\lVert G\!\left(I_i^{\mathrm{Native}}\right) - I_i^{\mathrm{MNI}}\right\rVert$$

    and where G() is the generator that produces the image, D() is the discriminator that judges the image, z is the functional medical image in native space, x is the MRI-based SN result, $\theta_G$ and $\theta_D$ denote the parameters of the generator and the discriminator, respectively, E denotes the expectation with respect to the given probability distribution, $L_{\mathrm{fid}}$ is the fidelity loss between the generated image and the MRI-based SN result, m is the batch size, $I_i^{\mathrm{MNI}}$ is the image representing the MRI-based SN result in MNI space, and $I_i^{\mathrm{Native}}$ is the functional medical image in native space.
PCT/KR2019/002264 2018-02-28 2019-02-25 Device for spatial normalization of medical image using deep learning and method therefor WO2019168310A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/965,815 US11475612B2 (en) 2018-02-28 2019-02-25 Device for spatial normalization of medical image using deep learning and method therefor

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR20180024743 2018-02-28
KR10-2018-0024743 2018-02-28
KR1020180123300A KR102219890B1 (en) 2018-02-28 2018-10-16 Apparatus for spatial normalization of medical image using deep learning and method thereof
KR10-2018-0123300 2018-10-16

Publications (1)

Publication Number Publication Date
WO2019168310A1 true WO2019168310A1 (en) 2019-09-06

Family

ID=67805858

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/002264 WO2019168310A1 (en) 2018-02-28 2019-02-25 Device for spatial normalization of medical image using deep learning and method therefor

Country Status (1)

Country Link
WO (1) WO2019168310A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111353548A (en) * 2020-03-11 2020-06-30 中国人民解放军军事科学院国防科技创新研究院 Robust feature deep learning method based on confrontation space transformation network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101223681B1 (en) * 2011-03-11 2013-01-21 한국외국어대학교 연구산학협력단 Automatic Segmentation device and method of Cartilage in Magnetic Resonance Image
KR20140088840A (en) * 2013-01-03 2014-07-11 지멘스 코포레이션 Needle enhancement in diagnostic ultrasound imaging
US20140350392A1 (en) * 2011-09-20 2014-11-27 Ge Healthcare Limited Methods of spatial normalization of positron emission tomography images
KR20150036230A (en) * 2012-07-27 2015-04-07 가부시키가이샤 히다치 하이테크놀로지즈 Matching process device, matching process method, and inspection device employing same

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101223681B1 (en) * 2011-03-11 2013-01-21 한국외국어대학교 연구산학협력단 Automatic Segmentation device and method of Cartilage in Magnetic Resonance Image
US20140350392A1 (en) * 2011-09-20 2014-11-27 Ge Healthcare Limited Methods of spatial normalization of positron emission tomography images
KR20150036230A (en) * 2012-07-27 2015-04-07 가부시키가이샤 히다치 하이테크놀로지즈 Matching process device, matching process method, and inspection device employing same
KR20140088840A (en) * 2013-01-03 2014-07-11 지멘스 코포레이션 Needle enhancement in diagnostic ultrasound imaging

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HONGYOON CHOI: "Generation of structural MR images from amyloid PET: Application to MR-less quantification", THE JOURNAL OF NUCLEAR MEDICINE, 7 December 2017 (2017-12-07) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111353548A (en) * 2020-03-11 2020-06-30 中国人民解放军军事科学院国防科技创新研究院 Robust feature deep learning method based on confrontation space transformation network
CN111353548B (en) * 2020-03-11 2020-10-20 中国人民解放军军事科学院国防科技创新研究院 Robust feature deep learning method based on confrontation space transformation network

Similar Documents

Publication Publication Date Title
KR102219890B1 (en) Apparatus for spatial normalization of medical image using deep learning and method thereof
WO2019132168A1 (en) System for learning surgical image data
US9892361B2 (en) Method and system for cross-domain synthesis of medical images using contextual deep network
CN108171212A (en) For detecting the method and apparatus of target
JP2023044669A (en) Graph model-based brain function registration method
WO2020071877A1 (en) System and method for searching for pathological image
WO2021137454A1 (en) Artificial intelligence-based method and system for analyzing user medical information
Tran et al. Light-weight deformable registration using adversarial learning with distilling knowledge
CN109887077A (en) Method and apparatus for generating threedimensional model
WO2020060196A1 (en) Apparatus and method for reconstructing three-dimensional image
CN107729928A (en) Information acquisition method and device
CN113902724A (en) Method, device, equipment and storage medium for classifying tumor cell images
WO2021184195A1 (en) Medical image reconstruction method, and medical image reconstruction network training method and apparatus
WO2019168310A1 (en) Device for spatial normalization of medical image using deep learning and method therefor
Manimegalai et al. [Retracted] 3D Convolutional Neural Network Framework with Deep Learning for Nuclear Medicine
WO2023234622A1 (en) Image spatial normalization and normalization system and method using same
CN111081372B (en) Disease diagnosis device, terminal device, and computer-readable storage medium
WO2022092670A1 (en) Method for analyzing thickness of brain cortical region
WO2021107661A2 (en) Data processing method using learning model
WO2022010106A1 (en) Learning data generation device, method for driving same device, and computer-readable recording medium
WO2024029697A1 (en) Method for predicting risk of brain disease and method for training risk analysis model for brain disease
WO2023121003A1 (en) Method for classifying image data by using artificial neural network, and apparatus therefor
CN114841970B (en) Identification method and device for inspection image, readable medium and electronic equipment
CN113724185B (en) Model processing method, device and storage medium for image classification
WO2023153839A1 (en) Dementia information calculation method and analysis device using two-dimensional mri

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19759914

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19759914

Country of ref document: EP

Kind code of ref document: A1