WO2022120731A1 - MRI-PET image modality conversion method and system based on cyclic generative adversarial network


Info

Publication number: WO2022120731A1
Application number: PCT/CN2020/135319
Authority: WO (WIPO PCT)
Prior art keywords: MRI, image, PET, discriminator, adversarial network
Other languages: French (fr), Chinese (zh)
Inventors: 胡战利, 郑海荣, 张娜, 刘新, 杨永峰, 梁栋, 唐政
Original assignee: 深圳先进技术研究院

Application filed by 深圳先进技术研究院; priority to PCT/CN2020/135319; published as WO2022120731A1.

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction

Definitions

  • The present application relates to the field of imaging technologies, and in particular to a method and system for MRI-PET image modality conversion based on a cyclic generative adversarial network.
  • Positron emission computed tomography (PET) labels substances essential to human metabolism, such as glucose, proteins, and nucleic acids, with short-half-life radioactive isotopes such as F-18 and C-11 and injects them into the human body; by observing how the tracer accumulates during tissue metabolism, the metabolic state of the tissue can be assessed, achieving the purpose of diagnosis.
  • The most commonly used tracer is fluorodeoxyglucose (18F-FDG).
  • PET imaging requires injecting radioisotopes into the human body for imaging, so it carries certain operational risks and exposes the patient to a dose of radiation.
  • Magnetic resonance imaging (MRI) is a form of tomography that uses the magnetic resonance of tissue in a strong magnetic field to obtain electromagnetic signals from the tissue and reconstruct human tissue accordingly.
  • MR excels at imaging soft-tissue structures and can directly produce native 3D cross-sectional images without an image reconstruction step.
  • MR involves no ionizing radiation during imaging, so it exposes the patient to no radiation of any kind.
  • The diagnosis of many brain diseases, such as Alzheimer's disease (AD) and Parkinson's disease (PD), relies on the combined information of multimodal neuroimaging such as MRI and PET; however, PET data is often missing, for example for patients in the public Alzheimer's Disease Neuroimaging Initiative (ADNI) database.
  • To fill the gap in the field of generating PET images from MRI images, the present application provides an MRI-PET image modality conversion method based on a cyclic generative adversarial network.
  • The method adopts the following technical scheme: acquiring an MRI image dataset and a PET image dataset and constructing an input dataset from them; constructing a cyclic generative adversarial network model and training it adversarially on the input dataset; training the model until it gradually converges; and using the trained model to perform modality conversion from MRI images to PET images.
  • The constructed model includes a pair of generators (a first generator and a second generator) and a pair of discriminators (a first discriminator and a second discriminator). Adversarial training on the input dataset proceeds as follows: the first generator generates a PET-generated image from the MRI image, and the second generator generates an MRI-generated image from the PET image; the first discriminator judges whether the PET-generated image is a real PET image and outputs a first discrimination result to the first generator, and the second discriminator judges whether the MRI-generated image is a real MRI image and outputs a second discrimination result to the second generator; the generators then perform the next iteration according to the discrimination results, until the discriminators can no longer distinguish the generated images from real ones.
  • The first and second generators are constructed from an improved U-Net model in which a self-attention unit replaces the crop-and-expand step of the skip connections in the original U-Net model.
  • The self-attention unit is designed as a criss-cross self-attention sub-module.
  • The first and second discriminators produce the first and second discrimination results as follows: the PET image and the PET-generated image are input into the first discriminator, and the MRI image and the MRI-generated image are input into the second discriminator, with wavelet affine transformation layers embedded in both discriminators.
  • In a wavelet affine transformation layer, the input image passes through two convolutional layers to extract a spatial-domain feature map while simultaneously undergoing the Haar wavelet transform to obtain wavelet-domain feature sets; the spatial-domain feature map and the wavelet-domain feature sets are fed into the affine transformation layer to obtain a feature map carrying both spatial-domain and wavelet-domain features.
  • The wavelet affine transformation layer is repeated at least twice; finally, through a Softmax function, the first discriminator outputs the first discrimination result and the second discriminator outputs the second discrimination result.
  • The convolutional layers use the ReLU function as the activation function.
  • The cyclic generative adversarial network model contains an adversarial loss function and a cycle consistency loss function.
  • The adversarial loss function is determined by the first and second discrimination results of the first and second discriminators:
    L(G_MRI-PET, D_PET, I_MRI, I_PET) = CE(D_PET(G_MRI-PET(I_MRI)), label);
    L(G_PET-MRI, D_MRI, I_PET, I_MRI) = CE(D_MRI(G_PET-MRI(I_PET)), label);
    where G denotes the generator that produces an image of one modality from an image of the other modality, D the discriminator that judges whether an image of that modality is a generated image, I an image of that modality, CE the cross-entropy function with Softmax as its activation function, and label the ground-truth label used for evaluation.
  • The cycle consistency loss function combines SSIM and PSNR terms:
    L_cyccon = μ_SSIM · L_SSIM(G_MRI-PET, G_PET-MRI) + μ_PSNR · L_PSNR(G_MRI-PET, G_PET-MRI);
    where the images have dimensions m × n, I(i,j) and K(i,j) are the corresponding pixel values of the two input images, and the μ terms are constant parameters that control the loss values to enforce cycle consistency.
  • The present application also provides an MRI-PET image modality conversion system based on a cyclic generative adversarial network, which includes:
  • an acquisition module, which acquires an MRI image dataset and a PET image dataset and constructs an input dataset from them;
  • a construction module, which constructs the cyclic generative adversarial network model and trains it adversarially on the input dataset;
  • a training module, which trains the model until it gradually reaches a state of convergence;
  • a verification module, which uses the trained cyclic generative adversarial network model to perform modality conversion from MRI images to PET images.
  • The present application provides at least one of the following beneficial technical effects:
  • 1. The application performs modality conversion from MRI images to PET images based on a cyclic generative adversarial network. By applying a self-attention unit in the generators and a wavelet affine transformation layer in the discriminators, it greatly improves the utilization of image features and wavelet-domain features while reducing GPU memory occupancy during training.
  • 2. The application designs a joint loss function based on the SSIM and PSNR functions, which adds requirements on the structure and signal-to-noise ratio of generated images on top of the traditional cycle consistency comparison, significantly improving the quality of generated PET images.
  • FIG. 1 is a schematic flowchart of the MRI-PET image modality conversion method based on a cyclic generative adversarial network in an embodiment of the present application;
  • FIG. 2 is a schematic structural diagram of the cyclic generative adversarial network model in an embodiment of the present application;
  • FIG. 3 is a schematic structural diagram of the first generator or the second generator in an embodiment of the present application;
  • FIG. 4 is a schematic structural diagram of the criss-cross self-attention sub-module in an embodiment of the present application;
  • FIG. 5 is a schematic structural diagram of the first discriminator or the second discriminator in an embodiment of the present application;
  • FIG. 6 is a schematic structural diagram of the wavelet affine transformation layer in an embodiment of the present application;
  • FIG. 7 is a system block diagram of the MRI-PET image modality conversion system based on a cyclic generative adversarial network in an embodiment of the present application.
  • One prior network structure incorporates the wavelet affine transform, using wavelet-domain information obtained by the Haar wavelet transform to modulate spatial-domain feature maps.
  • Existing MRI-to-PET image modality conversion work uses the public ADNI database; because that database aggregates data collected by hospitals around the world, the image data is uneven and large matched MR/PET image datasets are hard to find, and that work uses the traditional Cycle-GAN model without improving on it, so the training results are not ideal.
  • This application therefore designs an MRI-PET image modality conversion method and system based on a cyclic generative adversarial network.
  • The application realizes pixel-to-pixel conversion from MRI images to PET images based on the cyclic generative adversarial network, obtaining PET images of clinical diagnostic value through imaging methods combined with a deep learning model.
  • The MRI-PET image modality conversion method based on a cyclic generative adversarial network proposed in this application includes the following steps:
  • Step S100: acquire an MRI image dataset and a PET image dataset, and construct an input dataset from them.
  • Step S200: construct a cyclic generative adversarial network model, and perform adversarial training on it with the input dataset.
  • The constructed model is adversarially trained with the input dataset; this adversarial training reinforces the model's learning. Compared with ordinary generative adversarial network processing, the cyclic model of the present application does not require a large number of registered MRI-PET image pairs, and synthesis is faster.
  • The model in this embodiment includes a pair of generators (a first generator and a second generator) and a pair of discriminators (a first discriminator and a second discriminator); adversarial training with the input dataset includes the following steps:
  • Step S210: the first generator generates a PET-generated image from the MRI image;
  • Step S220: the second generator generates an MRI-generated image from the PET image;
  • Step S230: the first discriminator judges whether the PET-generated image is a real PET image, and outputs the first discrimination result to the first generator;
  • Step S240: the second discriminator judges whether the MRI-generated image is a real MRI image, and outputs the second discrimination result to the second generator;
  • Step S250: the first and second generators perform the next iteration according to the first and second discrimination results, until the first and second discriminators can no longer distinguish the PET-generated and MRI-generated images from real ones.
  • During adversarial training, the generators (the first and second generators) perform the mutual mapping between MRI images and PET images,
  • while the discriminators (the first and second discriminators) judge whether the MRI-generated and PET-generated images are real or fake.
  • The so-called adversary is the contest between generator and discriminator.
  • The generator learns the distribution of the real data; the discriminator tries to tell real data from generated data; the generator aims to fool the discriminator as far as possible, while the discriminator aims to detect the generated data, forming the adversary.
  • The so-called cycle is the loop structure in which the pair of generators and the pair of discriminators carry out this contest.
  • Generator and discriminator keep playing this game through generation and discrimination, learning together and gradually approaching a Nash equilibrium.
  • Eventually the data produced by the generator is realistic enough that the discriminator can no longer tell real from fake.
  • The self-attention unit is designed as a criss-cross self-attention sub-module, which gathers non-local features along the horizontal and vertical directions.
  • Unlike a traditional attention module, the criss-cross sub-module reduces each position's attention span from the original H*W feature map to the H+W-1 positions in its row and column; it aggregates pixels horizontally and vertically, strengthens pixel-level representation, and greatly reduces the GPU memory and algorithmic complexity of model training.
  • X(i,j) and Y(i,j) are first obtained by dimensionality reduction through two 1x1 convolution filters, where C' is less than C.
  • The AFFINE operation selects an element x(i,j) at any position of X(i,j) together with the combination of the row and column elements of Y(i,j) lying in that element's row and column.
  • The criss-cross sub-module computes feature vectors only along the vertical and horizontal directions; applying the sub-module twice forms the self-attention unit described above, which obtains global correlations while greatly reducing computation and memory usage.
  • The first and second discriminators output the first and second discrimination results through the following steps:
  • the PET image and the PET-generated image are input into the first discriminator, and the MRI image and the MRI-generated image are input into the second discriminator;
  • in a wavelet affine transformation layer, the input image passes through two convolutional layers to extract a spatial-domain feature map while simultaneously undergoing the Haar wavelet transform to obtain wavelet-domain feature sets; the spatial-domain feature map and the wavelet-domain feature sets are fed into the affine transformation layer to obtain a feature map carrying both spatial-domain and wavelet-domain features;
  • the wavelet affine transformation layer is repeated at least twice, and finally, through a Softmax function, the first discriminator outputs the first discrimination result and the second discriminator outputs the second discrimination result.
  • The wavelet affine transformation layer derives from conditional normalization methods, transforming feature maps with an affine mapping learned by the model.
  • Conditional normalization methods have been shown to be very effective at improving model performance on image style-transfer tasks.
  • The Haar wavelet transform is a wavelet packet transform that uses the Haar function as the wavelet function.
  • After the input image is processed by the Haar wavelet transform, four wavelet-domain feature sets covering low and high frequencies are obtained.
  • The Averaging step averages adjacent pixels;
  • the Differencing step computes the difference between a pixel and the result of its Averaging step;
  • Thresholding filters out results that fall outside the threshold range.
  • The four wavelet-domain features are obtained by setting two different thresholds and performing the Haar wavelet transform twice, first along rows and then along columns. Notably, the four resulting wavelet-domain features are processed by two convolution filters before being input into the affine transformation layer.
  • The above convolutional layers use the ReLU function as the activation function.
  • MRI images and PET images are input into the cyclic generative adversarial network model described above.
  • After patch extraction, the MRI image is sent to the first generator and the second discriminator; the first generator produces a PET-generated image, which is sent to the second generator and the first discriminator.
  • The first discriminator judges the authenticity of the PET-generated image and outputs the first discrimination result to the first generator, while the second generator produces a cyclic MRI-generated image, which is compared with the original MRI image to obtain the losses of the first and second generators.
  • Conversely, after patch extraction the PET image is sent to the second generator and the first discriminator; the second generator produces an MRI-generated image, which is sent to the first generator and the second discriminator.
  • The second discriminator judges the authenticity of the MRI-generated image and outputs the second discrimination result to the second generator.
  • The first generator produces a cyclic PET-generated image, which is compared with the original PET image to obtain the losses of the first and second generators.
  • The above embodies the adversarial approach and the cyclic structure of the cyclic generative adversarial network model.
  • The cyclic generative adversarial network model contains an adversarial loss function and a cycle consistency loss function.
  • The adversarial loss function is determined by the first and second discrimination results of the first and second discriminators:
    L(G_MRI-PET, D_PET, I_MRI, I_PET) = CE(D_PET(G_MRI-PET(I_MRI)), label);
    L(G_PET-MRI, D_MRI, I_PET, I_MRI) = CE(D_MRI(G_PET-MRI(I_PET)), label);
  • where G denotes the generator that produces an image of one modality from an image of the other modality, D the discriminator that judges whether an image of that modality is a generated image, I an image of that modality, CE the cross-entropy function with Softmax as its activation function, and label the ground-truth label used for evaluation.
  • The two forms above are the adversarial loss models of the first and second discriminators, respectively.
  • The cycle consistency loss combines SSIM and PSNR terms:
    L_cyccon = μ_SSIM · L_SSIM(G_MRI-PET, G_PET-MRI) + μ_PSNR · L_PSNR(G_MRI-PET, G_PET-MRI);
  • where the images have dimensions m × n, I(i,j) and K(i,j) are the corresponding pixel values of the two input images, and the μ terms are constant parameters that control the loss values to enforce cycle consistency.
  • Step S300: train the cyclic generative adversarial network model until it gradually reaches a convergence state.
  • Specifically, in step S300 the model is optimized during training with the Nadam optimizer.
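As a concrete illustration of this step, a minimal PyTorch sketch of the optimizer setup follows; the placeholder networks and the learning rate are assumptions, not values from the publication:

```python
import torch
from torch import nn

# Placeholder networks standing in for the generators and discriminators
# described above; the architectures and the learning rate are assumed.
gen_mri2pet = nn.Conv2d(1, 1, 3, padding=1)
gen_pet2mri = nn.Conv2d(1, 1, 3, padding=1)
disc_pet = nn.Conv2d(1, 1, 3, padding=1)
disc_mri = nn.Conv2d(1, 1, 3, padding=1)

# Nadam, as named in the text (torch.optim.NAdam in PyTorch >= 1.10).
optimizer_G = torch.optim.NAdam(
    list(gen_mri2pet.parameters()) + list(gen_pet2mri.parameters()), lr=2e-4)
optimizer_D = torch.optim.NAdam(
    list(disc_pet.parameters()) + list(disc_mri.parameters()), lr=2e-4)
```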
  • Step S400: use the trained cyclic generative adversarial network model to perform modality conversion from MRI images to PET images.
  • The present application mainly freezes the already trained first generator to form an inference model. In the testing stage, patches extracted from the MRI image are fed to the inference model, which generates the required PET image from the MRI image.
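A hedged sketch of this testing stage: the trained first generator is frozen and applied patch by patch to the MRI image; the patch size and the non-overlapping tiling are illustrative assumptions:

```python
import torch

@torch.no_grad()
def mri_to_pet(generator: torch.nn.Module, mri: torch.Tensor,
               patch: int = 64) -> torch.Tensor:
    """Apply the frozen MRI->PET generator patch by patch.

    mri: tensor of shape (1, 1, H, W), with H and W divisible by `patch`.
    """
    generator.eval()  # freeze behavior (no dropout / batch-norm updates)
    pet = torch.zeros_like(mri)
    _, _, h, w = mri.shape
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            block = mri[:, :, i:i + patch, j:j + patch]
            pet[:, :, i:i + patch, j:j + patch] = generator(block)
    return pet
```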
  • The present application performs modality conversion from MRI images to PET images based on the cyclic generative adversarial network.
  • Applying the self-attention unit in the generators and the wavelet affine transformation layer in the discriminators greatly improves the utilization of image features and wavelet-domain features, and reduces GPU memory occupancy during training.
  • The present application designs a joint loss function based on the SSIM and PSNR functions, which adds requirements on the structure and signal-to-noise ratio of generated images on top of the traditional cycle consistency comparison, significantly improving the quality of generated PET images.
  • Another object of the present application is to provide an MRI-PET image modality conversion system based on a cyclic generative adversarial network, including an acquisition module, a construction module, a training module, and a verification module.
  • The acquisition module acquires the MRI image dataset and the PET image dataset, and constructs the input dataset from them;
  • the construction module constructs the cyclic generative adversarial network model, which is adversarially trained with the input dataset;
  • the training module trains the model until it gradually reaches the convergence state;
  • the verification module performs modality conversion from MRI images to PET images using the trained model.
  • The above system performs modality conversion from MRI images to PET images based on the cyclic generative adversarial network.
  • Applying the self-attention unit in the generators and the wavelet affine transformation layer in the discriminators greatly improves the utilization of image features and wavelet-domain features, and reduces GPU memory occupancy during training.
  • The joint loss function based on the SSIM and PSNR functions adds requirements on the structure and signal-to-noise ratio of generated images on top of the traditional cycle consistency comparison, significantly improving the quality of generated PET images.

Abstract

The present application relates to an MRI-PET image modality conversion method and system based on a cyclic generative adversarial network. The method comprises the following steps: acquiring an MRI image dataset and a PET image dataset, and constructing an input dataset from them; constructing a cyclic generative adversarial network model, and performing adversarial training on it with the input dataset; training the model until it gradually reaches a convergence state; and performing modality conversion from MRI images to PET images with the trained model. The method and system fill the gap in the field of generating PET images from MRI images and address the low resolution of generated PET images.

Description

MRI-PET Image Modality Conversion Method and System Based on a Cyclic Generative Adversarial Network

Technical Field

The present application relates to the field of imaging technologies, and in particular to an MRI-PET image modality conversion method and system based on a cyclic generative adversarial network.

Background
Positron emission computed tomography (PET) labels substances essential to human metabolism, such as glucose, proteins, and nucleic acids, with short-half-life radioactive isotopes such as F-18 and C-11 and injects them into the human body; by observing how the tracer accumulates during tissue metabolism, the metabolic state of the tissue can be assessed, achieving the purpose of diagnosis. The most commonly used tracer is fluorodeoxyglucose (18F-FDG). PET imaging requires injecting radioisotopes into the human body for imaging, so it carries certain operational risks and exposes the patient to a dose of radiation.

Magnetic resonance imaging (MR) is a form of tomography that uses the magnetic resonance of tissue in a strong magnetic field to obtain electromagnetic signals from the tissue and reconstruct human tissue accordingly. MR excels at imaging soft-tissue structures and can directly produce native 3D cross-sectional images, omitting the image reconstruction step. Unlike nuclear medicine imaging methods, the MR imaging process involves no ionizing radiation, so it exposes the patient to no radiation of any kind.

At present, the diagnosis of many brain diseases, such as Alzheimer's disease (AD) and Parkinson's disease (PD), relies on the comprehensive information provided by multimodal neuroimaging (such as MRI and PET). However, multimodal studies inevitably run into missing image data; for example, PET data is often missing for a given patient in the public Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Because PET equipment is not widely available, missing PET image data is unavoidable, but if only cases possessing both modalities are studied, the data available for training deep learning models shrinks drastically, seriously degrading the model's training results and diagnostic performance.

Therefore, researching and developing methods for converting tomographic images between the MR and PET modalities has important scientific significance and broad application prospects for the current field of medical diagnosis.
Summary of the Invention

In order to fill the gap in the field of generating PET images from MRI images, the present application provides an MRI-PET image modality conversion method based on a cyclic generative adversarial network.

The method adopts the following technical scheme.

An MRI-PET image modality conversion method based on a cyclic generative adversarial network comprises the following steps:

acquiring an MRI image dataset and a PET image dataset, and constructing an input dataset from them;

constructing a cyclic generative adversarial network model, and performing adversarial training on it with the input dataset;

training the cyclic generative adversarial network model until it gradually reaches a convergence state;

using the trained model to perform modality conversion from MRI images to PET images.
Optionally, the constructed cyclic generative adversarial network model includes a pair of generators (a first generator and a second generator) and a pair of discriminators (a first discriminator and a second discriminator), and the adversarial training of the model with the input dataset includes the following steps:

generating, by the first generator, a PET-generated image from the MRI image;

generating, by the second generator, an MRI-generated image from the PET image;

judging, by the first discriminator, whether the PET-generated image is a real PET image, and outputting a first discrimination result to the first generator;

judging, by the second discriminator, whether the MRI-generated image is a real MRI image, and outputting a second discrimination result to the second generator;

performing, by the first and second generators, the next iteration according to the first and second discrimination results, until the first and second discriminators can no longer distinguish the PET-generated and MRI-generated images from real ones.
Optionally, the first and second generators are constructed from an improved U-Net model in which a self-attention unit replaces the crop-and-expand step of the skip connections in the original U-Net model.

Optionally, the self-attention unit is designed as a criss-cross self-attention sub-module.
Optionally, the first and second discriminators output the first and second discrimination results through the following steps:

inputting the PET image and the PET-generated image into the first discriminator, and inputting the MRI image and the MRI-generated image into the second discriminator;

embedding wavelet affine transformation layers in the first and second discriminators;

in a wavelet affine transformation layer, passing the input image through two convolutional layers to extract a spatial-domain feature map while simultaneously applying the Haar wavelet transform to obtain wavelet-domain feature sets, and feeding the spatial-domain feature map and the wavelet-domain feature sets into the affine transformation layer to obtain a feature map carrying both spatial-domain and wavelet-domain features;

repeating the wavelet affine transformation layer at least twice and finally, through a Softmax function, outputting the first discrimination result from the first discriminator and the second discrimination result from the second discriminator.

Optionally, the convolutional layers use the ReLU function as the activation function.
Optionally, the cyclic generative adversarial network model contains an adversarial loss function and a cycle consistency loss function.

Optionally, the adversarial loss function is determined by the first and second discrimination results of the first and second discriminators, in the following specific form:

L(G_MRI-PET, D_PET, I_MRI, I_PET) = CE(D_PET(G_MRI-PET(I_MRI)), label);

L(G_PET-MRI, D_MRI, I_PET, I_MRI) = CE(D_MRI(G_PET-MRI(I_PET)), label);

where G denotes the generator that produces an image of one modality from an image of the other modality, D the discriminator that judges whether an image of that modality is a generated image, I an image of that modality, CE the cross-entropy function with Softmax as its activation function, and label the ground-truth label used for evaluation.
Optionally, the cycle consistency loss function has the following specific form. The structural similarity index is

$$\mathrm{SSIM}(I,K)=\frac{(2\mu_I\mu_K+C_1)(2\sigma_{IK}+C_2)}{(\mu_I^2+\mu_K^2+C_1)(\sigma_I^2+\sigma_K^2+C_2)}$$

where μ and σ denote the mean and standard deviation of an image, σ_IK the covariance of the two images, and C_1 = (k_1 L)^2 and C_2 = (k_2 L)^2 are two small constant terms that prevent the denominator from being 0, with L the maximum pixel value of the image. (The source gives this equation only as an image; the standard SSIM form consistent with these definitions is shown.) The mean squared error is

$$\mathrm{MSE}=\frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\big[I(i,j)-K(i,j)\big]^2$$

where the image dimensions are m and n, and I(i,j), K(i,j) are the corresponding pixel values of the two input images, from which the peak signal-to-noise ratio follows as

$$\mathrm{PSNR}=10\log_{10}\frac{L^2}{\mathrm{MSE}}.$$

The loss terms L_SSIM and L_PSNR are defined from these quantities (their defining equations appear only as images in the source), and the cycle consistency loss is

L_cyccon = μ_SSIM · L_SSIM(G_MRI-PET, G_PET-MRI) + μ_PSNR · L_PSNR(G_MRI-PET, G_PET-MRI);

where the μ terms are constant parameters that control the loss values to enforce cycle consistency.

From this, the mathematical expression of the global loss function is obtained, combining the adversarial losses of both discriminators with the cycle consistency loss (the source gives the global loss only as an equation image).
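To make the joint loss concrete, here is a hedged PyTorch sketch. The exact definitions of L_SSIM and L_PSNR are not recoverable from this text, so the sketch assumes the common choices of 1 - SSIM and a negated PSNR, and uses a simplified global (unwindowed) SSIM; the weights mu_ssim and mu_psnr are placeholders:

```python
import torch

def ssim(x: torch.Tensor, y: torch.Tensor, L: float = 1.0,
         k1: float = 0.01, k2: float = 0.03) -> torch.Tensor:
    # Simplified global SSIM over the whole image (no sliding window).
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def psnr(x: torch.Tensor, y: torch.Tensor, L: float = 1.0) -> torch.Tensor:
    return 10 * torch.log10(L ** 2 / torch.mean((x - y) ** 2))

def cycle_consistency_loss(cyc_mri, mri, cyc_pet, pet,
                           mu_ssim=1.0, mu_psnr=0.01):
    # Assumed terms: (1 - SSIM) rewards structural agreement with the
    # original image, and -PSNR rewards a higher signal-to-noise ratio.
    l_ssim = (1 - ssim(cyc_mri, mri)) + (1 - ssim(cyc_pet, pet))
    l_psnr = -(psnr(cyc_mri, mri) + psnr(cyc_pet, pet))
    return mu_ssim * l_ssim + mu_psnr * l_psnr
```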
In order to fill the gap in the field of generating PET images from MRI images, the present application further provides an MRI-PET image modality conversion system based on a cyclic generative adversarial network.

The system adopts the following technical scheme.

An MRI-PET image modality conversion system based on a cyclic generative adversarial network includes:

an acquisition module, which acquires an MRI image dataset and a PET image dataset and constructs an input dataset from them;

a construction module, which constructs a cyclic generative adversarial network model and performs adversarial training on it with the input dataset;

a training module, which trains the cyclic generative adversarial network model until it gradually reaches a state of convergence;

a verification module, which uses the trained model to perform modality conversion from MRI images to PET images.
To sum up, the present application provides at least one of the following beneficial technical effects:

1. The application performs modality conversion from MRI images to PET images based on a cyclic generative adversarial network. By applying a self-attention unit in the generators and a wavelet affine transformation layer in the discriminators, it greatly improves the utilization of image features and wavelet-domain features while reducing GPU memory occupancy during training.

2. The application designs a joint loss function based on the SSIM and PSNR functions, which adds requirements on the structure and signal-to-noise ratio of generated images on top of the traditional cycle consistency comparison, significantly improving the quality of generated PET images.
Description of Drawings

FIG. 1 is a schematic flowchart of the MRI-PET image modality conversion method based on a cyclic generative adversarial network in an embodiment of the present application;

FIG. 2 is a schematic structural diagram of the cyclic generative adversarial network model in an embodiment of the present application;

FIG. 3 is a schematic structural diagram of the first generator or the second generator in an embodiment of the present application;

FIG. 4 is a schematic structural diagram of the criss-cross self-attention sub-module in an embodiment of the present application;

FIG. 5 is a schematic structural diagram of the first discriminator or the second discriminator in an embodiment of the present application;

FIG. 6 is a schematic structural diagram of the wavelet affine transformation layer in an embodiment of the present application;

FIG. 7 is a system block diagram of the MRI-PET image modality conversion system based on a cyclic generative adversarial network in an embodiment of the present application.
Detailed Description

The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
In the related art, the diagnosis of many brain diseases, such as Alzheimer's disease (AD) and Parkinson's disease (PD), currently relies on the comprehensive information provided by multimodal neuroimaging (such as MRI and PET). However, multimodal studies inevitably run into missing image data; for example, PET data is often missing for a given patient in the public Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Because PET equipment is not widely available, missing PET image data is unavoidable, but if only cases possessing both modalities are studied, the data available for training deep learning models shrinks drastically, seriously degrading the model's training results and diagnostic performance. Therefore, researching and developing methods for converting tomographic images between the MR and PET modalities has important scientific significance and broad application prospects for the current field of medical diagnosis.
The 2018 article by Y.S. Pan et al., "Synthesizing Missing PET from MRI with Cycle-consistent Generative Adversarial Networks for Alzheimer's Disease Diagnosis," used a Cycle-GAN network to attempt to generate PET images from MRI images and, using imaging data from the public ADNI database, demonstrated the feasibility of generating PET images from MRI images.

X. Dong et al. published "Synthetic CT generation from non-attenuation corrected PET images for whole-body PET imaging" in Physics in Medicine & Biology in 2019, applying a 3D cyclic generative adversarial network to the field of low-dose CT imaging, where the input pixel blocks are 64*64*64; the generator used a U-Net based on a self-attention unit, which helps the network identify the most informative image blocks for better image denoising.

The 2019 Medical Image Analysis article by L.Q. Qu et al., "Synthesized 7T MRI from 3T MRI via deep learning in spatial and wavelet domains," used the wavelet transform to achieve effective multi-scale reconstruction while exploiting low-frequency tissue contrast and high-frequency anatomical detail. Its network structure incorporates a wavelet affine transform, using wavelet-domain information obtained by the Haar wavelet transform to modulate spatial-domain feature maps.
The above related technologies exhibit the following defects:

1. Existing image modality conversion work concentrates mostly on the MRI and CT modalities; very little of it processes PET image information.

2. Owing to PET imaging technology, PET images generally have low resolution and contain little spatial information, which increases the difficulty of training deep learning models.

3. Existing MRI-to-PET image modality conversion work uses the public ADNI database. Because that database aggregates data collected by hospitals around the world, the image data is uneven and large matched MR/PET image datasets are hard to find; it also uses the traditional Cycle-GAN model without improving on it, so the training results are not ideal.
Based on the above, in order to fill the gap in the field of generating PET images from MRI images and to solve the low resolution of generated PET images, this application designs an MRI-PET image modality conversion method and system based on a cyclic generative adversarial network. The application realizes pixel-to-pixel conversion from MRI images to PET images based on the cyclic generative adversarial network, obtaining PET images of clinical diagnostic value through imaging methods combined with a deep learning model.
Referring to FIG. 1, the MRI-PET image modality conversion method based on a cyclic generative adversarial network proposed in this application includes the following steps.

Step S100: acquire an MRI image dataset and a PET image dataset, and construct an input dataset from them.

Step S200: construct a cyclic generative adversarial network model, and perform adversarial training on it with the input dataset.

Specifically, per step S200, the constructed cyclic generative adversarial network model is adversarially trained with the input dataset; this adversarial training reinforces the model's learning. Compared with ordinary generative adversarial network processing, the cyclic model of the present application does not require a large number of registered MRI-PET image pairs, and synthesis is faster.
Referring to FIG. 2, the cyclic generative adversarial network model in this embodiment of the present application includes a pair of generators (a first generator and a second generator) and a pair of discriminators (a first discriminator and a second discriminator). Adversarial training with the input dataset includes the following steps:

Step S210: the first generator generates a PET-generated image from the MRI image;

Step S220: the second generator generates an MRI-generated image from the PET image;

Step S230: the first discriminator judges whether the PET-generated image is a real PET image, and outputs the first discrimination result to the first generator;

Step S240: the second discriminator judges whether the MRI-generated image is a real MRI image, and outputs the second discrimination result to the second generator;

Step S250: the first and second generators perform the next iteration according to the first and second discrimination results, until the first and second discriminators can no longer distinguish the PET-generated and MRI-generated images from real ones.
During the adversarial training of the cyclic generative adversarial network model, the generators (the first and second generators) perform the mutual mapping between MRI images and PET images, while the discriminators (the first and second discriminators) judge whether the MRI-generated and PET-generated images are real or fake.

The so-called adversary is the contest between generator and discriminator: the generator learns the distribution of the real data; the discriminator tries to tell real data from generated data; the generator aims to fool the discriminator as far as possible, while the discriminator aims to detect the generated data, forming the adversary. The so-called cycle is the loop structure in which the pair of generators and the pair of discriminators carry out this contest.

Generator and discriminator thus keep playing this game through generation and discrimination, learning together and gradually approaching a Nash equilibrium, until the data produced by the generator is realistic enough that the discriminator can no longer tell real from fake.
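A condensed sketch of one such adversarial iteration in PyTorch, matching the two-class softmax cross-entropy discrimination described in the loss section; the module interfaces, the label convention, and the cycle-loss weight are assumptions (the cycle loss can be, for example, the cycle_consistency_loss sketched in the Summary above):

```python
import torch
import torch.nn.functional as F

def train_step(g_mri2pet, g_pet2mri, d_pet, d_mri, opt_g, opt_d,
               mri, pet, cycle_loss, mu_cyc=10.0):
    # Labels for the assumed two-class (real / generated) discriminators.
    real = torch.ones(mri.size(0), dtype=torch.long)
    fake = torch.zeros(mri.size(0), dtype=torch.long)

    # Generators: try to fool the discriminators and close the cycle.
    fake_pet, fake_mri = g_mri2pet(mri), g_pet2mri(pet)
    cyc_mri, cyc_pet = g_pet2mri(fake_pet), g_mri2pet(fake_mri)
    loss_g = (F.cross_entropy(d_pet(fake_pet), real)
              + F.cross_entropy(d_mri(fake_mri), real)
              + mu_cyc * cycle_loss(cyc_mri, mri, cyc_pet, pet))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # Discriminators: try to separate real images from generated ones.
    loss_d = (F.cross_entropy(d_pet(pet), real)
              + F.cross_entropy(d_pet(fake_pet.detach()), fake)
              + F.cross_entropy(d_mri(mri), real)
              + F.cross_entropy(d_mri(fake_mri.detach()), fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    return loss_g.item(), loss_d.item()
```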
Referring to FIG. 3, the first and second generators are constructed from an improved U-Net model. The original U-Net consists of an encoder and a decoder, with skip connections between mirrored layers of the encoder and decoder stacks; the skip connections mainly pass low-level image information directly from input to output through the network. The improved U-Net of this embodiment replaces the crop-and-expand step of the original skip connections with a self-attention unit. The self-attention unit effectively exploits the most important information in the feature maps and improves the fusion of global and local information across dimensions; it yields feature maps of the same size as the expansive path, raises the utilization of global and local features, reduces computation, and greatly improves model performance.
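A minimal structural sketch of such a generator, with a single encoder/decoder level for brevity; the depth, channel counts, and injected attention module are assumptions (any shape-preserving module, such as the criss-cross unit sketched below, can serve as attn):

```python
import torch
from torch import nn

class AttnUNet(nn.Module):
    """Two-level U-Net sketch: skip features pass through `attn` before
    being concatenated with the upsampled decoder features."""

    def __init__(self, attn: nn.Module, ch: int = 32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.down = nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1)
        self.bottleneck = nn.Sequential(
            nn.Conv2d(2 * ch, 2 * ch, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(2 * ch, ch, 2, stride=2)
        self.attn = attn  # self-attention unit replacing crop-and-expand
        self.dec1 = nn.Sequential(nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU())
        self.out = nn.Conv2d(ch, 1, 1)

    def forward(self, x):  # assumes even spatial dimensions
        s1 = self.enc1(x)
        b = self.bottleneck(self.down(s1))
        u = self.up(b)
        skip = self.attn(s1)  # attention-refined skip features
        return self.out(self.dec1(torch.cat([skip, u], dim=1)))
```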
Referring to FIG. 4, in this embodiment the self-attention unit is designed as a criss-cross self-attention sub-module, which gathers non-local features along the horizontal and vertical directions. Unlike traditional attention modules, the criss-cross sub-module reduces each position's attention span from the original H*W feature map to H+W-1 positions; it aggregates pixels horizontally and vertically, strengthens pixel-level representation, and greatly reduces the GPU memory and algorithmic complexity of model training.

The criss-cross self-attention sub-module is explained below with reference to FIG. 4.
Given a feature map I(i,j) of size C×H×W with elements i_{i,j}, dimensionality reduction through two 1×1 convolution filters first yields X(i,j) and Y(i,j), where C' is less than C. The AFFINE operation selects an element x(i,j) at any position of X(i,j) together with the combination of the row and column elements of Y(i,j) lying in that element's row and column. The result then passes through a 1×1 convolution filter and a softmax unit to obtain the feature weight map, in which the feature value of channel t at position u is defined as f_{t,u}. Another 1×1 convolution filter yields Z(i,j); for any element z(i,j), a corresponding column vector and row vector are defined, and the AGGREGATE operation combines the weighted features from them. (The equations defining the AFFINE element combination, the row/column vectors of Z, and the AGGREGATE operation appear only as images in the source extraction.)
The criss-cross self-attention sub-module constructed above computes feature vectors only along the vertical and horizontal directions; applying the sub-module twice forms the self-attention unit described above, which obtains global correlations while greatly reducing computation and the occupied GPU memory.
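To make the criss-cross unit concrete, below is a minimal PyTorch sketch consistent with the description above: 1×1 convolutions produce reduced-channel maps X and Y (C' < C) and a full-channel map Z, each position is compared with the H+W-1 positions in its row and column, and the softmax-weighted features are aggregated. The channel-reduction ratio, the residual connection, and the learnable scale are assumptions of this sketch, not details taken from the publication:

```python
import torch
from torch import nn

class CrissCrossAttention(nn.Module):
    """Each position attends to the H+W-1 positions in its own row and
    column; applying the module twice propagates information globally."""

    def __init__(self, c: int, reduction: int = 8):
        super().__init__()
        self.q = nn.Conv2d(c, c // reduction, 1)  # X: reduced channels C' < C
        self.k = nn.Conv2d(c, c // reduction, 1)  # Y: reduced channels C' < C
        self.v = nn.Conv2d(c, c, 1)               # Z: full channels
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable scale (assumed)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, _, h, w = x.shape
        q, k, v = self.q(x), self.k(x), self.v(x)
        # AFFINE step: similarity of each position with its row / column.
        e_row = torch.einsum("nchw,nchv->nhwv", q, k)  # (n, h, w, w)
        e_col = torch.einsum("nchw,ncvw->nhwv", q, k)  # (n, h, w, h)
        attn = torch.softmax(torch.cat([e_row, e_col], dim=-1), dim=-1)
        a_row, a_col = attn[..., :w], attn[..., w:]
        # AGGREGATE step: softmax-weighted sum of row and column features.
        out = (torch.einsum("nhwv,nchv->nchw", a_row, v)
               + torch.einsum("nhwv,ncvw->nchw", a_col, v))
        return self.gamma * out + x
```

Stacking the module twice, as the description prescribes, lets information reach every position from every other position through one intermediate row or column.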
The structural design and discrimination method of the first and second discriminators are described below.

Referring to FIG. 5, the first and second discriminators output the first and second discrimination results through the following steps:

the PET image and the PET-generated image are input into the first discriminator, and the MRI image and the MRI-generated image are input into the second discriminator;

wavelet affine transformation layers are embedded in the first and second discriminators;

in a wavelet affine transformation layer, the input image passes through two convolutional layers to extract a spatial-domain feature map while simultaneously undergoing the Haar wavelet transform to obtain wavelet-domain feature sets; the spatial-domain feature map and the wavelet-domain feature sets are fed into the affine transformation layer to obtain a feature map carrying both spatial-domain and wavelet-domain features;

the wavelet affine transformation layer is repeated at least twice, and finally, through a Softmax function, the first discriminator outputs the first discrimination result and the second discriminator outputs the second discrimination result.
根据上述步骤所述的技术方案,具体的,参照图6所示,小波仿射变换层来自于条件标准化方法,使用模型习得的仿射映射转换特征映射。条件标准化方法被证明能够非常有效地提高模型在图像的风格转换任务中的表现。According to the technical solution described in the above steps, specifically, as shown in FIG. 6 , the wavelet affine transformation layer comes from the conditional normalization method, and uses the affine map learned by the model to transform the feature map. Conditional normalization methods have been shown to be very effective in improving the performance of the model on the task of image style transfer.
The Haar wavelet transform is a wavelet packet transform that uses the Haar function as the wavelet function.
In the Haar wavelet transform, processing the input picture yields four wavelet-domain feature sets covering low and high frequencies. The Averaging step takes the average of adjacent pixels; the Differencing step computes the difference between a pixel and the result of its Averaging step; Thresholding filters out results that fall outside the threshold range. The four wavelet-domain features are obtained by setting two different thresholds and performing the Haar wavelet transform twice, first along the rows and then along the columns, as sketched below. It is worth noting that the four resulting wavelet-domain features are processed by two convolution filters before being fed into the affine transformation layer.
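For illustration, one level of this row-then-column Haar decomposition can be sketched as follows (a Python/NumPy sketch; the normalization by 2 is a convention of the sketch, and the thresholding step is omitted):

```python
import numpy as np

def haar2d(img):
    """One level of 2D Haar wavelet transform: rows first, then columns.

    Returns four sub-bands (LL, LH, HL, HH): one low-frequency
    approximation and three high-frequency detail sets.
    """
    # Along rows: averaging and differencing of adjacent pixels.
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Along columns: repeat on both intermediate results.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

patch = np.random.rand(256, 256).astype(np.float32)
subbands = haar2d(patch)
print([s.shape for s in subbands])  # four (128, 128) sub-bands
```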
The affine transformation layer receives the spatial-domain feature map and the wavelet-domain feature set, and is defined as Output = λ*F + δ, where F is the spatial-domain feature map, and λ and δ denote the wavelet-domain features after processing by the two convolution filters; they are applied to F by element-wise multiplication and addition, respectively, finally yielding a feature map carrying both spatial-domain and wavelet-domain features.
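A minimal sketch of this affine transformation layer might read as follows (PyTorch; the kernel size of the two convolution filters producing λ and δ, and the channel counts, are assumptions):

```python
import torch
import torch.nn as nn

class WaveletAffine(nn.Module):
    """Modulates a spatial feature map F element-wise by wavelet-derived λ and δ."""
    def __init__(self, wavelet_ch, feat_ch):
        super().__init__()
        # Two convolution filters map the wavelet sub-bands to λ and δ.
        self.to_lambda = nn.Conv2d(wavelet_ch, feat_ch, kernel_size=3, padding=1)
        self.to_delta = nn.Conv2d(wavelet_ch, feat_ch, kernel_size=3, padding=1)

    def forward(self, feat, wavelet):
        lam = self.to_lambda(wavelet)   # λ: multiplicative term
        delta = self.to_delta(wavelet)  # δ: additive term
        return lam * feat + delta       # Output = λ * F + δ

layer = WaveletAffine(wavelet_ch=4, feat_ch=64)
feat = torch.randn(1, 64, 128, 128)   # spatial-domain feature map F
wav = torch.randn(1, 4, 128, 128)     # stacked wavelet sub-bands (resized to match)
print(layer(feat, wav).shape)         # torch.Size([1, 64, 128, 128])
```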
Further, in the embodiments of the present application, the above convolutional layers use the ReLU function as the activation function.
Referring to FIG. 2, based on the above recurrent generative adversarial network model, the MRI images and PET images are input into the model.
After patch extraction, the MRI image is fed into the first generator and the second discriminator. The first generator produces a PET-generated image, which is fed into the second generator and the first discriminator. The first discriminator judges the authenticity of the PET-generated image and outputs the first discrimination result to the first generator, while the second generator produces a cycled MRI-generated image; comparing the cycled MRI-generated image with the original MRI image yields the losses of the first generator and the second generator.
Conversely, after patch extraction, the PET image is fed into the second generator and the first discriminator. The second generator produces an MRI-generated image, which is fed into the first generator and the second discriminator. The second discriminator judges the authenticity of the MRI-generated image and outputs the second discrimination result to the second generator, while the first generator produces a cycled PET-generated image; comparing the cycled PET-generated image with the original PET image yields the losses of the first generator and the second generator.
The above embodies the adversarial scheme and the cyclic structure of the recurrent generative adversarial network model.
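Schematically, one training step realizing this adversarial and cyclic structure could look as follows (a PyTorch sketch; G1/G2 denote the two generators, D1/D2 the two discriminators, and the L1 cycle terms merely stand in for the SSIM/PSNR-based cycle-consistency loss defined below):

```python
import torch
import torch.nn.functional as F

def train_step(G1, G2, D1, D2, mri, pet, opt_g, opt_d):
    """One adversarial training step over both cycles (illustrative)."""
    # Forward cycle: MRI -> PET-generated image -> cycled MRI.
    fake_pet = G1(mri)
    cyc_mri = G2(fake_pet)
    # Backward cycle: PET -> MRI-generated image -> cycled PET.
    fake_mri = G2(pet)
    cyc_pet = G1(fake_mri)

    # Generator update: fool both discriminators, keep the cycles consistent.
    opt_g.zero_grad()
    p_fp, p_fm = D1(fake_pet), D2(fake_mri)
    g_loss = (F.binary_cross_entropy_with_logits(p_fp, torch.ones_like(p_fp))
              + F.binary_cross_entropy_with_logits(p_fm, torch.ones_like(p_fm))
              + F.l1_loss(cyc_mri, mri) + F.l1_loss(cyc_pet, pet))
    g_loss.backward()
    opt_g.step()

    # Discriminator update: real images labelled true, generated ones false.
    opt_d.zero_grad()
    p_rp, p_rm = D1(pet), D2(mri)
    p_fp, p_fm = D1(fake_pet.detach()), D2(fake_mri.detach())
    d_loss = (F.binary_cross_entropy_with_logits(p_rp, torch.ones_like(p_rp))
              + F.binary_cross_entropy_with_logits(p_fp, torch.zeros_like(p_fp))
              + F.binary_cross_entropy_with_logits(p_rm, torch.ones_like(p_rm))
              + F.binary_cross_entropy_with_logits(p_fm, torch.zeros_like(p_fm)))
    d_loss.backward()
    opt_d.step()
    return g_loss.item(), d_loss.item()
```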
It is worth noting that the recurrent generative adversarial network model contains an adversarial loss function and a cycle-consistency loss function.
The adversarial loss function is determined by the first discrimination result of the first discriminator and the second discrimination result of the second discriminator, in the following specific form:
$$L(G_{\text{MRI-PET}}, D_{\text{PET}}, I_{\text{MRI}}, I_{\text{PET}}) = \mathrm{CE}\big(D_{\text{PET}}(G_{\text{MRI-PET}}(I_{\text{MRI}})), \text{label}\big);$$

$$L(G_{\text{PET-MRI}}, D_{\text{MRI}}, I_{\text{PET}}, I_{\text{MRI}}) = \mathrm{CE}\big(D_{\text{MRI}}(G_{\text{PET-MRI}}(I_{\text{PET}})), \text{label}\big);$$
where G denotes a generator that produces an image of one modality from an image of the other modality, D denotes a discriminator that judges whether an image of that modality is a generated image, I denotes an image of that modality, CE denotes the cross-entropy function with the Softmax function as its activation, and label is the ground-truth label used for evaluation. The two forms above represent the adversarial loss models of the first discriminator and the second discriminator, respectively.
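In code, one such adversarial term could be computed as follows (a sketch assuming a two-class discriminator head whose output is scored with a Softmax-based cross-entropy):

```python
import torch
import torch.nn.functional as F

def adversarial_loss(discriminator, generated, label):
    """CE(D(G(I)), label): cross-entropy between the discriminator's
    softmax output on a generated image and the target label."""
    logits = discriminator(generated)        # shape (batch, 2): real/fake scores
    target = torch.full((logits.size(0),), label, dtype=torch.long,
                        device=logits.device)
    return F.cross_entropy(logits, target)   # applies log-softmax internally

# e.g. a generator update would want D_PET to call G_MRI-PET(I_MRI) "real":
# loss_g = adversarial_loss(D_PET, G_MRI_PET(mri_batch), label=1)
```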
The cycle-consistency loss function takes the following specific form:
$$\mathrm{SSIM}(I,K) = \frac{(2\mu_I\mu_K + C_1)(2\sigma_{IK} + C_2)}{(\mu_I^2 + \mu_K^2 + C_1)(\sigma_I^2 + \sigma_K^2 + C_2)}$$
where μ and σ denote the mean and standard deviation of an image, respectively, and $C_1 = (k_1 L)^2$ and $C_2 = (k_2 L)^2$ are two small constant terms that keep the denominator from being 0, with L denoting the maximum pixel value of the image;
$$\mathrm{MSE}(I,K) = \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\big(I(i,j) - K(i,j)\big)^2$$
where m and n are the image dimensions, and I(i,j) and K(i,j) are the corresponding pixel values of the two input images;
$$\mathrm{PSNR}(I,K) = 10\cdot\log_{10}\!\left(\frac{L^2}{\mathrm{MSE}(I,K)}\right)$$

$$L_{\mathrm{SSIM}}(G_{\text{MRI-PET}}, G_{\text{PET-MRI}}) = \big[1 - \mathrm{SSIM}\big(G_{\text{PET-MRI}}(G_{\text{MRI-PET}}(I_{\text{MRI}})), I_{\text{MRI}}\big)\big] + \big[1 - \mathrm{SSIM}\big(G_{\text{MRI-PET}}(G_{\text{PET-MRI}}(I_{\text{PET}})), I_{\text{PET}}\big)\big]$$

$$L_{\mathrm{PSNR}}(G_{\text{MRI-PET}}, G_{\text{PET-MRI}}) = -\,\mathrm{PSNR}\big(G_{\text{PET-MRI}}(G_{\text{MRI-PET}}(I_{\text{MRI}})), I_{\text{MRI}}\big) - \mathrm{PSNR}\big(G_{\text{MRI-PET}}(G_{\text{PET-MRI}}(I_{\text{PET}})), I_{\text{PET}}\big)$$
$$L_{\mathrm{cyccon}} = \mu_{\mathrm{SSIM}}\, L_{\mathrm{SSIM}}(G_{\text{MRI-PET}}, G_{\text{PET-MRI}}) + \mu_{\mathrm{PSNR}}\, L_{\mathrm{PSNR}}(G_{\text{MRI-PET}}, G_{\text{PET-MRI}})$$
where the μ terms are constant parameters that control the loss values so that they conform to cycle consistency;
This yields the mathematical expression of the global loss function:
$$L_{\mathrm{total}} = L(G_{\text{MRI-PET}}, D_{\text{PET}}, I_{\text{MRI}}, I_{\text{PET}}) + L(G_{\text{PET-MRI}}, D_{\text{MRI}}, I_{\text{PET}}, I_{\text{MRI}}) + L_{\mathrm{cyccon}}$$
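For illustration, the cycle-consistency term could be computed as follows (a PyTorch sketch; the single-window SSIM, the negated-PSNR penalty, and the weight values are assumptions consistent with, but not dictated by, the formulas above):

```python
import torch

def ssim_global(x, y, L=1.0, k1=0.01, k2=0.03):
    """Single-window SSIM between two images with maximum pixel value L."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def psnr(x, y, L=1.0):
    """Peak signal-to-noise ratio: 10 * log10(L^2 / MSE)."""
    return 10 * torch.log10(L ** 2 / torch.mean((x - y) ** 2))

def cycle_consistency_loss(cyc_mri, mri, cyc_pet, pet,
                           mu_ssim=1.0, mu_psnr=0.01):
    """L_cyccon = mu_SSIM * L_SSIM + mu_PSNR * L_PSNR (sketched forms)."""
    l_ssim = (1 - ssim_global(cyc_mri, mri)) + (1 - ssim_global(cyc_pet, pet))
    # PSNR is negated so that a higher PSNR lowers the loss (an assumption).
    l_psnr = -(psnr(cyc_mri, mri) + psnr(cyc_pet, pet))
    return mu_ssim * l_ssim + mu_psnr * l_psnr

a = torch.rand(128, 128)
b = (a + 0.05 * torch.randn(128, 128)).clamp(0, 1)
print(cycle_consistency_loss(b, a, b, a))
```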
In step S300, the recurrent generative adversarial network model is trained and gradually brought to a convergence state.
According to the technical solution defined in step S300, specifically, the trained recurrent generative adversarial network model is optimized with the Nadam optimizer.
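For example, with PyTorch's torch.optim.NAdam implementation of Nadam (the networks below are stand-ins and the hyperparameters are illustrative, not taken from the filing):

```python
import torch
import torch.nn as nn

# Stand-ins for the two generators and two discriminators (illustrative only).
G1, G2 = nn.Conv2d(1, 1, 3, padding=1), nn.Conv2d(1, 1, 3, padding=1)
D1, D2 = nn.Conv2d(1, 1, 3, padding=1), nn.Conv2d(1, 1, 3, padding=1)

# NAdam combines Adam with Nesterov momentum; lr/betas here are assumptions.
opt_g = torch.optim.NAdam(list(G1.parameters()) + list(G2.parameters()),
                          lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.NAdam(list(D1.parameters()) + list(D2.parameters()),
                          lr=2e-4, betas=(0.5, 0.999))
```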
In step S400, the trained recurrent generative adversarial network model is used to perform modality conversion from MRI images to PET images.
According to the technical solution defined in step S400, specifically, in the testing phase the present application mainly freezes the already trained first generator to form an inference model. In the testing phase, MRI images are therefore sent, after patch extraction, into the inference model, which generates the desired PET-generated images from the MRI images.
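A sketch of this inference stage might look as follows (slice-wise processing, the model path, and the stand-in generator are assumptions):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def mri_to_pet(generator, mri_volume):
    """Run the frozen MRI-to-PET generator slice by slice (illustrative)."""
    generator.eval()
    out = [generator(s.unsqueeze(0).unsqueeze(0)).squeeze(0)
           for s in mri_volume]        # each s: one (H, W) slice/patch
    return torch.cat(out, dim=0)       # synthesized PET volume

# In practice the trained first generator would be loaded from disk, e.g.:
# generator = torch.jit.load("g_mri2pet.pt")   # hypothetical path
generator = nn.Conv2d(1, 1, 3, padding=1)       # stand-in for the sketch
pet = mri_to_pet(generator, torch.randn(8, 128, 128))
print(pet.shape)  # torch.Size([8, 128, 128])
```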
Therefore, the present application performs modality conversion from MRI images to PET images based on a recurrent generative adversarial network. By applying the self-attention unit in the generators and the wavelet affine transformation layer in the discriminators, it greatly improves the utilization of generated-image features and wavelet-domain features while reducing the share of GPU memory occupied during training. The present application further designs a joint loss function based on the SSIM and PSNR functions, which adds requirements on the structure and signal-to-noise ratio of generated images to the traditional cycle-consistency comparison and significantly improves the quality of the PET-generated images.
Referring to FIG. 7, another object of the present application is to provide an MRI-PET image modality conversion system based on a recurrent generative adversarial network, comprising an acquisition module, a construction module, a training module, and a verification module.
The acquisition module acquires an MRI image dataset and a PET image dataset, and constructs an input dataset from the MRI image dataset and the PET image dataset;

The construction module builds a recurrent generative adversarial network model and trains it adversarially with the input dataset;

The training module trains the recurrent generative adversarial network model until it gradually reaches a convergence state;

The verification module uses the trained recurrent generative adversarial network model to perform modality conversion from MRI images to PET images.
Therefore, the above system performs modality conversion from MRI images to PET images based on a recurrent generative adversarial network. By applying the self-attention unit in the generators and the wavelet affine transformation layer in the discriminators, it greatly improves the utilization of generated-image features and wavelet-domain features while reducing the share of GPU memory occupied during training. The joint loss function based on the SSIM and PSNR functions adds requirements on the structure and signal-to-noise ratio of generated images to the traditional cycle-consistency comparison, significantly improving the quality of the PET-generated images.
The above are all preferred embodiments of the present application and do not thereby limit its scope of protection. Therefore, all equivalent changes made according to the structure, shape, and principle of the present application shall fall within the protection scope of the present application.

Claims (10)

  1. An MRI-PET image modality conversion method based on a recurrent generative adversarial network, characterized in that it comprises the following steps:
    acquiring an MRI image dataset and a PET image dataset, and constructing an input dataset from the MRI image dataset and the PET image dataset;
    constructing a recurrent generative adversarial network model, and adversarially training the recurrent generative adversarial network model with the input dataset;
    training the recurrent generative adversarial network model until it gradually reaches a convergence state;
    using the trained recurrent generative adversarial network model to perform modality conversion from MRI images to PET images.
  2. The MRI-PET image modality conversion method based on a recurrent generative adversarial network according to claim 1, characterized in that the constructed recurrent generative adversarial network model comprises a first generator, a second generator, a first discriminator, and a second discriminator, wherein adversarially training the recurrent generative adversarial network model with the input dataset comprises the following steps:
    generating, by the first generator, a PET-generated image from the MRI image;
    generating, by the second generator, an MRI-generated image from the PET image;
    distinguishing, by the first discriminator, whether the PET-generated image is a PET image, and outputting a first discrimination result to the first generator;
    distinguishing, by the second discriminator, whether the MRI-generated image is an MRI image, and outputting a second discrimination result to the second generator;
    the first generator and the second generator performing the next iteration according to the first discrimination result and the second discrimination result, until the first discriminator and the second discriminator cannot distinguish the authenticity of the PET-generated image and the MRI-generated image.
  3. The MRI-PET image modality conversion method based on a recurrent generative adversarial network according to claim 2, characterized in that the first generator and the second generator are constructed on the basis of an improved U-Net model, wherein the improved U-Net model replaces the crop-and-expand step in the skip connections of the original U-Net model with a self-attention unit.
  4. The MRI-PET image modality conversion method based on a recurrent generative adversarial network according to claim 3, characterized in that the self-attention unit is designed as a criss-cross self-attention sub-module.
  5. The MRI-PET image modality conversion method based on a recurrent generative adversarial network according to claim 2, characterized in that the first discriminator and the second discriminator output the first discrimination result and the second discrimination result through the following steps:
    inputting the PET image and the PET-generated image into the first discriminator as input pictures, and the MRI image and the MRI-generated image into the second discriminator as input pictures;
    embedding a wavelet affine transformation layer in the first discriminator and the second discriminator;
    in the wavelet affine transformation layer, passing the input picture through two convolutional layers to extract a spatial-domain feature map while simultaneously applying a Haar wavelet transform to the input picture to obtain a wavelet-domain feature set, and feeding the spatial-domain feature map and the wavelet-domain feature set into the affine transformation layer to obtain a feature map carrying both spatial-domain and wavelet-domain features;
    repeating the above wavelet affine transformation layer at least twice, and finally, through a Softmax function, the first discriminator outputting the first discrimination result and the second discriminator outputting the second discrimination result.
  6. The MRI-PET image modality conversion method based on a recurrent generative adversarial network according to claim 5, characterized in that the convolutional layers use the ReLU function as the activation function.
  7. The MRI-PET image modality conversion method based on a recurrent generative adversarial network according to claim 1, characterized in that the recurrent generative adversarial network model contains an adversarial loss function and a cycle-consistency loss function.
  8. The MRI-PET image modality conversion method based on a recurrent generative adversarial network according to claim 7, characterized in that the adversarial loss function is determined by the first discrimination result of the first discriminator and the second discrimination result of the second discriminator, in the following specific form:
    $$L(G_{\text{MRI-PET}}, D_{\text{PET}}, I_{\text{MRI}}, I_{\text{PET}}) = \mathrm{CE}\big(D_{\text{PET}}(G_{\text{MRI-PET}}(I_{\text{MRI}})), \text{label}\big);$$
    $$L(G_{\text{PET-MRI}}, D_{\text{MRI}}, I_{\text{PET}}, I_{\text{MRI}}) = \mathrm{CE}\big(D_{\text{MRI}}(G_{\text{PET-MRI}}(I_{\text{PET}})), \text{label}\big);$$
    wherein G denotes a generator that produces an image of one modality from an image of the other modality, D denotes a discriminator that judges whether an image of that modality is a generated image, I denotes an image of that modality, CE denotes the cross-entropy function with the Softmax function as its activation, and label is the ground-truth label used for evaluation.
  9. The MRI-PET image modality conversion method based on a recurrent generative adversarial network according to claim 7, characterized in that the cycle-consistency loss function takes the following specific form:
    $$\mathrm{SSIM}(I,K) = \frac{(2\mu_I\mu_K + C_1)(2\sigma_{IK} + C_2)}{(\mu_I^2 + \mu_K^2 + C_1)(\sigma_I^2 + \sigma_K^2 + C_2)}$$
    where μ and σ denote the mean and standard deviation of an image, respectively, and $C_1 = (k_1 L)^2$ and $C_2 = (k_2 L)^2$ are two small constant terms that keep the denominator from being 0, with L denoting the maximum pixel value of the image;
    $$\mathrm{MSE}(I,K) = \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\big(I(i,j) - K(i,j)\big)^2$$
    where m and n are the image dimensions, and I(i,j) and K(i,j) are the corresponding pixel values of the two input images;
    $$\mathrm{PSNR}(I,K) = 10\cdot\log_{10}\!\left(\frac{L^2}{\mathrm{MSE}(I,K)}\right)$$
    $$L_{\mathrm{SSIM}}(G_{\text{MRI-PET}}, G_{\text{PET-MRI}}) = \big[1 - \mathrm{SSIM}\big(G_{\text{PET-MRI}}(G_{\text{MRI-PET}}(I_{\text{MRI}})), I_{\text{MRI}}\big)\big] + \big[1 - \mathrm{SSIM}\big(G_{\text{MRI-PET}}(G_{\text{PET-MRI}}(I_{\text{PET}})), I_{\text{PET}}\big)\big]$$
    $$L_{\mathrm{PSNR}}(G_{\text{MRI-PET}}, G_{\text{PET-MRI}}) = -\,\mathrm{PSNR}\big(G_{\text{PET-MRI}}(G_{\text{MRI-PET}}(I_{\text{MRI}})), I_{\text{MRI}}\big) - \mathrm{PSNR}\big(G_{\text{MRI-PET}}(G_{\text{PET-MRI}}(I_{\text{PET}})), I_{\text{PET}}\big)$$
    $$L_{\mathrm{cyccon}} = \mu_{\mathrm{SSIM}}\, L_{\mathrm{SSIM}}(G_{\text{MRI-PET}}, G_{\text{PET-MRI}}) + \mu_{\mathrm{PSNR}}\, L_{\mathrm{PSNR}}(G_{\text{MRI-PET}}, G_{\text{PET-MRI}})$$
    where the μ terms are constant parameters that control the loss values so that they conform to cycle consistency;
    thereby obtaining the mathematical expression of the global loss function:
    $$L_{\mathrm{total}} = L(G_{\text{MRI-PET}}, D_{\text{PET}}, I_{\text{MRI}}, I_{\text{PET}}) + L(G_{\text{PET-MRI}}, D_{\text{MRI}}, I_{\text{PET}}, I_{\text{MRI}}) + L_{\mathrm{cyccon}}$$
  10. An MRI-PET image modality conversion system based on a recurrent generative adversarial network, characterized in that it comprises:
    an acquisition module that acquires an MRI image dataset and a PET image dataset, and constructs an input dataset from the MRI image dataset and the PET image dataset;
    a construction module that builds a recurrent generative adversarial network model and adversarially trains it with the input dataset;
    a training module that trains the recurrent generative adversarial network model until it gradually reaches a convergence state;
    a verification module that uses the trained recurrent generative adversarial network model to perform modality conversion from MRI images to PET images.
PCT/CN2020/135319 2020-12-10 2020-12-10 Mri-pet image modality conversion method and system based on cyclic generative adversarial network WO2022120731A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/135319 WO2022120731A1 (en) 2020-12-10 2020-12-10 Mri-pet image modality conversion method and system based on cyclic generative adversarial network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/135319 WO2022120731A1 (en) 2020-12-10 2020-12-10 Mri-pet image modality conversion method and system based on cyclic generative adversarial network

Publications (1)

Publication Number Publication Date
WO2022120731A1 (en)

Family

ID=81972995

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/135319 WO2022120731A1 (en) 2020-12-10 2020-12-10 Mri-pet image modality conversion method and system based on cyclic generative adversarial network

Country Status (1)

Country Link
WO (1) WO2022120731A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115272156A (en) * 2022-09-01 2022-11-01 中国海洋大学 Oil and gas reservoir high-resolution wellbore imaging characterization method based on cyclic generation countermeasure network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110544239A (en) * 2019-08-19 2019-12-06 中山大学 Multi-modal MRI conversion method, system and medium for generating countermeasure network based on conditions
CN110580695A (en) * 2019-08-07 2019-12-17 深圳先进技术研究院 multi-mode three-dimensional medical image fusion method and system and electronic equipment
CN110689561A (en) * 2019-09-18 2020-01-14 中山大学 Conversion method, system and medium of multi-modal MRI and multi-modal CT based on modular GAN
CN111340903A (en) * 2020-02-10 2020-06-26 深圳先进技术研究院 Method and system for generating synthetic PET-CT image based on non-attenuation correction PET image

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110580695A (en) * 2019-08-07 2019-12-17 深圳先进技术研究院 multi-mode three-dimensional medical image fusion method and system and electronic equipment
CN110544239A (en) * 2019-08-19 2019-12-06 中山大学 Multi-modal MRI conversion method, system and medium for generating countermeasure network based on conditions
CN110689561A (en) * 2019-09-18 2020-01-14 中山大学 Conversion method, system and medium of multi-modal MRI and multi-modal CT based on modular GAN
CN111340903A (en) * 2020-02-10 2020-06-26 深圳先进技术研究院 Method and system for generating synthetic PET-CT image based on non-attenuation correction PET image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HOO-CHANG SHIN; ALVIN IHSANI; SWETHA MANDAVA; SHARATH TURUVEKERE SREENIVAS; CHRISTOPHER FORSTER; JIOOK CHA; ALZHEIMER'S DISEASE NE: "GANBERT: Generative Adversarial Networks with Bidirectional Encoder Representations from Transformers for MRI to PET synthesis", ARXIV.ORG, 10 August 2020 (2020-08-10), pages 1 - 12, XP081736375 *
QU LIANGQIONG, ZHANG YONGQIN, WANG SHUAI, YAP PEW-THIAN, SHEN DINGGANG: "Synthesized 7T MRI from 3T MRI via deep learning in spatial and wavelet domains", MEDICAL IMAGE ANALYSIS, vol. 62, 1 May 2020 (2020-05-01), GB , pages 1 - 12, XP055941924, ISSN: 1361-8415, DOI: 10.1016/j.media.2020.101663 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115272156A (en) * 2022-09-01 2022-11-01 中国海洋大学 Oil and gas reservoir high-resolution wellbore imaging characterization method based on cyclic generation countermeasure network

Similar Documents

Publication Publication Date Title
Hu et al. Bidirectional mapping generative adversarial networks for brain MR to PET synthesis
Armanious et al. MedGAN: Medical image translation using GANs
Armanious et al. Unsupervised medical image translation using cycle-MedGAN
Dangi et al. A distance map regularized CNN for cardiac cine MR image segmentation
Hu et al. Brain MR to PET synthesis via bidirectional generative adversarial network
CN110444277B (en) Multi-mode brain MRI image bidirectional conversion method based on multi-generation and multi-confrontation
Zhao et al. Deep learning of brain magnetic resonance images: A brief review
Li et al. Deepvolume: Brain structure and spatial connection-aware network for brain mri super-resolution
CN113808106B (en) Ultra-low dose PET image reconstruction system and method based on deep learning
WO2022121100A1 (en) Darts network-based multi-modal medical image fusion method
Gong et al. MR-based attenuation correction for brain PET using 3-D cycle-consistent adversarial network
KR20220067543A (en) Systems and methods for improving low-dose volumetric contrast-enhanced MRI
Huang et al. Considering anatomical prior information for low-dose CT image enhancement using attribute-augmented Wasserstein generative adversarial networks
CN112819914B (en) PET image processing method
Singh et al. Medical image generation using generative adversarial networks
Zhan et al. LR-cGAN: Latent representation based conditional generative adversarial network for multi-modality MRI synthesis
Zhang et al. Spatial adaptive and transformer fusion network (STFNet) for low‐count PET blind denoising with MRI
WO2022043910A1 (en) Systems and methods for automatically enhancing low-dose pet images with robustness to out-of-distribution (ood) data
CN114240753A (en) Cross-modal medical image synthesis method, system, terminal and storage medium
CN112508775A (en) MRI-PET image mode conversion method and system based on loop generation countermeasure network
Do et al. 7T MRI super-resolution with Generative Adversarial Network
Amirkolaee et al. Development of a GAN architecture based on integrating global and local information for paired and unpaired medical image translation
WO2022120731A1 (en) Mri-pet image modality conversion method and system based on cyclic generative adversarial network
Wang et al. IGNFusion: an unsupervised information gate network for multimodal medical image fusion
Wang et al. MSE-Fusion: Weakly supervised medical image fusion with modal synthesis and enhancement

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20964669

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20964669

Country of ref document: EP

Kind code of ref document: A1