CN110288609A - Multi-modal whole-heart image segmentation method guided by attention mechanism - Google Patents

Multi-modal whole-heart image segmentation method guided by attention mechanism

Info

Publication number
CN110288609A
CN110288609A
Authority
CN
China
Prior art keywords
image
map
mode
attention mechanism
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910461477.2A
Other languages
Chinese (zh)
Other versions
CN110288609B (en)
Inventor
杨琬琪
周子奇
郭心娜
杨明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Nanjing Normal University
Original Assignee
Nanjing Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Normal University filed Critical Nanjing Normal University
Priority to CN201910461477.2A priority Critical patent/CN110288609B/en
Publication of CN110288609A publication Critical patent/CN110288609A/en
Application granted granted Critical
Publication of CN110288609B publication Critical patent/CN110288609B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30048Heart; Cardiac

Abstract

The invention discloses a multi-modal whole-heart image segmentation method guided by an attention mechanism. An image of the current modality is translated by a Cycle-GAN into the corresponding image of the other modality to enlarge the training set; the original image and the corresponding generated image are then fed simultaneously into a semi-Siamese network for image segmentation. The semi-Siamese network has two independent encoders and one shared decoder: the encoders learn modality-private features, these features are first fused by an attention-mechanism module, and the shared decoder then extracts modality-shared features and performs the final segmentation. The invention makes full use of both modality-shared and modality-private information and improves segmentation accuracy.

Description

Multi-modal whole-heart image segmentation method guided by attention mechanism
Technical field
The invention belongs to the field of medical imaging, and in particular relates to a multi-modal whole-heart image segmentation method.
Background technique
According to the 2019 heart disease and stroke statistics report of the American Heart Association (AHA), about 1,055,000 people in the U.S. suffered from coronary heart disease in 2019, including 720,000 new and 335,000 recurrent cases. Early diagnosis and therapy therefore play an important role in reducing the mortality and morbidity of cardiovascular disease. During early diagnosis, doctors usually collect imaging information from different modalities (for example, MR and CT) for a comprehensive examination, and an important prerequisite is the accurate segmentation of cardiac substructures from images of the different modalities. Traditional manual segmentation, however, is very time-consuming and laborious. It is therefore urgent to develop an automatic whole-heart segmentation method.
Although methods based on deep convolutional neural networks have been widely used to segment other organs, their application to the multi-modal whole-heart segmentation task remains limited, for three reasons: 1) modality inconsistency: images from different modalities differ markedly in appearance; 2) complex structure: the different cardiac substructures are connected and can sometimes even overlap; 3) the hearts of different patients also vary in appearance.
In recent years, there have been some attempts at multi-modal whole-heart segmentation. Dou Qi et al. proposed an unsupervised cross-domain adaptation framework with adversarial learning for cross-modality medical image segmentation. Zheng Yefeng et al. proposed a method that jointly learns to translate and segment medical 3D images; it can learn from unpaired datasets while keeping their anatomical structure unchanged. Regarding image generation, CycleGAN can transfer the style of unpaired images from one domain to another, but it lacks label-based shape constraints. Ronneberger et al., inspired by the fully convolutional network (FCN), proposed the U-net architecture, which comprises a contracting path to capture context and a symmetric expanding path to obtain accurate localization, and is widely used in medical image segmentation. None of these methods, however, can fully exploit the information or correlation shared between the two modalities, so they cannot effectively overcome the limitations mentioned above.
Summary of the invention
In order to solve the technical problems mentioned in the above background, the invention proposes a multi-modal whole-heart image segmentation method guided by an attention mechanism, which makes full use of modality-shared and modality-private information and improves segmentation accuracy.
In order to achieve the above technical purposes, the technical solution of the present invention is as follows:
A multi-modal whole-heart image segmentation method guided by an attention mechanism, comprising the following steps:
(1) Cross-modality image generation:
A generative adversarial network is introduced, comprising 2 generators and 2 discriminators, corresponding to CT images and MRI images respectively. The original CT and MRI images are fed into their corresponding generators, each of which generates the image corresponding to the other modality;
(2) Cross-modality feature learning and image segmentation:
A semi-Siamese network is constructed, comprising 2 independent encoders and 1 shared decoder, with an attention-mechanism module placed between the encoders and the decoder. The original CT image together with the MRI image generated from it is fed into one encoder, and the original MRI image together with the CT image generated from it is fed into the other encoder. Each encoder contains multiple down-sampling layers and outputs the private feature spectrum of its modality; the attention-mechanism module fuses the private features of the 2 modalities and feeds the result into the shared decoder, which outputs the segmentation result of the image.
Further, in the generative adversarial network, the cycle-consistency loss function Lcyc(GA,GB) is defined as:
Lcyc(GA,GB) = E[xA~pd(xA)] ‖GA(GB(xA)) − xA‖1 + E[xB~pd(xB)] ‖GB(GA(xB)) − xB‖1
In the above formula, xA, xB are the original CT and MRI image samples respectively, E[xA~pd(xA)] is the expectation over xA obeying the distribution pd(xA), E[xB~pd(xB)] is the expectation over xB obeying the distribution pd(xB), and GA, GB are the generators corresponding to CT images and MRI images respectively;
In the generative adversarial network, the segmentation loss function Lseg(SA/B,GA,GB) is defined as:
Lseg(SA/B,GA,GB) = −(1/N) Σ(i=1..N) [ yAi · log SA/B(GB(xAi)) + yBi · log SA/B(GA(xBi)) ]
In the above formula, SA/B: A→Y, B→Y is the auxiliary segmentation mapping, A denotes the CT modality, B denotes the MRI modality, Y denotes the segmentation labels, i indexes a training sample, N is the total number of training samples, and yA, yB are the true segmentation results in modality A and modality B respectively.
Further, combining the cycle-consistency loss function and the segmentation loss function, the total loss function L(GA,GB,DA,DB,SA/B) is defined as:
L(GA,GB,DA,DB,SA/B)=LGAN(GA,DA)+LGAN(GB,DB)+λLcyc(GA,GB)+γLseg(SA/B,GA,GB)
In the above formula, LGAN(GA,DA) and LGAN(GB,DB) are the adversarial loss functions, DA, DB are the discriminators corresponding to CT images and MRI images respectively, and λ, γ are the weight coefficients of the loss terms.
Further, in the semi-Siamese network, the encoders localize high-resolution features and capture more precise information; the decoder propagates contextual information to higher-resolution layers and learns high-level semantic information.
Further, the process of the attention-mechanism module is as follows:
The feature spectra output by the 2 encoders are first concatenated along the channel dimension to obtain a preliminary fused map. The preliminary fused map is reshaped to obtain map 1, and reshaped and then transposed to obtain map 2. The matrix product of map 1 and map 2 is passed through a softmax function to obtain the attention map. The matrix product of the attention map and the preliminary fused map is then summed element-wise with the preliminary fused map to obtain the final fused feature map.
The above technical solution provides the following beneficial effects:
The present invention performs cross-modality image generation with an improved CycleGAN to enlarge the training set and reduce image-level modality inconsistency. It proposes a new semi-Siamese network guided by a cross-modality attention mechanism, which learns both modality-shared and modality-private features for multi-modal whole-heart image segmentation. The invention effectively addresses the scarcity of labeled 3D whole-heart CT and MRI images and the failure of the prior art to fully exploit cross-modality correlations, and improves segmentation accuracy; it therefore has high application value.
Brief description of the drawings
Fig. 1 is the flowchart of the method of the present invention;
Fig. 2 is the network structure of the present invention;
Fig. 3 is the flowchart of the attention-mechanism module of the present invention;
Fig. 4 shows the segmentation results of the embodiment, comprising two subfigures, (a) and (b).
Specific embodiment
The technical solution of the present invention is described in detail below with reference to the accompanying drawings.
The present invention provides a multi-modal whole-heart image segmentation method guided by an attention mechanism which, as shown in Figs. 1-2, comprises the following steps:
1. Cross-modality image generation:
A generative adversarial network is introduced, comprising 2 generators and 2 discriminators, corresponding to CT images and MRI images respectively; through their mutual game during training, very good outputs can be obtained. The original CT and MRI images are fed into their corresponding generators, each of which generates the image corresponding to the other modality.
To enable the generators to perform adversarial learning from unpaired images, the present invention enforces the constraint that the generated samples GA(GB(xA)) and GB(GA(xB)) remain consistent with the original images, and therefore defines the cycle-consistency loss function:
Lcyc(GA,GB) = E[xA~pd(xA)] ‖GA(GB(xA)) − xA‖1 + E[xB~pd(xB)] ‖GB(GA(xB)) − xB‖1
In the above formula, xA, xB are the original CT and MRI image samples respectively, E[xA~pd(xA)] is the expectation over xA obeying the distribution pd(xA), E[xB~pd(xB)] is the expectation over xB obeying the distribution pd(xB), and GA, GB are the generators corresponding to CT images and MRI images respectively.
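The cycle-consistency term can be sketched numerically as follows (a minimal NumPy illustration, not the patent's implementation; `g_a` and `g_b` stand in for the trained generators):

```python
import numpy as np

def cycle_consistency_loss(x_a, x_b, g_a, g_b):
    # L1 cycle-consistency: translating to the other modality and back
    # should recover the original image.
    # g_a generates CT-style images (B -> A); g_b generates MRI-style images (A -> B).
    loss_a = np.mean(np.abs(g_a(g_b(x_a)) - x_a))  # CT -> MRI -> CT
    loss_b = np.mean(np.abs(g_b(g_a(x_b)) - x_b))  # MRI -> CT -> MRI
    return loss_a + loss_b

# With toy generators that invert each other exactly, the loss is zero.
x_a = np.random.rand(2, 128, 128)  # two fake CT slices
x_b = np.random.rand(2, 128, 128)  # two fake MRI slices
identity = lambda x: x
print(cycle_consistency_loss(x_a, x_b, identity, identity))  # → 0.0
```

In training, `g_a` and `g_b` would be neural networks and the loss would be minimized jointly with the adversarial terms.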
If the two transformations learned by the generators are inverses of each other, the two generators can pass the cycle-consistency check without any penalty while still deforming the data. To prevent this deformation problem, another auxiliary mapping SA/B: A→Y, B→Y is defined, where A denotes the CT modality, B denotes the MRI modality, and Y denotes the segmentation labels, and the cross entropy between the predicted and true segmentation results is taken as a constraining segmentation loss function:
Lseg(SA/B,GA,GB) = −(1/N) Σ(i=1..N) [ yAi · log SA/B(GB(xAi)) + yBi · log SA/B(GA(xBi)) ]
In the above formula, i indexes a training sample, N is the total number of training samples, and yA, yB are the true segmentation results in modality A and modality B respectively.
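The cross-entropy constraint can likewise be sketched with NumPy (an illustrative stand-alone function with made-up inputs; in the network the loss is computed per pixel over the segmentation maps):

```python
import numpy as np

def segmentation_cross_entropy(probs, labels):
    # probs: (N, K) predicted class probabilities per sample (or per pixel);
    # labels: (N,) integer ground-truth classes.
    # Returns the mean negative log-likelihood of the true class.
    n = labels.shape[0]
    return -np.mean(np.log(probs[np.arange(n), labels] + 1e-12))

probs = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
labels = np.array([0, 1])
print(segmentation_cross_entropy(probs, labels))  # -(ln 0.9 + ln 0.8)/2 ≈ 0.164
```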
Combining the cycle-consistency loss function and the segmentation loss function, the total loss function is defined as:
L(GA,GB,DA,DB,SA/B)=LGAN(GA,DA)+LGAN(GB,DB)+λLcyc(GA,GB)+γLseg(SA/B,GA,GB)
In the above formula, LGAN(GA,DA) and LGAN(GB,DB) are the adversarial loss functions, DA, DB are the discriminators corresponding to CT images and MRI images respectively, and λ, γ are the weight coefficients of the loss terms.
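The total objective is then a plain weighted sum of the four terms (the λ and γ values below are illustrative defaults; the patent does not disclose the weights used):

```python
def total_loss(l_gan_a, l_gan_b, l_cyc, l_seg, lam=10.0, gamma=1.0):
    # L = L_GAN(G_A, D_A) + L_GAN(G_B, D_B) + lambda * L_cyc + gamma * L_seg
    return l_gan_a + l_gan_b + lam * l_cyc + gamma * l_seg

print(total_loss(0.5, 0.6, 0.1, 0.2))  # 0.5 + 0.6 + 10*0.1 + 1*0.2 ≈ 2.3
```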
2. Cross-modality feature learning and image segmentation:
A semi-Siamese network is constructed, comprising 2 independent encoders and 1 shared decoder, with an attention-mechanism module placed between the encoders and the decoder. The original CT image together with the MRI image generated from it is fed into one encoder, and the original MRI image together with the CT image generated from it is fed into the other encoder. Each encoder contains multiple down-sampling layers and outputs the private feature spectrum of its modality; the attention-mechanism module fuses the private features of the 2 modalities and feeds the result into the shared decoder, which outputs the segmentation result of the image.
The role of the encoders is to localize high-resolution features and capture more precise information; the role of the decoder is to propagate contextual information to higher-resolution layers and learn high-level semantic information.
The present invention designs a channel-wise attention-mechanism module between the encoders and the decoder. It takes as input the feature spectra of the two modalities, which carry the boundary information of each cardiac substructure, and outputs a new feature spectrum. The process of the attention-mechanism module is shown in Fig. 3. First, the feature spectra output by the 2 encoders (assumed to be of sizes C1×H×W and C2×H×W) are concatenated along the channel dimension to obtain a preliminary fused map of size (C1+C2)×H×W. The preliminary fused map is reshaped to obtain map 1, and reshaped and then transposed to obtain map 2. The matrix product of map 1 and map 2 is passed through a softmax function to obtain the attention map of size (C1+C2)×(C1+C2). The matrix product of the attention map and the preliminary fused map is then summed element-wise with the preliminary fused map, yielding the final fused feature map of size (C1+C2)×H×W.
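The fusion just described can be sketched shape-for-shape in NumPy (an illustrative sketch on random arrays; in the network the inputs are learned feature tensors, and the function names are placeholders):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stabilized softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_attention_fusion(feat_a, feat_b):
    # feat_a: (C1, H, W) private features of one modality;
    # feat_b: (C2, H, W) private features of the other modality.
    fused = np.concatenate([feat_a, feat_b], axis=0)  # (C1+C2, H, W) preliminary fused map
    c, h, w = fused.shape
    map1 = fused.reshape(c, h * w)                    # reshape -> map 1
    map2 = map1.T                                     # reshape + transpose -> map 2
    attn = softmax(map1 @ map2, axis=-1)              # (C1+C2, C1+C2) attention map
    out = (attn @ map1).reshape(c, h, w) + fused      # re-weighted features + element-wise sum
    return out, attn

feat_ct = np.random.rand(4, 8, 8)   # C1 = 4
feat_mri = np.random.rand(6, 8, 8)  # C2 = 6
out, attn = channel_attention_fusion(feat_ct, feat_mri)
print(out.shape, attn.shape)  # (10, 8, 8) (10, 10)
```

Note that the off-diagonal blocks of `attn` pair channels of one modality with channels of the other, while the diagonal blocks pair channels within a modality.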
Because both modalities describe the same heart, it is assumed that they may contain many correlated features; these are reflected in the off-diagonal blocks of the attention map, of sizes C1×C2 and C2×C1. Each modality may also contain modality-exclusive features, which are reflected in the diagonal blocks of sizes C1×C1 and C2×C2.
The effectiveness of the invention is verified below through simulation experiments.
Stochastic gradient descent is performed with the Adam optimizer at a learning rate of 2×10^-4; all other settings follow the reference CycleGAN configuration for training the generators and discriminators. To speed up training, GA/B and DA/B are first pre-trained separately, and the segmentation network is then trained end to end as a whole.
The method of the invention is applied to the public dataset provided by the Multi-Modality Whole Heart Segmentation challenge of MICCAI 2017, the flagship conference in the field of medical imaging. The dataset contains 20 unpaired MRI and 20 CT 3D images, whose substructures were all labeled by radiologists. The segmentation target is to delineate 7 cardiac substructures: left ventricle, left atrium, right ventricle, right atrium, aorta, pulmonary artery, and myocardium. During training, the dataset is divided into a training set (10 samples) and a test set (10 samples), and two-fold cross-validation is performed. The CT modality is denoted A and the MRI modality is denoted B.
Because the acquisition orientations of the data differ, all samples are reoriented to the coronal position using the software ITK-SNAP. All regions with gold-standard labels are cropped out and then cut into 2D slices, 2534 CT and 2208 MRI slices in total. These differently shaped slices are then resized to 128×128.
To measure the gap between the segmentation results and the ground truth, the Dice coefficient is used as the evaluation index. Dice measures the overlap ratio between the true labels and the segmentation result; the higher the Dice, the higher the segmentation accuracy.
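For reference, the Dice coefficient for one substructure label can be computed as follows (a minimal NumPy sketch with made-up label maps):

```python
import numpy as np

def dice_coefficient(pred, target, label):
    # Dice = 2|P ∩ T| / (|P| + |T|) over the voxels of one substructure label.
    p = (pred == label)
    t = (target == label)
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0

pred = np.array([[1, 1, 0],
                 [0, 2, 2]])
target = np.array([[1, 0, 0],
                   [0, 2, 2]])
print(dice_coefficient(pred, target, 1))  # 2*1/(2+1) ≈ 0.667
print(dice_coefficient(pred, target, 2))  # 1.0
```

In the experiments below, this score would be averaged over the 7 substructures to give the "Average" column of Table 1.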
The visual comparison of cardiac segmentation is shown in Fig. 4: subfigure (a) shows the cardiac segmentation visualization from CT to MRI, and subfigure (b) from MRI to CT; "Ours" denotes the segmentation results obtained with the segmentation method of the present invention. As can be seen, the generated images are very similar to the original images, without any obvious deformation or segmentation error.
The segmentation results of different methods are compared in Table 1. As can be seen from the table, the present invention successfully extracts modality-shared features to improve the segmentation accuracy of MRI images. A fully convolutional network (FCN) applied to each of the two modalities is first evaluated as the baseline, followed by U-net applied to each modality. To further verify the effectiveness of the proposed method, CycleGAN alone is also used on this dataset for cross-modality image generation. The comparison shows that the improvement of the proposed method on CT segmentation accuracy is not obvious, but the significant improvement on MRI segmentation accuracy demonstrates that the attention mechanism is effective. Moreover, the attention mechanism effectively avoids the risk of "bad" MRI images misleading "good" CT images.
Table 1
Method Aorta Left atrium Left ventricle Myocardium Right ventricle Right atrium Pulmonary artery Average
FCN_CT 0.8863 0.8163 0.8838 0.8541 0.7885 0.7940 0.7758 0.8284
FCN_MRI 0.6931 0.6882 0.8006 0.7161 0.6759 0.7593 0.6693 0.7146
Unet_CT 0.8992 0.7704 0.8381 0.8162 0.7643 0.8003 0.7947 0.8119
Unet_MRI 0.7719 0.6942 0.7751 0.6961 0.6983 0.7829 0.7171 0.7336
CycleGAN_CT 0.9407 0.8277 0.8362 0.7942 0.8064 0.8134 0.8103 0.8327
CycleGAN_MRI 0.7686 0.6555 0.7612 0.7038 0.6637 0.7658 0.6973 0.7165
Ours_CT 0.9282 0.8131 0.8497 0.7869 0.8066 0.8255 0.8391 0.8356
Ours_MRI 0.7875 0.6940 0.8031 0.7189 0.6733 0.7967 0.7147 0.7412
The embodiments merely illustrate the technical idea of the present invention and do not limit its scope of protection. Any change made to the technical solution on the basis of the technical idea proposed by the present invention falls within the scope of protection of the present invention.

Claims (5)

1. A multi-modal whole-heart image segmentation method guided by an attention mechanism, characterized by comprising the following steps:
(1) Cross-modality image generation:
A generative adversarial network is introduced, comprising 2 generators and 2 discriminators, corresponding to CT images and MRI images respectively; the original CT and MRI images are fed into their corresponding generators, each of which generates the image corresponding to the other modality;
(2) Cross-modality feature learning and image segmentation:
A semi-Siamese network is constructed, comprising 2 independent encoders and 1 shared decoder, with an attention-mechanism module placed between the encoders and the decoder; the original CT image together with the MRI image generated from it is fed into one encoder, and the original MRI image together with the CT image generated from it is fed into the other encoder; each encoder contains multiple down-sampling layers and outputs the private feature spectrum of its modality; the attention-mechanism module fuses the private features of the 2 modalities and feeds the result into the shared decoder, which outputs the segmentation result of the image.
2. The multi-modal whole-heart image segmentation method guided by an attention mechanism according to claim 1, characterized in that, in the generative adversarial network, the cycle-consistency loss function Lcyc(GA,GB) is defined as:
Lcyc(GA,GB) = E[xA~pd(xA)] ‖GA(GB(xA)) − xA‖1 + E[xB~pd(xB)] ‖GB(GA(xB)) − xB‖1
In the above formula, xA, xB are the original CT and MRI image samples respectively, E[xA~pd(xA)] is the expectation over xA obeying the distribution pd(xA), E[xB~pd(xB)] is the expectation over xB obeying the distribution pd(xB), and GA, GB are the generators corresponding to CT images and MRI images respectively;
In the generative adversarial network, the segmentation loss function Lseg(SA/B,GA,GB) is defined as:
Lseg(SA/B,GA,GB) = −(1/N) Σ(i=1..N) [ yAi · log SA/B(GB(xAi)) + yBi · log SA/B(GA(xBi)) ]
In the above formula, SA/B: A→Y, B→Y is the auxiliary segmentation mapping, A denotes the CT modality, B denotes the MRI modality, Y denotes the segmentation labels, i indexes a training sample, N is the total number of training samples, and yA, yB are the true segmentation results in modality A and modality B respectively.
3. The multi-modal whole-heart image segmentation method guided by an attention mechanism according to claim 2, characterized in that, combining the cycle-consistency loss function and the segmentation loss function, the total loss function L(GA,GB,DA,DB,SA/B) is defined as:
L(GA,GB,DA,DB,SA/B)=LGAN(GA,DA)+LGAN(GB,DB)+λLcyc(GA,GB)+γLseg(SA/B,GA,GB)
In the above formula, LGAN(GA,DA) and LGAN(GB,DB) are the adversarial loss functions, DA, DB are the discriminators corresponding to CT images and MRI images respectively, and λ, γ are the weight coefficients of the loss terms.
4. The multi-modal whole-heart image segmentation method guided by an attention mechanism according to claim 1, characterized in that, in the semi-Siamese network, the encoders localize high-resolution features and capture more precise information, and the decoder propagates contextual information to higher-resolution layers and learns high-level semantic information.
5. The multi-modal whole-heart image segmentation method guided by an attention mechanism according to claim 1, characterized in that the process of the attention-mechanism module is as follows:
The feature spectra output by the 2 encoders are first concatenated along the channel dimension to obtain a preliminary fused map; the preliminary fused map is reshaped to obtain map 1, and reshaped and then transposed to obtain map 2; the matrix product of map 1 and map 2 is passed through a softmax function to obtain the attention map; the matrix product of the attention map and the preliminary fused map is then summed element-wise with the preliminary fused map to obtain the final fused feature map.
CN201910461477.2A 2019-05-30 2019-05-30 Multi-modal whole-heart image segmentation method guided by attention mechanism Active CN110288609B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910461477.2A CN110288609B (en) 2019-05-30 2019-05-30 Multi-modal whole-heart image segmentation method guided by attention mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910461477.2A CN110288609B (en) 2019-05-30 2019-05-30 Multi-modal whole-heart image segmentation method guided by attention mechanism

Publications (2)

Publication Number Publication Date
CN110288609A true CN110288609A (en) 2019-09-27
CN110288609B CN110288609B (en) 2021-06-08

Family

ID=68002969

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910461477.2A Active CN110288609B (en) 2019-05-30 2019-05-30 Multi-modal whole-heart image segmentation method guided by attention mechanism

Country Status (1)

Country Link
CN (1) CN110288609B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111179207A (en) * 2019-12-05 2020-05-19 浙江工业大学 Cross-modal medical image synthesis method based on parallel generation network
CN111353499A (en) * 2020-02-24 2020-06-30 上海交通大学 Multi-modal medical image segmentation method, system, storage medium and electronic device
CN111696027A (en) * 2020-05-20 2020-09-22 电子科技大学 Multi-modal image style migration method based on adaptive attention mechanism
CN112308833A (en) * 2020-10-29 2021-02-02 厦门大学 One-shot brain image segmentation method based on circular consistent correlation
CN113177943A (en) * 2021-06-29 2021-07-27 中南大学 Cerebral apoplexy CT image segmentation method
CN113312530A (en) * 2021-06-09 2021-08-27 哈尔滨工业大学 Multi-mode emotion classification method taking text as core
CN113537057A (en) * 2021-07-14 2021-10-22 山西中医药大学 Facial acupuncture point automatic positioning detection system and method based on improved cycleGAN
CN113779298A (en) * 2021-09-16 2021-12-10 哈尔滨工程大学 Medical vision question-answering method based on composite loss
CN114842312A (en) * 2022-05-09 2022-08-02 深圳市大数据研究院 Generation and segmentation method and device for unpaired cross-modal image segmentation model
CN116883247A (en) * 2023-09-06 2023-10-13 感跃医疗科技(成都)有限公司 Unpaired CBCT image super-resolution generation algorithm based on Cycle-GAN

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108564119A (en) * 2018-04-04 2018-09-21 华中科技大学 A kind of any attitude pedestrian Picture Generation Method
CN108897740A (en) * 2018-05-07 2018-11-27 内蒙古工业大学 A kind of illiteracy Chinese machine translation method based on confrontation neural network
CN109325951A (en) * 2018-08-13 2019-02-12 深圳市唯特视科技有限公司 A method of based on the conversion and segmenting medical volume for generating confrontation network
CN109325931A (en) * 2018-08-22 2019-02-12 中北大学 Based on the multi-modality images fusion method for generating confrontation network and super-resolution network
CN109598745A (en) * 2018-12-25 2019-04-09 上海联影智能医疗科技有限公司 Method for registering images, device and computer equipment
CN109637634A (en) * 2018-12-11 2019-04-16 厦门大学 A kind of medical image synthetic method based on generation confrontation network
CN109685813A (en) * 2018-12-27 2019-04-26 江西理工大学 A kind of U-shaped Segmentation Method of Retinal Blood Vessels of adaptive scale information
US20190147582A1 (en) * 2017-11-15 2019-05-16 Toyota Research Institute, Inc. Adversarial learning of photorealistic post-processing of simulation with privileged information
CN109801294A (en) * 2018-12-14 2019-05-24 深圳先进技术研究院 Three-dimensional atrium sinistrum dividing method, device, terminal device and storage medium


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
QINGSONG YANG et al.: "Low-Dose CT Image Denoising Using a Generative Adversarial Network With Wasserstein Distance and Perceptual Loss", IEEE Transactions on Medical Imaging *
屈宗艳: "Research on Image Segmentation Based on Attention Mechanism", China Master's Theses Full-text Database, Information Science and Technology *
李璐 et al.: "Object Segmentation Algorithm Based on Visual Saliency", Journal of Frontiers of Computer Science and Technology *
郑顾平 et al.: "Semantic Segmentation of Multi-scale Fusion Aerial Images Based on Attention Mechanism", Journal of Graphics *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111179207A (en) * 2019-12-05 2020-05-19 浙江工业大学 Cross-modal medical image synthesis method based on parallel generation network
CN111179207B (en) * 2019-12-05 2022-04-08 浙江工业大学 Cross-modal medical image synthesis method based on parallel generation network
CN111353499A (en) * 2020-02-24 2020-06-30 上海交通大学 Multi-modal medical image segmentation method, system, storage medium and electronic device
CN111353499B (en) * 2020-02-24 2022-08-19 上海交通大学 Multi-modal medical image segmentation method, system, storage medium and electronic device
CN111696027A (en) * 2020-05-20 2020-09-22 电子科技大学 Multi-modal image style migration method based on adaptive attention mechanism
CN112308833A (en) * 2020-10-29 2021-02-02 厦门大学 One-shot brain image segmentation method based on circular consistent correlation
CN112308833B (en) * 2020-10-29 2022-09-13 厦门大学 One-shot brain image segmentation method based on circular consistent correlation
CN113312530B (en) * 2021-06-09 2022-02-15 哈尔滨工业大学 Multi-mode emotion classification method taking text as core
CN113312530A (en) * 2021-06-09 2021-08-27 哈尔滨工业大学 Multi-mode emotion classification method taking text as core
CN113177943A (en) * 2021-06-29 2021-07-27 中南大学 Cerebral apoplexy CT image segmentation method
CN113177943B (en) * 2021-06-29 2021-09-07 中南大学 Cerebral apoplexy CT image segmentation method
CN113537057A (en) * 2021-07-14 2021-10-22 山西中医药大学 Facial acupuncture point automatic positioning detection system and method based on improved cycleGAN
CN113779298A (en) * 2021-09-16 2021-12-10 哈尔滨工程大学 Medical vision question-answering method based on composite loss
CN113779298B (en) * 2021-09-16 2023-10-31 哈尔滨工程大学 Medical vision question-answering method based on composite loss
CN114842312A (en) * 2022-05-09 2022-08-02 深圳市大数据研究院 Generation and segmentation method and device for unpaired cross-modal image segmentation model
CN114842312B (en) * 2022-05-09 2023-02-10 深圳市大数据研究院 Generation and segmentation method and device for unpaired cross-modal image segmentation model
CN116883247A (en) * 2023-09-06 2023-10-13 感跃医疗科技(成都)有限公司 Unpaired CBCT image super-resolution generation algorithm based on Cycle-GAN
CN116883247B (en) * 2023-09-06 2023-11-21 感跃医疗科技(成都)有限公司 Unpaired CBCT image super-resolution generation algorithm based on Cycle-GAN

Also Published As

Publication number Publication date
CN110288609B (en) 2021-06-08

Similar Documents

Publication Publication Date Title
CN110288609A (en) A kind of multi-modal whole-heartedly dirty image partition method of attention mechanism guidance
CN111709953B (en) Output method and device in lung lobe segment segmentation of CT (computed tomography) image
CN111833359B (en) Brain tumor segmentation data enhancement method based on generation of confrontation network
Liu et al. CT synthesis from MRI using multi-cycle GAN for head-and-neck radiation therapy
Hu et al. Brain MR to PET synthesis via bidirectional generative adversarial network
CN109598722B (en) Image analysis method based on recurrent neural network
CN108268870A (en) Multi-scale feature fusion ultrasonoscopy semantic segmentation method based on confrontation study
CN105760874B (en) CT image processing system and its CT image processing method towards pneumoconiosis
CN110544264A (en) Temporal bone key anatomical structure small target segmentation method based on 3D deep supervision mechanism
CN109242860A (en) Based on the brain tumor image partition method that deep learning and weight space are integrated
CN107563434A (en) A kind of brain MRI image sorting technique based on Three dimensional convolution neutral net, device
Pluim et al. The truth is hard to make: Validation of medical image registration
Mercan et al. Virtual staining for mitosis detection in breast histopathology
CN109685787A (en) Output method, device in the lobe of the lung section segmentation of CT images
CN107330953A (en) A kind of Dynamic MRI method for reconstructing based on non-convex low-rank
Tobon‐Gomez et al. Realistic simulation of cardiac magnetic resonance studies modeling anatomical variability, trabeculae, and papillary muscles
CN115578404A (en) Liver tumor image enhancement and segmentation method based on deep learning
Lindner et al. Using synthetic training data for deep learning-based GBM segmentation
CN109727197A (en) A kind of medical image super resolution ratio reconstruction method
CN114881848A (en) Method for converting multi-sequence MR into CT
Tobon-Gomez et al. Automatic construction of 3D-ASM intensity models by simulating image acquisition: Application to myocardial gated SPECT studies
Fan et al. TR-Gan: multi-session future MRI prediction with temporal recurrent generative adversarial Network
JP2022077991A (en) Medical image processing apparatus, medical image processing method, medical image processing program, model training apparatus, and training method
CN105913388A (en) Priority constraint colorful image sparse expression restoration method
CN108596900A (en) Thyroid-related Ophthalmopathy medical image data processing unit, method, computer readable storage medium and terminal device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant