CN110322528B - Nuclear magnetic resonance brain image vessel reconstruction method based on 3T and 7T - Google Patents


Info

Publication number
CN110322528B
CN110322528B CN201910566295.1A
Authority
CN
China
Prior art keywords
picture
network
output
vgg
net
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910566295.1A
Other languages
Chinese (zh)
Other versions
CN110322528A (en
Inventor
金心宇
陶建军
金昀程
陈智鸿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201910566295.1A priority Critical patent/CN110322528B/en
Publication of CN110322528A publication Critical patent/CN110322528A/en
Application granted granted Critical
Publication of CN110322528B publication Critical patent/CN110322528B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/003 - Reconstruction from projections, e.g. tomography
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 - Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention provides a nuclear magnetic resonance brain image vessel reconstruction method based on 3T and 7T, which comprises the following steps. Step 1: acquiring a 3T picture and a 7T picture, and preprocessing both to obtain a preprocessed 3T picture and a preprocessed 7T picture. Step 2: taking the preprocessed 3T picture as the input of a U-net network to obtain an output picture. Step 3: respectively inputting the U-net output picture and the preprocessed 7T picture into a VGG-16 network to obtain the output of the 3T picture after the VGG-16 network and the output of the 7T picture after the VGG-16 network. Step 4: performing loss calculation on the output of the U-net network and on the two VGG-16 outputs, obtaining parameters by stochastic gradient descent based on the loss function, updating the U-net network according to the parameters, and inputting the 3T picture into the updated U-net network to obtain the reconstruction result. The invention can reconstruct blood vessels on the 3T picture.

Description

Nuclear magnetic resonance brain image vessel reconstruction method based on 3T and 7T
Technical Field
The invention relates to the technical field of magnetic resonance imaging, in particular to a nuclear magnetic resonance brain image vessel reconstruction method based on 3T and 7T.
Background
Blood vessels that cannot be seen in a 3T magnetic resonance image are clearly visible in a 7T image, but because 7T equipment is expensive and rare, most hospitals still use 3T images. To overcome this problem, it is necessary to provide a 3T- and 7T-based MRI brain image vessel reconstruction method.
Most existing domestic and foreign research on image reconstruction focuses on super-resolution reconstruction. In 2014, Dong et al. proposed SRCNN, a CNN model for super-resolution reconstruction of general natural images. Kim et al. proposed VDSR on the basis of SRCNN, drawing on the VGG network structure used for image classification. Lim et al. proposed the enhanced deep residual network EDSR.
For magnetic resonance images, Wang et al. used CNNs for magnetic resonance image reconstruction, and Chang et al. proposed a U-net-based reconstruction network in the paper "Deep learning for undersampled MRI reconstruction". However, both reconstruct from low-resolution images obtained by k-space undersampling of high-resolution images.
Although various super-resolution reconstruction methods exist, there still exist some problems in the reconstruction of 3T and 7T:
(1) most super-resolution reconstruction networks are mainly used for natural images.
(2) Unlike undersampled low-resolution/high-resolution image pairs, 3T and 7T images are not perfectly registered, so it is difficult to reconstruct directly from pixel-wise differences between the pictures.
Accordingly, there is a need for improvements in the art.
Disclosure of Invention
The invention aims to provide an efficient nuclear magnetic resonance brain image vessel reconstruction method based on 3T and 7T.
In order to solve this technical problem, the invention provides a nuclear magnetic resonance brain image vessel reconstruction method based on 3T and 7T, which comprises the following steps:
Step 1: acquiring a 3T picture and a 7T picture, and preprocessing both to obtain a preprocessed 3T picture and a preprocessed 7T picture;
Step 2: taking the preprocessed 3T picture as the input of a U-net network to obtain an output picture;
Step 3: respectively inputting the U-net output picture and the preprocessed 7T picture into a VGG-16 network to obtain the output of the 3T picture after the VGG-16 network and the output of the 7T picture after the VGG-16 network;
Step 4: performing loss calculation on the output of the U-net network and on the two VGG-16 outputs, obtaining parameters by stochastic gradient descent based on the loss function, updating the U-net network according to the parameters, and inputting the 3T picture into the updated U-net network to obtain the reconstruction result.
The invention is an improvement of the nuclear magnetic resonance brain image vessel reconstruction method based on 3T and 7T: the normalization processing method in the step 1 comprises the following steps:
s_i' = 2 s_i / max(s_i) - 1 (formula 1)
wherein s_i represents each pixel value in a picture and max(s_i) represents the maximum pixel value in that picture, so that each pixel value lies in [-1, 1].
The invention is further improved by the 3T and 7T-based nuclear magnetic resonance brain image vessel reconstruction method: the step 2 comprises the following steps:
step 2-1, constructing a U-net network structure;
step 2-2, adding a tanh function to the output position of the U-net network, as shown in formula 2:
C_out = tanh(x_i) (formula 2)
wherein C_out represents the output of the U-net network; the preprocessed 3T picture is taken as the input x_i of the U-net network to obtain the output picture passing through the U-net; the correspondingly preprocessed 7T picture is taken as the label.
The invention is further improved by the 3T and 7T-based nuclear magnetic resonance brain image vessel reconstruction method: step 4 comprises the following steps:
step 4-1, loss calculation is carried out on the output of the U-net network using an L1 loss function, the objective function of which is shown in formula 3:
E_l = (1/n) Σ_j | x̂_j - y_j | (formula 3)
wherein x̂_j represents the preprocessed 7T picture, y_j represents the output of the U-net network, and E_l represents the loss function of the U-net network;
step 4-2, loss calculation at the feature-map level is carried out on the output of the VGG-16 network using a mean square error function, the objective function of which is shown in formula 4:
E_k = (1/N) Σ_v ( ŷ_v - y_v )² (formula 4)
wherein ŷ_v represents the output of the 7T picture through the VGG-16 network, y_v represents the output of the 3T picture after the VGG-16 network, v indexes each pixel, and N represents the number of all pixels; E_k represents the loss function at the feature level;
step 4-3, the final objective function E is shown in formula 5:
E = E_l + λ E_k (formula 5)
wherein λ is the weight that balances the two losses;
step 4-4, gradient updating is performed using a stochastic gradient descent optimization method to optimize the objective function E; parameters are obtained by training, the U-net network is updated according to the parameters, and the 3T picture is input into the updated U-net network to obtain the reconstruction result.
The invention is further improved by the 3T and 7T-based nuclear magnetic resonance brain image vessel reconstruction method: step 4-4 comprises:
gradient updating is performed using a stochastic gradient descent optimization method to optimize the overall objective function E; 90% of all preprocessed 3T pictures are used as training set pictures and 10% as test set pictures; the training set pictures are used as the input of the U-net network with the corresponding 7T pictures as labels for training, and the updated weights and parameters are saved during training;
and loading the model weight and the parameters obtained by training by the U-net network, and taking the test set picture as input into the U-net network to obtain a reconstruction result.
The nuclear magnetic resonance brain image vessel reconstruction method based on 3T and 7T has the technical advantages that:
the invention designs an error on a characteristic level, adds a pre-training VGG-16 network structure behind a U-net network, and calculates the error output by the characteristic network so as to solve the problem that the picture is difficult to register. The invention brings the technical advantage that the blood vessel can be reconstructed on the 3T picture due to the fact that no blood vessel exists on the 3T picture.
Drawings
The following describes embodiments of the present invention in further detail with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a 3T and 7T-based nuclear magnetic resonance brain image vessel reconstruction method of the present invention.
Detailed Description
The invention will be further described with reference to specific examples, but the scope of the invention is not limited thereto.
Embodiment 1: a nuclear magnetic resonance brain image vessel reconstruction method based on 3T and 7T, as shown in fig. 1, comprising the following steps:
s1: acquiring a plurality of 3T and 7T nuclear magnetic resonance brain images of the same person, and carrying out image preprocessing on the 3T images and the 7T images to obtain normalized images;
preprocessing is the relevant operation performed on the picture, aiming to improve its quality in order to increase the precision and accuracy of the processing algorithm of the next stage. 3T and 7T nuclear magnetic resonance brain images of the same person are registered (the original 3T and 7T images have different slice positions, angles and the like, and the angles and the positions of the 3T and 7T images after registration are unified, namely, each 3T image corresponds to one 7T image). Then to 3T picture x1,x2,…xj,…,xnAnd 7T Picture y1,y2,…yj,…,ynEach picture is normalized in the way shown in formula 1:
s_i' = 2 s_i / max(s_i) - 1 (formula 1)
wherein s_i represents each pixel value in a picture and max(s_i) represents the maximum pixel value in that picture, so that each pixel value lies in [-1, 1].
After normalization, the 3T and 7T pictures are resized to 272 × 272 using the resize function of the OpenCV library, thereby obtaining the preprocessed 3T and 7T pictures.
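A minimal NumPy sketch of the normalization step described above; the exact form of formula 1 is an assumption here (a linear rescaling of non-negative magnitude values into [-1, 1]), and the 272 × 272 resize with OpenCV is omitted to keep the sketch dependency-free:

```python
import numpy as np

def normalize(slice_2d):
    """Scale one slice's pixel values into [-1, 1] (assumed form of formula 1)."""
    s = slice_2d.astype(np.float32)
    # Assumes non-negative MRI magnitude values; maps [0, max] linearly to [-1, 1].
    return 2.0 * s / s.max() - 1.0
```

After this step every slice shares the same value range, which is what makes the tanh output activation in step 2 a sensible choice.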
S2: network construction
S201: a U-net neural network is constructed; the layers are stacked in a U-shaped structure, forming a down-sampling process and an up-sampling process.
Down-sampling extracts features, and up-sampling accomplishes localization.
S202: because the input and output pictures are normalized to [-1, 1], a tanh function needs to be added at the output of the U-net network to ensure that the network's output also lies in [-1, 1], as shown in formula 2:
C_out = tanh(x_i) (formula 2)
wherein C_out represents the output of the U-net network;
taking the preprocessed 3T picture as the input of the U-net network to obtain an output picture passing through the U-net; thus learning the nonlinear mapping relation between the 3T picture and the 7T picture; and taking the 7T picture after the corresponding preprocessing as a label.
S203: a VGG-16 network pretrained on ImageNet is connected after the U-net network to form the U-net-VGG network, the VGG-16 network being the first 16 layers of the VGG-19 network; the output picture from the U-net and the preprocessed 7T picture are input into the pretrained VGG-16 network to obtain the output of the 3T picture after the VGG-16 network and the output of the 7T picture after the VGG-16 network.
Errors at the spatial pixel level are difficult to use for our problem, because the 3T and 7T pictures cannot be registered completely accurately. Therefore an error at the feature level is designed: adding a pretrained VGG-16 network structure after the U-net network effectively improves the model. The fully connected layers and the softmax layer are not needed; the feature map itself is used as the output.
And taking the preprocessed 3T picture as an input of the U-net network, and taking the correspondingly registered preprocessed 7T picture as a label. And respectively inputting the output picture subjected to the U-net and the 7T picture subjected to the preprocessing into the pre-training VGG-16 network to obtain the output of the 3T picture after passing through the VGG-16 network and the output of the 7T picture after passing through the VGG-16 network.
S3 model training and testing
Design of the S301 loss function: loss calculation is carried out on the output of the U-net network using an L1 loss function, the objective function of which is shown in formula 3:
E_l = (1/n) Σ_j | x̂_j - y_j | (formula 3)
wherein x̂_j denotes the preprocessed 7T picture, y_j represents the output of the U-net network, and E_l represents the loss function of the U-net network.
Loss calculation at the feature-map level is carried out on the output of the VGG-16 network using a mean square error function, the objective function of which is shown in formula 4:
E_k = (1/N) Σ_v ( ŷ_v - y_v )² (formula 4)
wherein ŷ_v represents the output of the preprocessed 7T picture after the VGG-16 network, y_v represents the output of the 3T picture after the VGG-16 network, v indexes each pixel, and N represents the number of all pixels. E_k represents the loss function at the feature level.
The final objective function is shown in formula 5:
E = E_l + λ E_k (formula 5)
wherein λ is the weight that balances the two losses and E is the overall objective function;
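Formulas 3 through 5 combine into one scalar loss. A minimal NumPy sketch follows; the value λ = 0.5 is an arbitrary example, not from the patent:

```python
import numpy as np

def total_loss(y_unet, x7t, f3t, f7t, lam=0.5):
    """E = E_l + lam * E_k (formulas 3-5); lam is an assumed example weight."""
    e_l = np.mean(np.abs(x7t - y_unet))   # formula 3: L1 between 7T label and U-net output
    e_k = np.mean((f7t - f3t) ** 2)       # formula 4: MSE between VGG-16 feature maps
    return e_l + lam * e_k                # formula 5: weighted sum
```

Here y_unet and x7t are image arrays, while f3t and f7t are the corresponding VGG-16 feature maps.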
s302, a random gradient descent optimization method is used for gradient updating, so that the overall objective function E is optimized, and 90% of all preprocessed 3T pictures are used as training set pictures and 10% are used as test set pictures. And taking the training set picture as the input of the U-net network, and taking the corresponding 7T picture as a label for training. Note that the VGG-16 network parameters are not updated during training here. The purpose of this is to have the U-net have an integral performance during training, since only the U-net network is used during testing. And saving the updated weight and parameters in the training process. And (5) network training is carried out for 20 ten thousand times to obtain a trained pt model file.
S303: in the testing stage, only the U-net network is used; the model weights and parameters obtained by training are loaded, and the test set pictures are used as input to obtain reconstruction results. Blood vessels absent from the 3T pictures are successfully recovered in the reconstructed pictures.
Finally, it is also noted that the above-mentioned lists merely illustrate a few specific embodiments of the invention. It is obvious that the invention is not limited to the above embodiments, but that many variations are possible. All modifications which can be derived or suggested by a person skilled in the art from the disclosure of the present invention are to be considered within the scope of the invention.

Claims (3)

1. A nuclear magnetic resonance brain image vessel reconstruction method based on 3T and 7T is characterized in that: the method comprises the following steps:
step 1: acquiring a 3T picture and a 7T picture, and carrying out image preprocessing on the 3T picture and the 7T picture to obtain a preprocessed 3T picture and a preprocessed 7T picture;
step 2: based on the U-net network, taking the preprocessed 3T picture as the input of the U-net network to obtain an output picture passing through the U-net;
the method comprises the following steps:
step 2-1, constructing a U-net network structure;
step 2-2, adding a tanh function at the output of the U-net network according to the formula C_out = tanh(x_i);
wherein C_out represents the output of the U-net network; the preprocessed 3T picture is taken as the input x_i of the U-net network to obtain an output picture passing through the U-net; the correspondingly preprocessed 7T picture is taken as the label;
and step 3: respectively inputting the output picture subjected to the U-net and the 7T picture subjected to the preprocessing into a VGG-16 network to obtain the output of the 3T picture after passing through the VGG-16 network and the output of the 7T picture after passing through the VGG-16 network;
a VGG-16 network pretrained on ImageNet is connected after the U-net network to form the U-net-VGG network, the VGG-16 network being the first 16 layers of the VGG-19 network; the output picture from the U-net and the preprocessed 7T picture are input into the pretrained VGG-16 network to obtain the output of the 3T picture after the VGG-16 network and the output of the 7T picture after the VGG-16 network;
step 4: loss calculation is carried out on the output of the U-net network, the output of the 3T picture after the VGG-16 network and the output of the 7T picture after the VGG-16 network; parameters are obtained by a stochastic gradient descent method based on the loss function, the U-net network is updated according to the parameters, and the 3T picture is input into the updated U-net network to obtain a reconstruction result;
the method comprises the following steps:
step 4-1, loss calculation is carried out on the output of the U-net network using an L1 loss function, the objective function being:
E_l = (1/n) Σ_j | x̂_j - y_j |
wherein x̂_j represents the preprocessed 7T picture, y_j represents the output of the U-net network, and E_l represents the loss function of the U-net network;
step 4-2, loss calculation at the feature-map level is carried out on the output of the VGG-16 network using a mean square error function, the objective function being:
E_k = (1/N) Σ_v ( ŷ_v - y_v )²
wherein ŷ_v represents the output of the 7T picture through the VGG-16 network, y_v represents the output of the 3T picture after the VGG-16 network, v indexes each pixel, and N represents the number of all pixels; E_k represents the loss function at the feature level;
step 4-3, the final objective function E is:
E = E_l + λ E_k
wherein λ is the weight that balances the two losses;
step 4-4, gradient updating is performed using a stochastic gradient descent optimization method to optimize the objective function E; parameters are obtained by training, the U-net network is updated according to the parameters, and the 3T picture is input into the updated U-net network to obtain the reconstruction result.
2. The brain image vessel reconstruction method based on 3T and 7T magnetic resonance according to claim 1, characterized in that: the normalization processing method in the preprocessing of the step 1 comprises the following steps:
s_i' = 2 s_i / max(s_i) - 1
wherein s_i represents each pixel value in a picture and max(s_i) represents the maximum pixel value in the picture, so that each pixel value lies in [-1, 1].
3. The brain image vessel reconstruction method based on 3T and 7T magnetic resonance according to claim 2, characterized in that: step 4-4 comprises:
gradient updating is performed using a stochastic gradient descent optimization method to optimize the overall objective function E; 90% of all preprocessed 3T pictures are used as training set pictures and 10% as test set pictures; the training set pictures are used as the input of the U-net network with the corresponding 7T pictures as labels for training, and the updated weights and parameters are saved during training;
and loading the model weight and the parameters obtained by training by the U-net network, and taking the test set picture as input into the U-net network to obtain a reconstruction result.
CN201910566295.1A 2019-06-26 2019-06-26 Nuclear magnetic resonance brain image vessel reconstruction method based on 3T and 7T Active CN110322528B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910566295.1A CN110322528B (en) 2019-06-26 2019-06-26 Nuclear magnetic resonance brain image vessel reconstruction method based on 3T and 7T


Publications (2)

Publication Number Publication Date
CN110322528A CN110322528A (en) 2019-10-11
CN110322528B true CN110322528B (en) 2021-05-14

Family

ID=68120456

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910566295.1A Active CN110322528B (en) 2019-06-26 2019-06-26 Nuclear magnetic resonance brain image vessel reconstruction method based on 3T and 7T

Country Status (1)

Country Link
CN (1) CN110322528B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114359194A (en) * 2021-12-27 2022-04-15 浙江大学 Multi-mode stroke infarct area image processing method based on improved U-Net network
CN115240032B (en) * 2022-07-20 2023-06-23 中国人民解放军总医院第一医学中心 Method for generating 7T magnetic resonance image based on 3T magnetic resonance image of deep learning

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108629816A (en) * 2018-05-09 2018-10-09 复旦大学 The method for carrying out thin layer MR image reconstruction based on deep learning
CN109215014A (en) * 2018-07-02 2019-01-15 中国科学院深圳先进技术研究院 Training method, device, equipment and the storage medium of CT image prediction model

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7809155B2 (en) * 2004-06-30 2010-10-05 Intel Corporation Computing a higher resolution image from multiple lower resolution images using model-base, robust Bayesian estimation
US9412076B2 (en) * 2013-07-02 2016-08-09 Surgical Information Sciences, Inc. Methods and systems for a high-resolution brain image pipeline and database program
WO2016019484A1 (en) * 2014-08-08 2016-02-11 Xiaoou Tang An apparatus and a method for providing super-resolution of a low-resolution image
US10242292B2 (en) * 2017-06-13 2019-03-26 Digital Surgery Limited Surgical simulation for training detection and classification neural networks
US10477641B2 (en) * 2017-09-24 2019-11-12 Massachusetts Institute Of Technology Methods and apparatus for image analysis for lighting control
CN107977930A (en) * 2017-12-09 2018-05-01 北京花开影视制作有限公司 A kind of image super-resolution method and its system
CN108416276B (en) * 2018-02-12 2022-05-24 浙江大学 Abnormal gait detection method based on human lateral gait video
CN108596065A (en) * 2018-04-13 2018-09-28 深圳职业技术学院 One kind is based on deep semantic segmentation marine oil spill detecting system and method
CN109215013B (en) * 2018-06-04 2023-07-21 平安科技(深圳)有限公司 Automatic bone age prediction method, system, computer device and storage medium
CN108921851B (en) * 2018-06-06 2021-07-09 深圳市未来媒体技术研究院 Medical CT image segmentation method based on 3D countermeasure network
CN109447897B (en) * 2018-10-24 2023-04-07 文创智慧科技(武汉)有限公司 Real scene image synthesis method and system
CN109903223B (en) * 2019-01-14 2023-08-25 北京工商大学 Image super-resolution method based on dense connection network and generation type countermeasure network
CN109635882B (en) * 2019-01-23 2022-05-13 福州大学 Salient object detection method based on multi-scale convolution feature extraction and fusion

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108629816A (en) * 2018-05-09 2018-10-09 复旦大学 The method for carrying out thin layer MR image reconstruction based on deep learning
CN109215014A (en) * 2018-07-02 2019-01-15 中国科学院深圳先进技术研究院 Training method, device, equipment and the storage medium of CT image prediction model

Also Published As

Publication number Publication date
CN110322528A (en) 2019-10-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant