CN115375544A - Super-resolution method for generating countermeasure network based on attention and UNet network - Google Patents

Super-resolution method for generating countermeasure network based on attention and UNet network Download PDF

Info

Publication number
CN115375544A
CN115375544A
Authority
CN
China
Prior art keywords
layer
attention
resolution
unet
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210941977.8A
Other languages
Chinese (zh)
Inventor
苏进
李学俊
王华彬
张弓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Canada Institute Of Health Engineering Hefei Co ltd
Original Assignee
China Canada Institute Of Health Engineering Hefei Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Canada Institute Of Health Engineering Hefei Co ltd filed Critical China Canada Institute Of Health Engineering Hefei Co ltd
Priority to CN202210941977.8A priority Critical patent/CN115375544A/en
Publication of CN115375544A publication Critical patent/CN115375544A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4053Super resolution, i.e. output image resolution higher than sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4046Scaling the whole image or part thereof using neural networks
    • G06T5/73
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10104Positron emission tomography [PET]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a super-resolution method for generating a countermeasure network based on attention and a UNet network, belonging to the technical field of medical image reconstruction. First, high-resolution and low-resolution data sets are acquired for training and testing; then a super-resolution generative adversarial network with a channel attention block and a UNet discriminator, based on the attention and UNet network, is constructed; the network is then trained, and finally the quality metrics of the generated images are tested. Building on SRGAN, the invention improves the generator structure by introducing a channel attention mechanism, so that the generator network attends more to channels containing high-frequency information. Incorporating an attention mechanism into the UNet discriminator allows it to learn the representation of edges in the image and emphasize selected details, enabling better focus on lesions or vulnerable sites and further improving the quality of the reconstructed image.

Description

Super-resolution method for generating countermeasure network based on attention and UNet network
Technical Field
The invention belongs to the technical field of medical image reconstruction, and particularly relates to a super-resolution method for generating a countermeasure network based on an attention and UNet network.
Background
Currently, in the field of nuclear medicine, positron emission tomography (PET) is an advanced imaging technique. PET imaging detects pairs of gamma rays emitted indirectly by a positron-emitting radionuclide (also known as a radiopharmaceutical or radiotracer). The tracer, a biologically active molecule, usually a sugar used for cellular energy, is injected into a vein. The PET system's sensitive detectors capture the gamma radiation emitted inside the body, and software triangulates the emission sources to create a three-dimensional tomographic image of tracer concentration in the body.
In the imaging process of medical images, medical devices are often constrained by various conditions, such as physical limitations of the imaging devices, differences in the radioactive-element dose, and how much radioactivity the patient can tolerate, so the resolution of the obtained images is quite limited. However, in clinical use, especially in the face of some difficult and complicated conditions, the demand for high-resolution images is increasing.
In the prior art, the Super-Resolution Generative Adversarial Network (SRGAN), proposed in 2016, is widely used and achieves good results. However, prior work and its improvements have mainly focused on simulating a more complex and realistic degradation process or building a better generator. The importance of the discriminator cannot be ignored, because it provides the generator with the direction to generate a better image; thus SRGAN still has room for improvement in increasing image resolution.
Through retrieval, Chinese patent application No. 202110375243.3, filed 2021.04.07 and titled "Image enhancement system and image enhancement method based on generation countermeasure network", discloses a system comprising an original-image acquisition module and a countermeasure network model. The countermeasure network model comprises an image synthesis module, an image discrimination module and a multi-loss function. The image synthesis module outputs a synthesized image, the image discrimination module outputs an enhanced image, and the multi-loss function senses image loss from the output of the image discrimination module and adjusts the synthesized image output by the image synthesis module to improve the enhancement effect; the loss function comprises an adversarial loss, a cycle-consistency loss, a perceptual loss and a total loss. However, that application is mainly aimed at enhancement of natural images. Medical images are characterized by high noise and a small proportion of key information, and different image-processing modes may have a great influence. That application differs greatly from the present invention in the image synthesis module, the image discrimination module and the multi-loss function, so its effect on enhancing medical images is not obvious.
Disclosure of Invention
1. Technical problem to be solved by the invention
In view of the limited resolution of images obtained in prior-art imaging, the invention provides a super-resolution method for generating a countermeasure network based on attention and a UNet network, improving the resolution of brain PET images by introducing an attention mechanism and the UNet network.
2. Technical scheme
In order to achieve the purpose, the technical scheme provided by the invention is as follows:
the super-resolution method for generating the countermeasure network based on the attention and UNet network comprises the following steps:
acquiring a brain PET high-resolution image data set, and acquiring a high-resolution image training set and a high-resolution image testing set;
step two, respectively carrying out downsampling on the high-resolution image training set and the test set obtained in the step one to obtain a low-resolution image training set and a low-resolution image test set;
step three, constructing a super-resolution generation countermeasure network which is provided with a channel attention block and a UNet discriminator and is based on the attention and UNet network;
step four, normalizing the high-resolution image training set and the low-resolution image training set, calculating the total loss of the discriminator by using binary cross entropy loss, and training the network by using the high-resolution image training set and the low-resolution image training set;
and step five, testing the anti-network generated based on the attention and the super-resolution of the UNet network by using the high-resolution image test set and the low-resolution image test set obtained in the step one and the step two, and evaluating the performance of the network.
3. Advantageous effects
Compared with the prior art, the technical scheme provided by the invention has the following remarkable effects:
(1) According to the super-resolution method for generating the confrontation network based on the attention and the UNet network, on the basis of the traditional SRGAN, a channel attention mechanism is introduced into the super-resolution confrontation network generator, so that the generator can generate more detailed characteristics, and images with higher resolution can be acquired.
(2) The super-resolution method for generating the countermeasure network based on the attention and the UNet network strengthens the edge part by using the UNet network improved discriminator with the attention mechanism, and can more accurately identify the image characteristics, thereby helping doctors to obtain better diagnosis results.
(3) The super-resolution method for generating the countermeasure network based on the attention and UNet network can effectively improve the operation efficiency of the countermeasure network while ensuring the high resolution of an output image by improving the loss function.
Drawings
Fig. 1 (a) is a block diagram of the overall network structure of AU-SRGAN in the present invention, wherein G denotes a generator and D denotes a discriminator;
FIG. 1 (b) is a block diagram of the overall structure of AU-SRGAN generator in the present invention;
FIG. 1 (c) is a block diagram showing the structure of the CAB channel according to the present invention;
FIG. 1 (d) is a block diagram of the overall structure of the AU-SRGAN discriminator in the present invention;
FIG. 1 (e) is a block diagram of the structure of AB of the present invention;
FIG. 1 (f) is a block diagram of the CB structure of the present invention;
FIG. 2 (a) is a low resolution graph of an input to be tested;
FIG. 2 (b) is a graph showing the results obtained by Nearest neighbor interpolation (Nearest);
FIG. 2 (c) is a graph showing the results of Bicubic interpolation;
FIG. 2 (d) is a graph of the results obtained using SRGAN;
FIG. 2 (e) is a graph showing the results obtained using AU-SRGAN;
Fig. 3 is a block diagram of the process flow of AU-SRGAN of the present invention.
Detailed Description
For a further understanding of the present invention, reference is made to the following detailed description taken in conjunction with the accompanying drawings and examples.
Examples
With reference to fig. 3, the super-resolution method for generating a countermeasure network based on an attention and UNet network according to this embodiment improves the discriminator network on the basis of the SRGAN network, and further improves the super-resolution capability, the picture quality, and the edge effect of the network.
As shown in fig. 1 (a) -1 (f), the super-resolution method for generating a countermeasure network based on an attention and UNet network of the present embodiment includes the following steps:
Step one, acquiring a brain PET high-resolution image data set and dividing it into a high-resolution image training set and a high-resolution image test set according to a set ratio, which can be adjusted as required. The training set comprises 3000 images and the test set comprises 100 images. The training-set images are randomly cropped to 88 × 88 pixels.
And step two, performing bicubic downsampling on the training set and test set from step one using a Bicubic function to obtain a low-resolution picture training set and a low-resolution picture test set, and randomly cropping them to 88 × 88 pixels.
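A minimal NumPy sketch of this data-preparation step (illustrative only; the helper names are hypothetical, and block averaging is used here as a simple stand-in for true bicubic downsampling, which in practice would use a bicubic resize routine):

```python
import numpy as np

def random_crop(img: np.ndarray, size: int = 88, rng=None) -> np.ndarray:
    """Randomly crop a 2-D image to size x size pixels (step one)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    h, w = img.shape
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    return img[top:top + size, left:left + size]

def downsample(img: np.ndarray, r: int = 2) -> np.ndarray:
    """Downsample by factor r via block averaging (stand-in for bicubic)."""
    h, w = img.shape
    return img[:h - h % r, :w - w % r].reshape(h // r, r, w // r, r).mean(axis=(1, 3))

hr = np.random.default_rng(1).random((256, 256))   # a synthetic stand-in slice
hr_patch = random_crop(hr, 88)
lr_patch = downsample(hr_patch, 2)
print(hr_patch.shape, lr_patch.shape)              # (88, 88) (44, 44)
```

Each HR patch and its downsampled LR counterpart form one training pair.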
Step three, constructing a super-resolution generation countermeasure network which is provided with a channel attention block and a UNet discriminator and is based on the attention and UNet network:
1. a channel attention mechanism is introduced in the SRGAN generator, using a channel attention block instead of the conventional residual block:
as shown in fig. 1 (a), the AU-srna (Attention and unext Super Resolution generation adaptive Network, which generates a countermeasure Network based on Attention and the Super Resolution of an unext Network) includes two modules, a generator and a discriminator.
As shown in fig. 1 (b), the generator of AU-SRGAN is structured, in order, as follows: the first layer is a convolutional layer with 64 convolution kernels of size 9 × 9 and stride 1; the second layer is a PReLU activation layer; the third layer is 5 CAB blocks with the same structure; the fourth layer is a convolutional layer containing 64 convolution kernels of size 3 × 3 and stride 1; the fifth layer is a batch normalization (BN) layer; the sixth layer is a skip-connection layer; the seventh layer is two upsampling blocks followed by one convolutional layer consisting of 1 convolution kernel of size 9 × 9 with stride 1. The structure of a CAB block is, in order: the first layer is a convolutional layer containing 64 convolution kernels of size 3 × 3 and stride 1; the second layer is a BN layer; the third layer is a PReLU activation layer; the fourth layer has the same structure as the first layer; the fifth layer is a channel attention layer; the sixth layer is a BN layer; the last layer is a skip-connection layer that sums element-wise. The structure of the channel attention layer is: the first layer is a global average pooling layer; the second layer is a convolutional layer containing 4 convolution kernels of size 1 × 1 and stride 1; the third layer is a PReLU activation layer; the fourth layer is a convolutional layer containing 64 convolution kernels of size 1 × 1 and stride 1; the fifth layer is a sigmoid activation layer; the last layer is a skip-connection layer that multiplies element-wise.
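The channel attention layer described above can be sketched in NumPy as follows (a minimal illustration, not the patented implementation; the 1 × 1 convolutions reduce 64 channels to 4 and back, and the weights here are random placeholders for learned parameters):

```python
import numpy as np

def prelu(x, a=0.25):
    return np.where(x > 0, x, a * x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w_down, w_up):
    """feat: (C, H, W). Squeeze-and-excitation style channel attention:
    global average pool -> 1x1 conv (64 -> 4) -> PReLU -> 1x1 conv (4 -> 64)
    -> sigmoid -> element-wise rescaling of the input (the final skip layer)."""
    pooled = feat.mean(axis=(1, 2))          # global average pooling, shape (C,)
    z = prelu(w_down @ pooled)               # a 1x1 conv on pooled features is a matmul
    scale = sigmoid(w_up @ z)                # per-channel gate in (0, 1)
    return feat * scale[:, None, None]       # multiplicative skip connection

rng = np.random.default_rng(0)
feat = rng.random((64, 22, 22))
w_down = rng.standard_normal((4, 64)) * 0.1  # 64 -> 4 channels
w_up = rng.standard_normal((64, 4)) * 0.1    # 4 -> 64 channels
out = channel_attention(feat, w_down, w_up)
print(out.shape)                             # (64, 22, 22)
```

Because the gate lies in (0, 1), each channel of the output is a damped copy of the input, which is how the block emphasizes informative channels.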
The structure of each upsampling block is: a convolutional layer with 256 convolution kernels of size 3 × 3 and stride 1, followed by a 2× sub-pixel (pixel-shuffle) upsampling layer, and finally a PReLU activation layer, connected in series.
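The 2× sub-pixel step can be illustrated with a NumPy pixel shuffle (a sketch of the channel-to-space rearrangement only; the preceding 3 × 3 convolution and PReLU are omitted):

```python
import numpy as np

def pixel_shuffle(x: np.ndarray, r: int = 2) -> np.ndarray:
    """Rearrange (C*r^2, H, W) -> (C, H*r, W*r), as a sub-pixel layer does."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)             # split channels into (C, r, r)
    x = x.transpose(0, 3, 1, 4, 2)           # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)        # interleave into spatial dims

x = np.arange(4 * 2 * 2, dtype=float).reshape(4, 2, 2)  # C*r^2 = 4, so C = 1, r = 2
y = pixel_shuffle(x, 2)
print(y.shape)   # (1, 4, 4)
```

Four 2 × 2 channels become one 4 × 4 channel, so resolution doubles without any interpolation.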
2. In the discriminator of the above super-resolution generation countermeasure network, a UNet discriminator with attention blocks is used instead of a conventional discriminator, and it is combined with the above generator to form the complete super-resolution generation countermeasure network:
as shown in fig. 1 (d), the structure of the arbiter AUD of the AU-srna sequentially is: the first horizontal layer is a convolution Block formed by combining a frequency spectrum layer and a convolution layer containing 64 convolution layers with the step size of 1 and the size of 3 multiplied by 3 convolution kernels, and output data are copied and then respectively transmitted to an Attention Block (AB) of the vertical layer and a convolution Block of the next horizontal layer; the structure and output of the second layer and the third layer are the same as those of the first layer; the transverse fourth layer is a rolling Block and transmits an output result to a next layer of rolling blocks and a longitudinal third layer of Connecting Blocks (CB); the horizontal fifth layer transmits data to the three AB blocks respectively in a gate signal; the sixth and seventh horizontal layers are composed of an input data stream, a spectrum hierarchy layer and an output data stream; the last layer is composed of three convolution blocks.
The structure of the attention block AB is: the first layer adds the gating signal and the UNet input features element-wise; the second layer is a ReLU layer; the third layer is a sigmoid function layer; the fourth layer is a resampling layer; the last layer multiplies the fourth layer's result element-wise with the UNet input features.
The structure of the connecting block CB is: the first layer upsamples the data input from the UNet path by bilinear interpolation; the second layer is a spectral normalization layer; the third layer concatenates the input data obtained from the AB with the data obtained after the second layer.
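The AB computation above can be sketched in NumPy, interpreted as the standard attention gate (add gating and skip features, ReLU, sigmoid, multiply); this interpretation and the random placeholder weights are assumptions, not the patented parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv1x1(feat, weight):
    """1x1 convolution over channels: weight (C_out, C_in), feat (C_in, H, W)."""
    return np.einsum('oc,chw->ohw', weight, feat)

def attention_gate(x_skip, g, w_x, w_g, w_psi):
    """Attention gate: project skip features and gating signal, add element-wise,
    ReLU, 1x1 conv to one channel, sigmoid, then multiply the resulting spatial
    attention map back onto the skip features."""
    a = np.maximum(conv1x1(x_skip, w_x) + conv1x1(g, w_g), 0.0)  # add + ReLU
    psi = sigmoid(conv1x1(a, w_psi))     # (1, H, W) attention map in (0, 1)
    return x_skip * psi                  # gate the skip connection

rng = np.random.default_rng(0)
x_skip = rng.random((64, 11, 11))
g = rng.random((64, 11, 11))
w_x = rng.standard_normal((32, 64)) * 0.1
w_g = rng.standard_normal((32, 64)) * 0.1
w_psi = rng.standard_normal((1, 32)) * 0.1
out = attention_gate(x_skip, g, w_x, w_g, w_psi)
print(out.shape)   # (64, 11, 11)
```

The spatial map psi lets the discriminator weight edge regions more heavily before the features are concatenated in the CB.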
And step four, normalizing with a sigmoid function, calculating the total loss of the discriminator using binary cross-entropy loss, and training the network. The loss of AU-SRGAN consists of pixel loss, content loss and adversarial loss. The total loss L_{SR} of AU-SRGAN is expressed as:

L_{SR} = L_G + L_D    (1)

where L_G denotes the loss function of the generator and L_D is the discriminator loss. The generator loss L_G of AU-SRGAN is:

L_G = L_{MSE} + \gamma L_{VGG} + \delta L_{GEN}    (2)

where \gamma and \delta are hyperparameters, L_{MSE} is the pixel loss, L_{VGG} is the content loss of the network, and L_{GEN} is the adversarial loss of the network. The pixel loss L_{MSE} of the present network uses the pixel-level standard mean square error (MSE), expressed as:

L_{MSE} = \frac{1}{r^2 W H} \sum_{x=1}^{rW} \sum_{y=1}^{rH} \left( I^{HR}_{x,y} - G_{\theta_G}(I^{LR})_{x,y} \right)^2    (3)

where r is the downsampling factor, W and H denote the width and height of the image, I^{LR} and I^{HR} denote the low-resolution and high-resolution PET images respectively, G_{\theta_G}(I^{LR}) denotes the super-resolution image produced by the generator, and the subscripts x, y index a specific pixel.
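Equation (3) translates directly into code; a sketch assuming image arrays scaled to [0, 1]:

```python
import numpy as np

def pixel_mse(hr: np.ndarray, sr: np.ndarray) -> float:
    """Pixel-wise MSE between the HR ground truth and the generated SR image,
    i.e. equation (3) with the 1/(r^2 W H) factor folded into the mean over
    all rW x rH pixels."""
    assert hr.shape == sr.shape
    return float(np.mean((hr - sr) ** 2))

hr = np.ones((88, 88))
sr = np.full((88, 88), 0.9)     # every pixel differs by 0.1
print(pixel_mse(hr, sr))        # ≈ 0.01
```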
The content loss, computed from the feature maps of the j-th convolution before the i-th max-pooling layer of the VGG19 network, is denoted L_{VGG} and can be expressed as:

L_{VGG} = \frac{1}{W_{i,j} H_{i,j}} \sum_{x=1}^{W_{i,j}} \sum_{y=1}^{H_{i,j}} \left( \phi_{i,j}(I^{HR})_{x,y} - \phi_{i,j}(G_{\theta_G}(I^{LR}))_{x,y} \right)^2    (4)

where W_{i,j} and H_{i,j} denote the dimensions of the corresponding feature maps in the VGG19 network, and \phi_{i,j} denotes the feature map obtained by the j-th convolution before the i-th max-pooling layer of the VGG19 network. The adversarial loss L_{GEN}, defined from the probability output by the discriminator, is:

L_{GEN} = \sum_{n=1}^{N} -\log D_{\theta_D}(G_{\theta_G}(I^{LR}))    (5)

where D_{\theta_D}(G_{\theta_G}(I^{LR})) denotes the probability, as judged by the discriminator, that the image produced by the generator is a real image.
A UNet with an attention mechanism is adopted for the discriminator loss L_D of the AU-SRGAN discriminator. The output of the UNet discriminator is a W × H matrix, each element of which represents the likelihood that the pixel it corresponds to is real. To calculate the total loss of the discriminator, the output is normalized with the sigmoid function and the loss is computed as a binary cross-entropy. Let C be the output matrix, define D = \sigma(C), and let x_r be real data and x_f be fake data. The loss is expressed as:

L_D = -\mathbb{E}\left[\log D(x_r)\right] - \mathbb{E}\left[\log\left(1 - D(x_f)\right)\right]    (6)
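A NumPy sketch of equation (6) (illustrative only; c_real and c_fake stand for the raw W × H output matrices of the UNet discriminator on a real and a generated image):

```python
import numpy as np

def sigmoid(c):
    return 1.0 / (1.0 + np.exp(-c))

def discriminator_bce(c_real: np.ndarray, c_fake: np.ndarray) -> float:
    """Equation (6): normalize the W x H output matrices with the sigmoid,
    then take binary cross-entropy with targets 1 (real) and 0 (fake),
    averaged over all pixels."""
    d_real = sigmoid(c_real)
    d_fake = sigmoid(c_fake)
    return float(-np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake)))

# An undecided discriminator (all-zero logits) sits at D = 0.5 everywhere,
# giving a loss of 2*ln(2).
c_real = np.zeros((88, 88))
c_fake = np.zeros((88, 88))
print(round(discriminator_bce(c_real, c_fake), 4))   # 1.3863
```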
and step five, training the network by using the high-resolution image training set obtained in the step one and the low-resolution image training set obtained in the step two. The parameters are set as follows: the blocksize is 16, the number of iterations is 200, and the coefficients γ, δ of the loss function are 6 × 10, respectively -3 , 1×10 -3 The optimizer for optimizing the loss uses Adam, and the learning rate is set to 1 × 10 -3
Step six, testing and verifying the trained AU-SRGAN model using the high-resolution image test set from step one and the low-resolution image test set from step two. To better evaluate the effectiveness of the model, the results are verified with the common image-quality metrics PSNR and SSIM, and AU-SRGAN is compared with bicubic interpolation (Bicubic), nearest-neighbor interpolation (Nearest) and SRGAN respectively. The experimental results are as follows:
TABLE 1 index values for image quality evaluation
(Table 1 is reproduced as an image in the original publication.)
As can be seen from the table, the AU-SRGAN model is superior to other methods in both PSNR and SSIM.
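PSNR, one of the two metrics reported in Table 1, can be computed as follows (a generic sketch, not the authors' evaluation script; images are assumed scaled to [0, 1]):

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float('inf')        # identical images
    return float(10.0 * np.log10(max_val ** 2 / mse))

ref = np.ones((88, 88))
noisy = np.full((88, 88), 0.9)     # uniform error of 0.1, so MSE = 0.01
print(round(psnr(ref, noisy), 2))  # 20.0 dB
```

Higher PSNR means lower reconstruction error; SSIM additionally compares local luminance, contrast and structure.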
As shown in figs. 2 (b) to 2 (e), of the super-resolution results obtained for fig. 2 (a) by the different methods, those from the interpolation methods are blurred; the image obtained with SRGAN improves in quality but is still not sharp at the edges. With AU-SRGAN, the result at the edges is improved and the texture is clearer, demonstrating the effectiveness of the method in improving brain PET image quality.
The present invention and its embodiments have been described above schematically, and the description is not restrictive; what is shown in the drawings is only one embodiment of the present invention, and the actual structure is not limited thereto. Therefore, without departing from the spirit of the present invention, modifications and similar structures devised by a person of ordinary skill in the art without creative design shall fall within the scope of protection of the present invention.

Claims (10)

1. Super-resolution method for generating a countermeasure network based on an attention and UNet network, characterized in that it comprises the steps of,
acquiring a brain PET high-resolution image data set, and acquiring a high-resolution image training set and a testing set;
step two, respectively carrying out downsampling on the high-resolution image training set and the test set obtained in the step one to obtain a low-resolution image training set and a low-resolution image test set;
step three, constructing a super-resolution generation countermeasure network which is provided with a channel attention block and a UNet discriminator and is based on the attention and UNet network;
step four, normalizing the high-resolution image training set and the low-resolution image training set, calculating the total loss of the discriminator by using binary cross entropy loss, and training the network by using the high-resolution image training set and the low-resolution image training set;
and step five, testing the super-resolution generation countermeasure network based on the attention and UNet network by using the high-resolution image test set and the low-resolution image test set obtained in the step one and the step two, and evaluating the performance of the network.
2. The super resolution method for generating a countermeasure network based on an attention and UNet network according to claim 1,
in the first step, the obtained data set is divided into a high-resolution image training set and a high-resolution image test set as required, and the images in the training set are randomly cropped to a suitable size;
in the second step, bicubic downsampling is performed on the training set and test set from the first step using a Bicubic function to obtain a low-resolution picture training set and a low-resolution picture test set, which are randomly cropped to a suitable size.
3. The super resolution method for generating a countermeasure network based on an attention and UNet network according to claim 1 or 2,
in the third step, a generator for generating a countermeasure network based on super resolution of an attention and UNet network is constructed, and the total number of the generator is seven:
the first layer is a convolution layer of 64 convolution kernels; the second layer is a PRelu active layer; the third layer is 5 CAB blocks with the same structure, and the fourth layer is a convolution layer containing 64 convolution kernels with the size of 3 multiplied by 3 and the step length of 1; the fifth layer is a layer batch normalization layer; the sixth layer is a jump connection layer; the seventh layer is two upsampled layers and one convolutional layer.
4. The super resolution method for generating a countermeasure network based on an attention and UNet network according to claim 3,
the CAB block comprises a seven-layer structure:
the first layer is a convolutional layer containing 64 convolution kernels; the second layer is a batch normalization layer; the third layer is a PReLU activation layer; the fourth layer is a convolutional layer containing 64 convolution kernels; the fifth layer is a channel attention layer; the sixth layer is a batch normalization layer; the seventh layer is a skip-connection layer summed element-wise;
the channel attention layer structure in the CAB block is:
the first layer is a global average pooling layer; the second layer is a convolutional layer containing 4 convolution kernels; the third layer is a PReLU activation layer; the fourth layer is a convolutional layer containing 64 convolution kernels; the fifth layer is a sigmoid activation layer; the sixth layer is a skip-connection layer that multiplies element-wise.
5. The super resolution method for generating a countermeasure network based on an attention and UNet network according to claim 4,
in the third step, a discriminator for generating a countermeasure network based on super resolution of the attention and UNet network is constructed, eight layers are provided,
the first horizontal layer is a convolution block consisting of a spectral normalization layer and a convolutional layer containing 64 convolution kernels, whose output data are copied and passed respectively to the attention block at the same vertical level and to the convolution block of the next horizontal layer; the second and third layers have the same structure and outputs as the first layer; the fourth horizontal layer is a convolution block that passes its output to the next convolution block and to the third vertical connecting block; the fifth horizontal layer passes data to the three attention blocks in the form of gating signals; the sixth and seventh horizontal layers consist of an input data stream, a spectral normalization layer and an output data stream; the eighth layer consists of three convolution blocks.
6. The super resolution method for generating a countermeasure network based on an attention and UNet network according to claim 5,
the attention block has the following structure: the first layer adds the gating signal and the UNet input features element-wise; the second layer is a ReLU layer; the third layer is a sigmoid function layer; the fourth layer is a resampling layer; the last layer multiplies the fourth layer's result element-wise with the UNet input features;
the structure of the connecting block CB is: the first layer upsamples the data input from the UNet path by bilinear interpolation; the second layer is a spectral normalization layer; the third layer concatenates the input data obtained from the AB with the data obtained after the second layer.
7. The super resolution method for generating a countermeasure network based on an attention and UNet network according to claim 6,
in the fourth step, the result is normalized with a sigmoid function, the total loss of the discriminator is calculated using binary cross-entropy loss, and the network is trained:
the loss L_{SR} of the super-resolution generation countermeasure network based on the attention and UNet network consists of pixel loss, content loss and adversarial loss:

L_{SR} = L_G + L_D    (1)

where L_G denotes the loss function of the generator and L_D is the discriminator loss; the generator loss L_G of AU-SRGAN is:

L_G = L_{MSE} + \gamma L_{VGG} + \delta L_{GEN}    (2)

where \gamma and \delta are hyperparameters, L_{MSE} is the pixel loss, L_{VGG} is the content loss of the network, and L_{GEN} is the adversarial loss of the network.
8. The super resolution method for generating a countermeasure network based on an attention and UNet network according to claim 7,
said pixel loss L_{MSE} uses the pixel-level standard mean square error, expressed as:

L_{MSE} = \frac{1}{r^2 W H} \sum_{x=1}^{rW} \sum_{y=1}^{rH} \left( I^{HR}_{x,y} - G_{\theta_G}(I^{LR})_{x,y} \right)^2    (3)

where r is the downsampling factor and W and H denote the width and height of the image respectively; I^{LR} and I^{HR} denote the low-resolution and high-resolution PET images respectively, G_{\theta_G}(I^{LR}) denotes the super-resolution image produced by the generator, and the subscripts x, y index a specific pixel;
the content loss, computed from the feature maps of the j-th convolution before the i-th max-pooling layer of the VGG19 network, is denoted L_{VGG} and can be expressed as:

L_{VGG} = \frac{1}{W_{i,j} H_{i,j}} \sum_{x=1}^{W_{i,j}} \sum_{y=1}^{H_{i,j}} \left( \phi_{i,j}(I^{HR})_{x,y} - \phi_{i,j}(G_{\theta_G}(I^{LR}))_{x,y} \right)^2    (4)

where W_{i,j} and H_{i,j} denote the dimensions of the corresponding feature maps in the VGG19 network, and \phi_{i,j} denotes the feature map of the j-th convolution before the i-th max-pooling layer of the VGG19 network; the adversarial loss L_{GEN}, defined from the probability output by the discriminator, is:

L_{GEN} = \sum_{n=1}^{N} -\log D_{\theta_D}(G_{\theta_G}(I^{LR}))    (5)

where D_{\theta_D}(G_{\theta_G}(I^{LR})) denotes the probability, as judged by the discriminator, that the image produced by the generator is a real image.
9. The super resolution method for generating a countermeasure network based on an attention and UNet network according to claim 8,
a UNet with an attention mechanism is adopted for the discriminator loss L_D of the discriminator of the super-resolution generation countermeasure network based on the attention and UNet network; the output of the UNet discriminator is a W × H matrix; the loss is expressed as:

L_D = -\mathbb{E}\left[\log D(x_r)\right] - \mathbb{E}\left[\log\left(1 - D(x_f)\right)\right]    (6)

where C is the output matrix, D = \sigma(C), x_r is real data and x_f is fake data.
10. The super resolution method for generating a countermeasure network based on an attention and UNet network according to claim 9,
and in the fifth step, the results are verified using the commonly used image-quality metrics PSNR and SSIM and compared respectively with the results of the bicubic interpolation method, the nearest-neighbor interpolation method and the SRGAN method.
CN202210941977.8A 2022-08-08 2022-08-08 Super-resolution method for generating countermeasure network based on attention and UNet network Pending CN115375544A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210941977.8A CN115375544A (en) 2022-08-08 2022-08-08 Super-resolution method for generating countermeasure network based on attention and UNet network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210941977.8A CN115375544A (en) 2022-08-08 2022-08-08 Super-resolution method for generating countermeasure network based on attention and UNet network

Publications (1)

Publication Number Publication Date
CN115375544A true CN115375544A (en) 2022-11-22

Family

ID=84063474

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210941977.8A Pending CN115375544A (en) 2022-08-08 2022-08-08 Super-resolution method for generating countermeasure network based on attention and UNet network

Country Status (1)

Country Link
CN (1) CN115375544A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117745725A (en) * 2024-02-20 2024-03-22 阿里巴巴达摩院(杭州)科技有限公司 Image processing method, image processing model training method, three-dimensional medical image processing method, computing device, and storage medium
CN117745725B (en) * 2024-02-20 2024-05-14 阿里巴巴达摩院(杭州)科技有限公司 Image processing method, image processing model training method, three-dimensional medical image processing method, computing device, and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113034417A (en) * 2021-04-07 2021-06-25 湖南大学 Image enhancement system and image enhancement method based on generation countermeasure network
CN113487503A (en) * 2021-07-01 2021-10-08 安徽大学 PET (positron emission tomography) super-resolution method for generating antagonistic network based on channel attention
CN114743245A (en) * 2022-04-11 2022-07-12 网易传媒科技(北京)有限公司 Training method of enhanced model, image processing method, device, equipment and medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZIHAO WEI等: "A-ESRGAN: TRAINING REAL-WORLD BLIND SUPER-RESOLUTION WITH ATTENTION U-NET DISCRIMINATORS", 《HTTPS://ARXIV.ORG/ABS/2112.10046》 *

Similar Documents

Publication Publication Date Title
CN110443867B (en) CT image super-resolution reconstruction method based on generation countermeasure network
CN111325686B (en) Low-dose PET three-dimensional reconstruction method based on deep learning
KR102210474B1 (en) Positron emission tomography system and imgae reconstruction method using the same
CN108492269B (en) Low-dose CT image denoising method based on gradient regular convolution neural network
CN110753935A (en) Dose reduction using deep convolutional neural networks for medical imaging
CN108898642A (en) A kind of sparse angular CT imaging method based on convolutional neural networks
CN107871332A (en) A kind of CT based on residual error study is sparse to rebuild artifact correction method and system
Liang et al. Metal artifact reduction for practical dental computed tomography by improving interpolation‐based reconstruction with deep learning
CN114092330A (en) Lightweight multi-scale infrared image super-resolution reconstruction method
CN103559728B (en) PET image maximum posterior reconstruction method based on united prior model with dissection function
CN112435164B (en) Simultaneous super-resolution and denoising method for generating low-dose CT lung image based on multiscale countermeasure network
CN112396672B (en) Sparse angle cone-beam CT image reconstruction method based on deep learning
CN102184559B (en) Particle filtering-based method of reconstructing static PET (Positron Emission Tomograph) images
Shao et al. A learned reconstruction network for SPECT imaging
CN115187689A (en) Swin-Transformer regularization-based PET image reconstruction method
CN112561799A (en) Infrared image super-resolution reconstruction method
CN115601268A (en) LDCT image denoising method based on multi-scale self-attention generation countermeasure network
CN114358285A (en) PET system attenuation correction method based on flow model
CN113160057A (en) RPGAN image super-resolution reconstruction method based on generation countermeasure network
CN116385317B (en) Low-dose CT image recovery method based on self-adaptive convolution and transducer mixed structure
CN116245969A (en) Low-dose PET image reconstruction method based on deep neural network
CN116664710A (en) CT image metal artifact unsupervised correction method based on transducer
CN116612009A (en) Multi-scale connection generation countermeasure network medical image super-resolution reconstruction method
CN115375544A (en) Super-resolution method for generating countermeasure network based on attention and UNet network
CN115880158A (en) Blind image super-resolution reconstruction method and system based on variational self-coding

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20221122