CN112150360A - IVUS image super-resolution reconstruction method based on dense residual error network - Google Patents


Info

Publication number
CN112150360A
CN112150360A
Authority
CN
China
Prior art keywords
image
residual error
dense residual
inputting
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010973041.4A
Other languages
Chinese (zh)
Inventor
汪友生
满开亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology
Priority to CN202010973041.4A
Publication of CN112150360A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4053 Super resolution, i.e. output image resolution higher than sensor resolution
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10132 Ultrasound image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20112 Image segmentation details
    • G06T2207/20132 Image cropping
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30101 Blood vessel; Artery; Vein; Vascular

Abstract

The invention discloses an IVUS image super-resolution reconstruction method based on a dense residual error network. To address the large parameter count and computational cost of the traditional dense residual error network, and the insufficient image feature extraction of the dense residual module, weight normalization is applied to the model weight parameters to accelerate model convergence. Convolution kernels of several sizes are used to strengthen the model's extraction of image features while reducing the number of model parameters. A global feature multiplexing module is added so that image information is reused multiple times and fully extracted. Whereas the original network gathers and extracts IVUS medical image features with the dense residual modules alone, the improved model derives them jointly from the global feature multiplexing module and the dense residual modules, so the features of the training medical images are extracted more comprehensively. Introducing weight normalization accelerates model convergence, effectively improving both the super-resolution reconstruction quality of the dense residual error network and the computational efficiency of the model.

Description

IVUS image super-resolution reconstruction method based on dense residual error network
Technical Field
The invention relates to the technical field of computer vision and image super-resolution reconstruction, and in particular to a super-resolution reconstruction method based on a dense residual error network (Residual Dense Network, RDN).
Background
With the continuous development of science and technology, high-definition display devices have become commonplace in daily life, and human demands on image clarity have grown accordingly. In a digital image, the determining factor of clarity is resolution, which reflects the amount of information the image contains: the higher the resolution, the richer the visual information. Because hardware devices and existing imaging technology are subject to many degradation factors, acquired images often have low resolution or defective content, and image super-resolution processing is favored by researchers for being intuitive, convenient, inexpensive and fast. Super-resolution reconstruction has been widely applied in medical diagnosis and treatment with good results, and it also has a broad range of applications and important research significance in fields such as public security, satellite remote sensing and video perception.
Medical image super-resolution aims to construct the corresponding high-resolution image from a low-resolution image, increasing the number of pixels and the amount of high-frequency information so that the image has clear edges and orderly textures and shows more detail. The research goal of image restoration is to fill in lost information or recover the texture of blurred objects, so that the restored image matches human visual perception: restoration traces are hard to perceive, edges remain continuous and smooth, and textures remain consistent and orderly. The research goal of image super-resolution is to visually optimize a degraded image and reconstruct its missing information. Techniques such as deep learning and convolutional neural networks allow super-resolution reconstruction with richer textures. Compared with traditional methods (interpolation-based and reconstruction-based), deep-learning-based reconstruction is markedly better under the internationally recognized super-resolution evaluation indices PSNR (peak signal-to-noise ratio) and SSIM (structural similarity). Deep-learning-based super-resolution reconstruction builds a high-resolution image from the deep features of the image together with prior knowledge obtained by analyzing large amounts of data.
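The PSNR index mentioned above follows directly from the mean squared error between two images. A minimal NumPy sketch (the 8-bit peak value of 255 is an assumption for illustration; the patent does not state the bit depth used):

```python
import numpy as np

def psnr(reference: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no error
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: a constant error of 10 gray levels on an 8-bit image.
a = np.zeros((96, 96))
b = np.full((96, 96), 10.0)
print(round(psnr(a, b), 2))  # -> 28.13
```

A higher PSNR corresponds to a lower mean squared error against the reference image, which is why it is used in step 8 below as the training feedback signal.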
RDN is one of the leading image super-resolution reconstruction models at present, with advantages such as a simple structure and multi-layer reuse of the input image information; however, it also has shortcomings that prevent its application in many practical scenarios. These shortcomings fall into two main aspects. First, the large amount of computation and the large number of parameters make the training process very time-consuming, so reconstruction efficiency is low and the model cannot run on small-scale computing devices. Second, the reconstruction capability of the model is insufficient: it cannot extract enough prior information from the training samples, and when the training data are severely unevenly distributed, the reconstruction quality suffers badly. Given that current improved image reconstruction algorithms find it hard to balance speed and accuracy, designing an RDN-based image reconstruction method with a good super-resolution reconstruction effect and fewer parameters and computations has important academic significance and practical value.
Disclosure of Invention
In view of the technical problems described in the background art, the invention improves the RDN image reconstruction algorithm in terms of both reconstruction quality and computation, and provides an IVUS image super-resolution reconstruction method based on a dense residual error network. A convolutional layer extracts the image features and expands the number of feature maps. The feature maps are fed through a series of dense residual modules to extract depth features; the dense residual connections reuse image information across multiple layers and alleviate the vanishing gradient problem in deep learning. A pixel shuffling method then restores one part of the high-definition image; at the same time, the output feature maps of all the dense residual modules are fed into a global feature multiplexing module to reuse the image information once more, and the pixel shuffling method restores the remaining part of the high-definition image. Finally, the two parts are added to obtain the reconstructed image. FIG. 1 is a flowchart of the method, and FIG. 2 is a network structure diagram of the method.
The method comprises the following steps:
Step 1, use the IVUS intravascular ultrasound super-resolution reconstruction data set, which comprises an image training set and an image testing set;
Step 2, crop all low-resolution images I_LR in the IVUS super-resolution training set into low-definition image blocks B_LR of 48×48 pixels, and likewise crop the corresponding high-resolution images I_HR into high-definition image blocks B_HR of 96×96 pixels;
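Steps 1-2 amount to cutting aligned patch pairs at a scale factor of 2: each 48×48 low-resolution block corresponds to the 96×96 high-resolution block at twice the coordinates. A sketch of that pairing, assuming NumPy arrays of shape (H, W); the random-offset sampling is an illustrative assumption, not the patent's exact cropping scheme:

```python
import numpy as np

def crop_pair(lr_img, hr_img, lr_size=48, scale=2, rng=None):
    """Cut one aligned (LR, HR) patch pair; the HR patch covers scale x the LR patch."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = lr_img.shape
    y = int(rng.integers(0, h - lr_size + 1))
    x = int(rng.integers(0, w - lr_size + 1))
    b_lr = lr_img[y:y + lr_size, x:x + lr_size]
    b_hr = hr_img[y * scale:(y + lr_size) * scale,
                  x * scale:(x + lr_size) * scale]
    return b_lr, b_hr

lr = np.zeros((256, 256))   # hypothetical LR image
hr = np.zeros((512, 512))   # its 2x HR counterpart
b_lr, b_hr = crop_pair(lr, hr)
print(b_lr.shape, b_hr.shape)  # (48, 48) (96, 96)
```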
Step 3, read the cropped low-definition image blocks B_LR sequentially into the network model, whose specific structure is shown in FIG. 2; a 3×3 convolutional layer extracts feature information and expands the number of feature maps to 32;
Step 4, input the feature maps extracted in the previous step into the improved dense residual module for depth feature extraction (the feature maps input to the dense residual module are denoted S_1);
Step 4.1, input the feature maps into the dense residual layer (the feature maps input to the dense residual layer are denoted C_1). Specifically, a 1×1 convolutional layer first expands the number of feature maps to 96; the result then enters an activation function (ReLU) layer, which prunes the model parameters; a 3×3 convolutional layer then extracts 16 depth feature maps, denoted C_2; finally, the input feature maps C_1 of the dense residual layer and the depth-extracted feature maps C_2 are concatenated.
Step 4.2, repeat the operation of step 4.1 five more times to finally obtain 128 concatenated feature maps; a 1×1 convolutional layer then screens and retains 32 of them, denoted S_2.
Step 4.3, add the feature maps S_1 and S_2 within the dense residual module to complete the residual operation (the 32 feature maps after the residual operation are denoted R_1).
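The channel arithmetic of steps 4.1-4.3 can be checked with a small bookkeeping sketch: starting from 32 feature maps, each dense layer concatenates 16 new maps, six passes in total give 128, the 1×1 convolution screens back down to 32, and the residual addition leaves the count unchanged. This is pure channel counting, not an actual network:

```python
def dense_residual_channels(c_in=32, growth=16, layers=6, screened=32):
    """Track feature-map counts through one dense residual module."""
    c = c_in
    for _ in range(layers):      # step 4.1 run once, then repeated five more times
        c += growth              # each dense layer concatenates 16 new maps
    concatenated = c             # step 4.2: 128 combined feature maps
    after_1x1 = screened         # 1x1 conv screens and keeps 32 of them
    after_residual = after_1x1   # step 4.3: adding S1 + S2 keeps the count at 32
    return concatenated, after_residual

print(dense_residual_channels())  # -> (128, 32)
```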
Step 5, repeat the operation of step 4 seven times to obtain R_2, R_3, ..., R_8; input the feature maps R_8 produced after all 8 dense residual modules into the pixel shuffling module.
Step 5.1, input the feature maps R_8 into a 1×1 convolutional layer to reduce their number to 12, denoted P_1.
Step 5.2, input P_1 into a PixelShuffle layer to obtain a 96×96-pixel high-definition image HR_1.
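The PixelShuffle operation of step 5.2 rearranges channels into spatial positions: 12 feature maps of 48×48 become 3 maps of 96×96 at scale factor 2, since 12 / 2² = 3. A NumPy sketch that mirrors the channel ordering of the common PixelShuffle implementation (an assumption; the patent does not spell out the ordering):

```python
import numpy as np

def pixel_shuffle(x: np.ndarray, r: int) -> np.ndarray:
    """Rearrange (C*r*r, H, W) feature maps into (C, H*r, W*r)."""
    c, h, w = x.shape
    assert c % (r * r) == 0, "channel count must be divisible by r^2"
    out_c = c // (r * r)
    x = x.reshape(out_c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)        # (out_c, H, r, W, r)
    return x.reshape(out_c, h * r, w * r)

p1 = np.zeros((12, 48, 48))               # step 5.1 output: 12 maps of 48x48
hr1 = pixel_shuffle(p1, 2)
print(hr1.shape)  # (3, 96, 96)
```

Because the operation is a pure rearrangement, it adds resolution without adding parameters, which fits the patent's goal of a lightweight model.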
Step 6, input the feature maps R_1, R_2, ..., R_8 into the global feature multiplexing module.
Step 6.1, concatenate all feature maps R_1, R_2, ..., R_8; input the resulting 256 feature maps into a 1×1 convolutional layer, which screens and retains 32 of them, denoted G.
Step 6.2, input G into the pixel shuffling module: a 1×1 convolutional layer reduces the number of feature maps to 12, denoted P_2; then input P_2 into a PixelShuffle layer to obtain a 96×96-pixel high-definition image HR_2.
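The global feature multiplexing of step 6.1 is a channel-wise concatenation of the eight module outputs followed by a 1×1 screening convolution: 8 × 32 = 256 channels in, 32 out. The concatenation itself, sketched in NumPy (the screening convolution is omitted; only the resulting channel count is shown):

```python
import numpy as np

# Outputs R1..R8 of the eight dense residual modules: 32 maps of 48x48 each.
r_outputs = [np.zeros((32, 48, 48)) for _ in range(8)]

# Step 6.1: concatenate along the channel axis before the 1x1 screening conv.
g_input = np.concatenate(r_outputs, axis=0)
print(g_input.shape)  # (256, 48, 48)
```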
Step 7, apply weight normalization to all convolution operations performed in the above steps.
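Weight normalization (step 7) reparameterizes each convolution weight w as w = g · v / ‖v‖, decoupling the weight's direction v from its magnitude g so the two can be learned separately, which speeds up convergence. A minimal NumPy sketch of the reparameterization (a generic illustration of the technique, not the patent's exact per-layer configuration):

```python
import numpy as np

def weight_norm(v: np.ndarray, g: float) -> np.ndarray:
    """Reparameterize a weight tensor: w = g * v / ||v||."""
    return g * v / np.linalg.norm(v)

v = np.array([3.0, 4.0])       # direction parameter, norm 5
w = weight_norm(v, g=2.0)      # magnitude fixed to g = 2
print(w, np.linalg.norm(w))    # [1.2 1.6] 2.0
```

Whatever direction v takes during training, the resulting w always has norm exactly g, so the effective magnitude of each filter is controlled by a single learned scalar.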
Step 8, add HR_1 and HR_2 to obtain the final reconstructed image B_SR; evaluate B_SR against B_HR with the evaluation index peak signal-to-noise ratio (PSNR), and update the network model weights by backpropagation.
Step 9, repeat steps 3 to 8 for 100 times to obtain the trained super-resolution reconstruction model.
Step 10, input the image to be reconstructed into the trained super-resolution reconstruction model; the output reconstructed image is the reconstruction result.
The invention adopts a super-resolution reconstruction method based on a dense residual error network, so that the input image information can be fully extracted and reused across multiple layers. The original dense residual network model uses only 3×3 convolutional layers and no other optimization or acceleration, so its parameter count is huge and training is slow. By adding 1×1 convolutional layers and weight normalization, the method reduces the number of parameters of the network model and accelerates its convergence, making the model light-weight and simple to train, which effectively increases training speed and improves the quality of the reconstructed image.
Drawings
FIG. 1 is a flow chart of the improved dense residual network super-resolution reconstruction method;
FIG. 2 is a network structure diagram of the improved dense residual network super-resolution reconstruction method;
FIG. 3 is the reconstruction result of image 0055 of the IVUS dataset by the model before improvement;
FIG. 4 is the reconstruction result of image 0055 of the IVUS dataset by the improved model.
Detailed Description
The invention is realized by adopting the following technical means:
A super-resolution reconstruction method based on a dense residual error network. A convolutional layer extracts the image features and expands the number of feature maps. The feature maps are fed through a series of dense residual modules to extract depth features; the dense residual connections reuse image information across multiple layers and alleviate the vanishing gradient problem in deep learning. A pixel shuffling method then restores one part of the high-definition image; at the same time, the output feature maps of all the dense residual modules are fed into a global feature multiplexing module to reuse the image information once more, and the pixel shuffling method restores the remaining part of the high-definition image. Finally, the two parts are added to obtain the reconstructed image.
The super-resolution reconstruction method based on the dense residual error network comprises the following steps:
Step 1, use the IVUS intravascular ultrasound super-resolution reconstruction data set, which comprises an image training set and an image testing set;
Step 2, crop all low-resolution images I_LR in the IVUS super-resolution training set into low-definition image blocks B_LR of 48×48 pixels, and likewise crop the corresponding high-resolution images I_HR into high-definition image blocks B_HR of 96×96 pixels;
Step 3, read the cropped low-definition image blocks B_LR sequentially into the network model, whose specific structure is shown in FIG. 2; a 3×3 convolutional layer extracts feature information and expands the number of feature maps to 32;
Step 4, input the feature maps extracted in the previous step into the improved dense residual module for depth feature extraction (the feature maps input to the dense residual module are denoted S_1);
Step 4.1, input the feature maps into the dense residual layer (the feature maps input to the dense residual layer are denoted C_1). Specifically, a 1×1 convolutional layer first expands the number of feature maps to 96; the result then enters an activation function (ReLU) layer, which prunes the model parameters; a 3×3 convolutional layer then extracts 16 depth feature maps, denoted C_2; finally, the input feature maps C_1 of the dense residual layer and the depth-extracted feature maps C_2 are concatenated.
Step 4.2, repeat the operation of step 4.1 five more times to finally obtain 128 concatenated feature maps; a 1×1 convolutional layer then screens and retains 32 of them, denoted S_2.
Step 4.3, add the feature maps S_1 and S_2 within the dense residual module to complete the residual operation (the 32 feature maps after the residual operation are denoted R_1).
Step 5, repeat the operation of step 4 seven times to obtain R_2, R_3, ..., R_8; input the feature maps R_8 produced after all 8 dense residual modules into the pixel shuffling module.
Step 5.1, input the feature maps R_8 into a 1×1 convolutional layer to reduce their number to 12, denoted P_1.
Step 5.2, input P_1 into a PixelShuffle layer to obtain a 96×96-pixel high-definition image HR_1.
Step 6, input the feature maps R_1, R_2, ..., R_8 into the global feature multiplexing module.
Step 6.1, concatenate all feature maps R_1, R_2, ..., R_8; input the resulting 256 feature maps into a 1×1 convolutional layer, which screens and retains 32 of them, denoted G.
Step 6.2, input G into the pixel shuffling module: a 1×1 convolutional layer reduces the number of feature maps to 12, denoted P_2; then input P_2 into a PixelShuffle layer to obtain a 96×96-pixel high-definition image HR_2.
Step 7, apply weight normalization to all convolution operations performed in the above steps.
Step 8, add HR_1 and HR_2 to obtain the final reconstructed image B_SR; evaluate B_SR against B_HR with the evaluation index peak signal-to-noise ratio (PSNR), and update the network model weights by backpropagation.
Step 9, repeat steps 3 to 8 for 100 times to obtain the trained super-resolution reconstruction model.
Step 10, input the image to be reconstructed into the trained super-resolution reconstruction model; the output reconstructed image is the reconstruction result.
FIG. 3 shows the reconstruction result of image 0055 of the IVUS dataset by the model before improvement, with a PSNR of 42.216 dB, an SSIM of 0.9920, and 2.3M total model parameters; FIG. 4 shows the reconstruction result of image 0055 of the IVUS dataset by the improved model, with a PSNR of 42.373 dB, an SSIM of 0.9923, and 1.07M total model parameters. The method therefore reduces the computation and time cost of reconstruction considerably while achieving a better reconstruction effect.

Claims (4)

1. A super-resolution reconstruction method based on a dense residual error network is characterized in that: the method comprises the following steps of,
Step 1, use the IVUS intravascular ultrasound super-resolution reconstruction data set, which comprises an image training set and an image testing set;
Step 2, crop all low-resolution images I_LR in the IVUS super-resolution training set into low-definition image blocks B_LR of 48×48 pixels, and likewise crop the corresponding high-resolution images I_HR into high-definition image blocks B_HR of 96×96 pixels;
Step 3, read the cropped low-definition image blocks B_LR sequentially into the network model; a 3×3 convolutional layer extracts feature information and expands the number of feature maps to 32;
Step 4, input the feature maps extracted in the previous step into the improved dense residual module for depth feature extraction; the feature maps input to the dense residual module are denoted S_1;
Step 5, repeat the operation of step 4 seven times to obtain R_2, R_3, ..., R_8; input the feature maps R_8 produced after all 8 dense residual modules into the pixel shuffling module;
Step 6, input the feature maps R_1, R_2, ..., R_8 into the global feature multiplexing module;
Step 7, apply weight normalization to all convolution operations executed in steps 1-6;
Step 8, add HR_1 and HR_2 to obtain the final reconstructed image B_SR; evaluate B_SR against B_HR with the evaluation index peak signal-to-noise ratio, and update the network model weights by backpropagation;
Step 9, repeat steps 3 to 8 for 100 times to obtain the trained super-resolution reconstruction model;
Step 10, input the image to be reconstructed into the trained super-resolution reconstruction model; the output reconstructed image is the reconstruction result.
2. The dense residual network-based super-resolution reconstruction method according to claim 1, wherein in step 4:
Step 4.1, input the feature maps into the dense residual layer; the feature maps input to the dense residual layer are denoted C_1. Specifically, a 1×1 convolutional layer first expands the number of feature maps to 96; the result then enters an activation function ReLU layer, which prunes the model parameters; a 3×3 convolutional layer then extracts 16 depth feature maps, denoted C_2; finally, the input feature maps C_1 of the dense residual layer and the depth-extracted feature maps C_2 are concatenated;
Step 4.2, repeat the operation of step 4.1 five more times to finally obtain 128 concatenated feature maps; a 1×1 convolutional layer then screens and retains 32 of them, denoted S_2;
Step 4.3, add the feature maps S_1 and S_2 within the dense residual module to complete the residual operation; the 32 feature maps after the residual operation are denoted R_1.
3. The dense residual network-based super-resolution reconstruction method according to claim 1, wherein in step 5:
Step 5.1, input the feature maps R_8 into a 1×1 convolutional layer to reduce their number to 12, denoted P_1;
Step 5.2, input P_1 into a PixelShuffle layer to obtain a 96×96-pixel high-definition image HR_1.
4. The dense residual network-based super-resolution reconstruction method according to claim 1, wherein in step 6:
Step 6.1, concatenate all feature maps R_1, R_2, ..., R_8; input the resulting 256 feature maps into a 1×1 convolutional layer, which screens and retains 32 of them, denoted G;
Step 6.2, input G into the pixel shuffling module: a 1×1 convolutional layer reduces the number of feature maps to 12, denoted P_2; then input P_2 into a PixelShuffle layer to obtain a 96×96-pixel high-definition image HR_2.
CN202010973041.4A 2020-09-16 2020-09-16 IVUS image super-resolution reconstruction method based on dense residual error network Pending CN112150360A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010973041.4A CN112150360A (en) 2020-09-16 2020-09-16 IVUS image super-resolution reconstruction method based on dense residual error network


Publications (1)

Publication Number Publication Date
CN112150360A true CN112150360A (en) 2020-12-29

Family

ID=73892972

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010973041.4A Pending CN112150360A (en) 2020-09-16 2020-09-16 IVUS image super-resolution reconstruction method based on dense residual error network

Country Status (1)

Country Link
CN (1) CN112150360A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112700366A (en) * 2021-01-04 2021-04-23 北京工业大学 Vascular pseudo-color image reconstruction method based on IVUS image
CN114820302A (en) * 2022-03-22 2022-07-29 桂林理工大学 Improved image super-resolution algorithm based on residual dense CNN and edge enhancement

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170046816A1 (en) * 2015-08-14 2017-02-16 Sharp Laboratories Of America, Inc. Super resolution image enhancement technique
CN106991646A (en) * 2017-03-28 2017-07-28 福建帝视信息科技有限公司 A kind of image super-resolution method based on intensive connection network
CN108288251A (en) * 2018-02-11 2018-07-17 深圳创维-Rgb电子有限公司 Image super-resolution method, device and computer readable storage medium
CN109064405A (en) * 2018-08-23 2018-12-21 武汉嫦娥医学抗衰机器人股份有限公司 A kind of multi-scale image super-resolution method based on dual path network
KR20190040586A (en) * 2017-10-11 2019-04-19 인하대학교 산학협력단 Method and apparatus for reconstructing single image super-resolution based on artificial neural network


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
JIAHUI YU et al.: "Wide Activation for Efficient and Accurate Image Super-Resolution", Computer Vision and Pattern Recognition, pages 1-10 *
YULUN ZHANG et al.: "Residual Dense Network for Image Super-Resolution", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2472-2481 *
ZHANG XIN: "Research on Super-Resolution and Restoration Methods in Image Processing", China Doctoral Dissertations Full-text Database (Information Science and Technology), pages 138-11 *
LI MENGXING: "Research on Low-Quality Image Enhancement and Super-Resolution Restoration Based on Convolutional Neural Networks", China Master's Theses Full-text Database (Information Science and Technology), pages 138-1365 *
WANG JIELONG: "Research on Image Super-Resolution Algorithms Based on DCNN", China Master's Theses Full-text Database (Information Science and Technology), pages 138-601 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination