CN112070676B - Picture super-resolution reconstruction method of double-channel multi-perception convolutional neural network - Google Patents
Picture super-resolution reconstruction method of double-channel multi-perception convolutional neural network Download PDFInfo
- Publication number
- CN112070676B CN112070676B CN202010946074.XA CN202010946074A CN112070676B CN 112070676 B CN112070676 B CN 112070676B CN 202010946074 A CN202010946074 A CN 202010946074A CN 112070676 B CN112070676 B CN 112070676B
- Authority
- CN
- China
- Prior art keywords
- layer
- picture
- reconstruction
- network
- perception
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 39
- 238000013527 convolutional neural network Methods 0.000 title claims abstract description 14
- 239000010410 layer Substances 0.000 claims abstract description 64
- 230000004927 fusion Effects 0.000 claims abstract description 15
- 239000011229 interlayer Substances 0.000 claims abstract description 9
- 238000000605 extraction Methods 0.000 claims description 30
- 230000008569 process Effects 0.000 claims description 12
- 238000007499 fusion processing Methods 0.000 claims description 3
- 238000013507 mapping Methods 0.000 claims description 3
- 238000012546 transfer Methods 0.000 claims description 3
- 230000001105 regulatory effect Effects 0.000 claims 1
- 238000012360 testing method Methods 0.000 abstract description 7
- 238000012545 processing Methods 0.000 abstract description 2
- 238000012549 training Methods 0.000 description 16
- 230000006870 function Effects 0.000 description 11
- 230000000694 effects Effects 0.000 description 6
- 238000005070 sampling Methods 0.000 description 5
- 238000010586 diagram Methods 0.000 description 3
- 238000010606 normalization Methods 0.000 description 3
- 230000008447 perception Effects 0.000 description 3
- 230000009286 beneficial effect Effects 0.000 description 2
- 230000003321 amplification Effects 0.000 description 1
- 238000013459 approach Methods 0.000 description 1
- 238000013528 artificial neural network Methods 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 230000008034 disappearance Effects 0.000 description 1
- 239000003814 drug Substances 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 238000003384 imaging method Methods 0.000 description 1
- 238000012417 linear regression Methods 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
- 230000007306 turnover Effects 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4046—Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention provides a picture super-resolution reconstruction method based on a dual-channel multi-perception convolutional neural network (DMCN), relating to the technical field of image processing. The method uses dual convolution channels with different kernel sizes, combined with local dense connections, to obtain multiple perceptions of the picture's feature information, and an interlayer fusion structure with a convolutional adjusting function restores more accurate fused information. The network is trained on the DIV2K dataset and, using only 8 DMRB modules, outperforms state-of-the-art reconstruction algorithms such as MSRN and EDSR on several benchmark datasets. The DMCN reconstruction results contain richer high-frequency detail and are closer to the original pictures; the DMCN network structure perceives the information in a picture more comprehensively and has stronger reconstruction capability.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a picture super-resolution reconstruction method of a double-channel multi-perception convolutional neural network.
Background
Picture super-resolution reconstruction aims to reconstruct blurred low-resolution (LR) pictures into clearer high-resolution (HR) pictures. It can alleviate problems such as image blurring and noise interference in fields such as video surveillance, medicine and satellite imaging. Common picture super-resolution methods include interpolation methods, methods based on sparse representation, local linear regression methods, and methods based on deep learning.
Recent studies have shown that deep neural networks can significantly improve the quality of single-image super-resolution, and current research tends to use deeper convolutional neural networks to improve performance. However, blindly increasing network depth does not effectively improve results; worse, as depth grows, more problems occur during training and more training tricks are required.
Among existing picture super-resolution reconstruction techniques, the multi-scale residual network (MSRN) was proposed to fully exploit image features: multi-scale residual blocks (MSRBs) acquire image features at different scales (local multi-scale features), and the outputs of all MSRBs are combined for global feature fusion (HFFS, using a 1×1 convolution kernel as a bottleneck layer). Combining the local multi-scale features with the global features exploits the LR image features to the greatest extent and addresses the problem of features disappearing during transmission; a simple and efficient reconstruction structure also makes multi-scale magnification easy to implement.
EDSR differs structurally from SRResNet mainly by removing the batch normalization (BN) operations. Because a batch normalization layer consumes the same amount of memory as the convolution layer before it, removing BN lets EDSR stack more network layers or extract more features per layer with the same computing resources, yielding better performance. EDSR optimizes the network model with an L1-norm loss function. During training, a low-factor upsampling model is trained first, and its parameters are then used to initialize the high-factor upsampling model; this reduces the training time of the high-factor model while improving its results. The middle part of MDSR resembles EDSR, except that different pre-trained models are placed in front of the network to reduce the differences among inputs at different scale factors, and the upsampling structures for the different factors are arranged in parallel at the end to produce outputs at those factors.
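The BN-removed residual block described above can be sketched as follows. This is a generic conv-ReLU-conv block with an identity skip, assumed to follow the typical EDSR-style design; it is not the patent's own module:

```python
import torch
import torch.nn as nn

class ResBlockNoBN(nn.Module):
    """EDSR-style residual block: conv-ReLU-conv with the batch-norm
    layers removed, plus an identity skip connection."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        # residual learning: the block only models the correction to x
        return x + self.body(x)

block = ResBlockNoBN(64)
y = block(torch.randn(1, 64, 48, 48))
```

Because no BN layer stores per-channel statistics, the block's memory cost is just its two convolutions, which is what allows EDSR to stack more of them.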
The existing methods have the following problems. First, the performance of deep super-resolution networks depends heavily on network width and depth, and newer approaches tend to use wider networks with more layers to enhance reconstruction. However, the ever-growing network scale raises training difficulty in step: the network must be designed more carefully to avoid problems such as vanishing gradients, while the time and space complexity of computation multiply and the dependence on GPU hardware grows. Second, most super-resolution networks use residual stacking structures similar to those in ResNet to improve training, but such simple residual structures cannot fully extract image features. Although MSRN uses a multi-scale method to extract picture features and improves reconstruction, its MSRB module still cannot extract the complete features of the picture, in particular the fused feature information obtained through dense channel connections, and its extraction of the global features of the picture is insufficient.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a picture super-resolution reconstruction method of a double-channel multi-perception convolutional neural network.
In order to solve the technical problems, the invention adopts the following technical scheme:
a picture super-resolution reconstruction method of a double-channel multi-perception convolutional neural network comprises the following steps:
Step 1: construct a dual-channel multi-perception residual module (DMRB) as the basic module of the reconstruction network;
the reconstruction network comprises a shallow layer feature extraction layer, a deep layer feature extraction layer and an amplifying reconstruction layer;
step 2: low resolution picture I of shallow feature extraction layer to input network LR The dimension is increased from the 3 feature dimension of the RGB picture to the 64 feature dimension of the deep feature extraction layer, the preliminary feature information of the picture is obtained, and the feature value X is output 0 The process is as follows: x is X 0 =H SFE (I LR ) Wherein H is SFE (-) represents a shallow feature extraction function;
step 3: x is X 0 Input into deep feature extraction layer, transfer between multi-layer double-channel multi-perception residual error module DMRB, continuously extract feature information, output each layer through adjusting layer (1*1 convolution), input into interlayer fusion layer, and finally promote feature extraction efficiency through residual error structure, output feature value X d The process is: x is X d =H DFE (X 0 ) Wherein H is DFE (-) represents a depth feature extraction function;
the adjusting layer adjusts the proportional relationship of each layer in the fusion process;
the interlayer fusion layer fuses the feature information output by each deep feature extraction layer;
step 4: amplifying the picture to a specific multiple through an amplifying reconstruction layer, wherein the process is as follows: i SR =H up_REC (X d ) Wherein H is up_REC () represents an amplifying and reconstructing function;
Step 5: denoting the entire network function as H_DMCN(·), the low-resolution picture I_LR is mapped to the high-resolution picture I_SR: I_SR = H_up_REC(H_DFE(H_SFE(I_LR))).
The beneficial effects of adopting the above technical scheme are as follows:
The invention provides a picture super-resolution reconstruction method based on a dual-channel multi-perception convolutional neural network (DMCN). It uses dual convolution channels with different kernel sizes, combined with local dense connections, to obtain multiple perceptions of the picture's feature information, and an interlayer fusion structure with a convolutional adjusting function restores more accurate fused information. The network is trained on the DIV2K dataset and, using only 8 DMRB modules, outperforms state-of-the-art reconstruction algorithms such as MSRN and EDSR on several benchmark datasets. The DMCN reconstruction results contain richer high-frequency detail and are closer to the original pictures; the DMCN network structure perceives the information in a picture more comprehensively and has stronger reconstruction capability.
Drawings
FIG. 1 is a flow chart of a picture super-resolution reconstruction method of a double-channel multi-perception convolutional neural network;
FIG. 2 is a diagram of a reconstruction network architecture according to the present invention;
FIG. 3 is a block diagram of a dual-channel multi-perception residual module according to the present invention;
fig. 4 is a schematic diagram of a comparison of quality in a 4-fold reconstruction of an embodiment of the present invention.
Detailed Description
The following describes the embodiments of the present invention in detail with reference to the drawings.
A picture super-resolution reconstruction method of a double-channel multi-perception convolutional neural network, as shown in figure 1, comprises the following steps:
Step 1: construct a dual-channel multi-perception residual module (DMRB) as the basic module of the reconstruction network; this module maximizes the perception of picture features and has a stronger ability to restore high-frequency information during reconstruction;
the reconstruction network is shown in fig. 2, and comprises a shallow layer feature extraction layer, a deep layer feature extraction layer and an amplifying reconstruction layer;
step 2: low resolution picture I of shallow feature extraction layer to input network LR The dimension is increased from the 3 feature dimension of the RGB picture to the 64 feature dimension of the deep feature extraction layer, the preliminary feature information of the picture is obtained, and the feature value X is output 0 The process is as follows: x is X 0 =H SFE (I LR ) Wherein H is SFE (-) represents a shallow feature extraction function;
step 3: x is X 0 Input into deep feature extraction layer, transfer between multi-layer double-channel multi-perception residual error module DMRB, continuously extract feature information, output each layer through adjusting layer (1*1 convolution), input into interlayer fusion layer, and finally promote feature extraction efficiency through residual error structure, output feature value X d The process is: x is X d =H DFE (X 0 ) Wherein H is DFE (-) represents a depth feature extraction function;
the adjusting layer adjusts the proportional relationship of each layer in the fusion process;
the interlayer fusion layer fuses the feature information output by each deep feature extraction layer;
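A minimal sketch of the adjusting and interlayer fusion layers described above, assuming eight DMRB outputs at 64 channels each, 1×1 adjusting convolutions on all outputs except the last, and a 1×1 bottleneck for the fusion (the exact fusion arithmetic is an assumption):

```python
import torch
import torch.nn as nn

class InterlayerFusion(nn.Module):
    """Sketch of the interlayer fusion: each DMRB output except the last
    passes through a 1x1 'adjusting' convolution, then all maps are
    concatenated and fused back to `channels` by a 1x1 bottleneck."""
    def __init__(self, n_blocks=8, channels=64):
        super().__init__()
        self.adjust = nn.ModuleList(
            [nn.Conv2d(channels, channels, 1) for _ in range(n_blocks - 1)]
        )
        self.bottleneck = nn.Conv2d(n_blocks * channels, channels, 1)

    def forward(self, outputs):  # outputs: list of DMRB feature maps
        adjusted = [conv(o) for conv, o in zip(self.adjust, outputs[:-1])]
        adjusted.append(outputs[-1])  # last output enters fusion unchanged
        return self.bottleneck(torch.cat(adjusted, dim=1))

fusion = InterlayerFusion(n_blocks=8, channels=64)
feats = [torch.randn(1, 64, 48, 48) for _ in range(8)]
x_d = fusion(feats)
```

The learnable 1×1 convolutions play the role of the adjusting layer: they let the network weight each DMRB output dynamically before fusion.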
step 4: amplifying the picture to a specific multiple through an amplifying reconstruction layer, wherein the process is as follows: i SR =H up_REC (X d ) Wherein H is up_REC () represents an amplifying and reconstructing function;
this embodiment uses sub-pixel convolution up-sampling.
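The sub-pixel convolution upsampling used in this embodiment can be sketched as follows, assuming a 4× factor and 64 input feature channels (the 3×3 kernel is an assumption):

```python
import torch
import torch.nn as nn

scale = 4  # reconstruction factor used in the embodiment

# Sub-pixel convolution H_up_REC: a convolution expands the 64 feature
# channels to 3 * scale^2, then PixelShuffle rearranges those channels
# into a 3-channel image that is `scale` times larger in each dimension.
up_rec = nn.Sequential(
    nn.Conv2d(64, 3 * scale ** 2, kernel_size=3, padding=1),
    nn.PixelShuffle(scale),
)

x_d = torch.randn(1, 64, 48, 48)
i_sr = up_rec(x_d)  # I_SR = H_up_REC(X_d): 48x48 -> 192x192
```

Sub-pixel convolution keeps all computation at the low resolution and performs the magnification as a cheap channel-to-space rearrangement at the very end.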
Step 5: denoting the entire network function as H_DMCN(·), the low-resolution picture I_LR is mapped to the high-resolution picture I_SR: I_SR = H_up_REC(H_DFE(H_SFE(I_LR))) = H_DMCN(I_LR).
The network is trained by the DIV2K data set, and under the condition that only 8 layers of DMRB modules are used, the test result of a plurality of reference data sets is better than that of the current most advanced reconstruction models.
The structure of the dual-channel multi-perception residual module DMRB is shown in fig. 3. The left and right feature extraction channels use 3×3 and 5×5 convolution kernels respectively. Different convolution kernels allow the convolution operations to obtain picture feature information at different scales; fusing this information and performing further feature extraction effectively enhances the perception capability of a deep structure. This idea was successfully applied in the GoogLeNet network, and a similar structure is used in the later MSRN network. The DMRB of the invention differs in that, besides the feature values output by the two convolution channels, it also fuses local dense connection information.
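A hedged sketch of a DMRB-like block with the 3×3/5×5 dual channels described above. The fusion by concatenation plus a 1×1 convolution and the placement of the residual skip are assumptions, and the local dense connections of the actual DMRB are omitted for brevity:

```python
import torch
import torch.nn as nn

class DMRBSketch(nn.Module):
    """Simplified dual-channel block: parallel 3x3 and 5x5 convolution
    channels, fused by concatenation and a 1x1 convolution, wrapped in a
    residual skip. The real DMRB also has local dense connections."""
    def __init__(self, channels=64):
        super().__init__()
        self.branch3 = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.branch5 = nn.Sequential(
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.ReLU(inplace=True),
        )
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        # the two kernel sizes perceive the picture at two scales
        f = torch.cat([self.branch3(x), self.branch5(x)], dim=1)
        return x + self.fuse(f)

y = DMRBSketch(64)(torch.randn(1, 64, 48, 48))
```

Padding of 1 and 2 keeps the two branches spatially aligned so their outputs can be concatenated channel-wise.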
In this embodiment, 800 pictures from the DIV2K dataset are used to train the convolutional network. The input pictures are RGB images cropped to 48×48 and are augmented by rotation, flipping and other transformations following the method of the EDSR network. The batch size is 16, for a total of 1000 iterations. Models for 2×, 3× and 4× reconstruction are trained separately. The training results are tested on the Set5, Set14, B100 and Urban100 benchmark datasets, with peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) as evaluation indices. Table 1 compares the present method with several classical SR methods.
Table 1 benchmark dataset test results
The underlined data in the table are the best results in the test; DMCN+ uses a geometric self-ensemble method to improve the test results. It can be seen that the proposed DMCN network obtains the best test data on most benchmark datasets.
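The PSNR evaluation index used in the table can be computed as follows; this is the standard definition for 8-bit images, not anything specific to the patent:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images in [0, peak]."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# toy example: a uniform error of 16 gray levels on a 4x4 patch
ref = np.zeros((4, 4))
degraded = np.full((4, 4), 16.0)
value = psnr(ref, degraded)
```

Higher PSNR means the reconstruction is closer to the ground-truth HR picture; SSIM complements it by measuring structural rather than pixel-wise fidelity.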
Fig. 4 compares the reconstruction effect of DMCN with several mainstream reconstruction algorithms such as VDSR and MSRN in the more difficult 4× reconstruction. In terms of subjective visual experience, the reconstructed result of DMCN clearly contains richer high-frequency detail and is closer to the original image. This result is mainly attributable to the DMCN network structure perceiving the information in the picture more comprehensively, giving it stronger reconstruction capability.
The foregoing description covers only the preferred embodiments of the present disclosure and the principles of the technology employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above technical features, but also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the spirit of the invention, for example solutions in which the above features are replaced by (but not limited to) features with similar functions disclosed in the embodiments of the present disclosure.
Claims (4)
1. A picture super-resolution reconstruction method of a double-channel multi-perception convolutional neural network, characterized by comprising the following steps:
Step 1: construct a dual-channel multi-perception residual module (DMRB) as the basic module of the reconstruction network;
step 2: low resolution picture I of shallow feature extraction layer to input network LR The dimension is increased from the 3 feature dimension of the RGB picture to the 64 feature dimension of the deep feature extraction layer, the preliminary feature information of the picture is obtained, and the feature value X is output 0 The process is as follows: x is X 0 =H SFE (I LR ) Wherein H is SFE (-) represents a shallow feature extraction function;
step 3: x is X 0 Input into deep feature extraction layer, transfer between multi-layer double-channel multi-perception residual error module DMRB, continuously extract feature information, output each layer through adjusting layer (1*1 convolution), input into interlayer fusion layer, and finally promote special through residual error structureThe sign extraction efficiency, output characteristic value X d The process is: x is X d =H DFE (X 0 ) Wherein H is DFE (-) represents a depth feature extraction function;
step 4: amplifying the picture to a specific multiple through an amplifying reconstruction layer, wherein the process is as follows: i SR =H up_REC (X d ) Wherein H is up_REC () represents an amplifying and reconstructing function;
Step 5: denoting the entire network function as H_DMCN(·), the low-resolution picture I_LR is mapped to the high-resolution picture I_SR: I_SR = H_up_REC(H_DFE(H_SFE(I_LR))).
2. The method for reconstructing the super-resolution of the picture of the two-channel multi-perception convolutional neural network according to claim 1, wherein the reconstruction network in the step 1 comprises a shallow feature extraction layer, a deep feature extraction layer and an amplifying reconstruction layer.
3. The method for reconstructing the super-resolution of the picture of the double-channel multi-perception convolutional neural network according to claim 1, wherein in the step 3, the adjusting layer adjusts the proportional relation of each layer in the fusion process; and the interlayer fusion layer fuses the characteristic information output by each deep characteristic extraction layer.
4. The method for reconstructing super resolution of a picture of a two-channel multi-perception convolutional neural network according to claim 3, characterized in that, in the interlayer fusion layer, except for the output X_n of the last layer, the outputs X_0 to X_{n-1} each pass through a 1×1 convolution layer when skip-connected to the fusion layer; the output of the last layer plays a fixed role in the fusion, while the other layers are dynamically adjusted by the convolution layers, thereby extracting accurate picture feature values.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010946074.XA CN112070676B (en) | 2020-09-10 | 2020-09-10 | Picture super-resolution reconstruction method of double-channel multi-perception convolutional neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010946074.XA CN112070676B (en) | 2020-09-10 | 2020-09-10 | Picture super-resolution reconstruction method of double-channel multi-perception convolutional neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112070676A CN112070676A (en) | 2020-12-11 |
CN112070676B true CN112070676B (en) | 2023-10-27 |
Family
ID=73663606
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010946074.XA Active CN112070676B (en) | 2020-09-10 | 2020-09-10 | Picture super-resolution reconstruction method of double-channel multi-perception convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112070676B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112950470B (en) * | 2021-02-26 | 2022-07-15 | 南开大学 | Video super-resolution reconstruction method and system based on time domain feature fusion |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106910161A (en) * | 2017-01-24 | 2017-06-30 | 华南理工大学 | A kind of single image super resolution ratio reconstruction method based on depth convolutional neural networks |
EP3319039A1 (en) * | 2016-11-07 | 2018-05-09 | UMBO CV Inc. | A method and system for providing high resolution image through super-resolution reconstruction |
CN108921786A (en) * | 2018-06-14 | 2018-11-30 | 天津大学 | Image super-resolution reconstructing method based on residual error convolutional neural networks |
WO2018221863A1 (en) * | 2017-05-31 | 2018-12-06 | Samsung Electronics Co., Ltd. | Method and device for processing multi-channel feature map images |
CN109509149A (en) * | 2018-10-15 | 2019-03-22 | 天津大学 | A kind of super resolution ratio reconstruction method based on binary channels convolutional network Fusion Features |
CN109903226A (en) * | 2019-01-30 | 2019-06-18 | 天津城建大学 | Image super-resolution rebuilding method based on symmetrical residual error convolutional neural networks |
WO2019120110A1 (en) * | 2017-12-20 | 2019-06-27 | 华为技术有限公司 | Image reconstruction method and device |
CN110276721A (en) * | 2019-04-28 | 2019-09-24 | 天津大学 | Image super-resolution rebuilding method based on cascade residual error convolutional neural networks |
CN111192200A (en) * | 2020-01-02 | 2020-05-22 | 南京邮电大学 | Image super-resolution reconstruction method based on fusion attention mechanism residual error network |
-
2020
- 2020-09-10 CN CN202010946074.XA patent/CN112070676B/en active Active
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3319039A1 (en) * | 2016-11-07 | 2018-05-09 | UMBO CV Inc. | A method and system for providing high resolution image through super-resolution reconstruction |
CN106910161A (en) * | 2017-01-24 | 2017-06-30 | 华南理工大学 | A kind of single image super resolution ratio reconstruction method based on depth convolutional neural networks |
WO2018221863A1 (en) * | 2017-05-31 | 2018-12-06 | Samsung Electronics Co., Ltd. | Method and device for processing multi-channel feature map images |
WO2019120110A1 (en) * | 2017-12-20 | 2019-06-27 | 华为技术有限公司 | Image reconstruction method and device |
CN108921786A (en) * | 2018-06-14 | 2018-11-30 | 天津大学 | Image super-resolution reconstructing method based on residual error convolutional neural networks |
CN109509149A (en) * | 2018-10-15 | 2019-03-22 | 天津大学 | A kind of super resolution ratio reconstruction method based on binary channels convolutional network Fusion Features |
CN109903226A (en) * | 2019-01-30 | 2019-06-18 | 天津城建大学 | Image super-resolution rebuilding method based on symmetrical residual error convolutional neural networks |
CN110276721A (en) * | 2019-04-28 | 2019-09-24 | 天津大学 | Image super-resolution rebuilding method based on cascade residual error convolutional neural networks |
CN111192200A (en) * | 2020-01-02 | 2020-05-22 | 南京邮电大学 | Image super-resolution reconstruction method based on fusion attention mechanism residual error network |
Non-Patent Citations (3)
Title |
---|
Enhanced deep residual networks for single image super-resolution; Lim B et al.; IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW); full text *
Image Super-resolution Algorithm Based on Dual-channel Convolutional Neural Networks; Yuantao Chen et al.; Applied Sciences; full text *
Depth image super-resolution reconstruction based on a pyramidal dual-channel convolutional neural network; Yu Shuxia et al.; Journal of Computer Applications; full text *
Also Published As
Publication number | Publication date |
---|---|
CN112070676A (en) | 2020-12-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Liu et al. | Video super-resolution based on deep learning: a comprehensive survey | |
Liang et al. | High-resolution photorealistic image translation in real-time: A laplacian pyramid translation network | |
CN109886871B (en) | Image super-resolution method based on channel attention mechanism and multi-layer feature fusion | |
CN110599401A (en) | Remote sensing image super-resolution reconstruction method, processing device and readable storage medium | |
CN110232653A (en) | The quick light-duty intensive residual error network of super-resolution rebuilding | |
CN109949223B (en) | Image super-resolution reconstruction method based on deconvolution dense connection | |
CN111340744B (en) | Attention double-flow depth network-based low-quality image down-sampling method and system | |
CN110322402B (en) | Medical image super-resolution reconstruction method based on dense mixed attention network | |
CN111932461A (en) | Convolutional neural network-based self-learning image super-resolution reconstruction method and system | |
CN111951164B (en) | Image super-resolution reconstruction network structure and image reconstruction effect analysis method | |
Luo et al. | Lattice network for lightweight image restoration | |
CN111986092B (en) | Dual-network-based image super-resolution reconstruction method and system | |
CN113781308A (en) | Image super-resolution reconstruction method and device, storage medium and electronic equipment | |
CN112614061A (en) | Low-illumination image brightness enhancement and super-resolution method based on double-channel coder-decoder | |
CN108460723B (en) | Bilateral total variation image super-resolution reconstruction method based on neighborhood similarity | |
Wang et al. | Underwater image super-resolution and enhancement via progressive frequency-interleaved network | |
Zhang et al. | Med-SRNet: GAN-based medical image super-resolution via high-resolution representation learning | |
CN115526779A (en) | Infrared image super-resolution reconstruction method based on dynamic attention mechanism | |
CN115797176A (en) | Image super-resolution reconstruction method | |
CN115170392A (en) | Single-image super-resolution algorithm based on attention mechanism | |
CN112070676B (en) | Picture super-resolution reconstruction method of double-channel multi-perception convolutional neural network | |
CN112150356A (en) | Single compressed image super-resolution reconstruction method based on cascade framework | |
Wu et al. | Dcanet: Dual convolutional neural network with attention for image blind denoising | |
CN116071239B (en) | CT image super-resolution method and device based on mixed attention model | |
Luo et al. | A fast denoising fusion network using internal and external priors |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |