CN116563100A - Blind super-resolution reconstruction method based on kernel guided network

Blind super-resolution reconstruction method based on kernel guided network

Info

Publication number
CN116563100A
CN116563100A
Authority
CN
China
Prior art keywords
image
kernel
generator
resolution
discriminator
Prior art date
Legal status
Pending
Application number
CN202310353790.0A
Other languages
Chinese (zh)
Inventor
张艳宁
闫庆森
刘胜强
朱宇
孙瑾秋
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202310353790.0A
Publication of CN116563100A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4053 Super resolution, i.e. output image resolution higher than sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Abstract

The invention discloses a blind super-resolution reconstruction method based on a kernel-guided network. An image block is first input into a downscaling generator to obtain a downsampled image and a kernel; another image block and the downsampled image are then input into a discriminator simultaneously; the downsampled image and the kernel are next input into an upscaling generator simultaneously, finally yielding the super-resolution reconstructed image. The invention can capture direction information and improve the accuracy of the generated image, thereby generating a high-quality high-resolution image.

Description

Blind super-resolution reconstruction method based on kernel guided network
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a blind super-resolution reconstruction method.
Background
The goal of image super-resolution is to recover a sharp high-resolution image from a low-resolution image. Most existing methods are supervised: they use a large number of low-resolution/high-resolution image pairs to learn the mapping between low- and high-resolution images, and then apply this mapping to reconstruct the image to be super-resolved. However, such supervised methods require large numbers of image pairs, while real-world imaging processes are complex and large-scale, high-quality datasets are difficult to obtain. The document "Image super-resolution reconstruction with multi-scale dense feature fusion" (Optics and Precision Engineering, 2022, Vol. 30(20), pp. 2489-2500) discloses an image super-resolution reconstruction method based on a multi-scale dense feature fusion network. The method uses multi-scale feature-fusion residual modules containing convolution kernels of different scales to extract image features at different scales and fuse them, so as to obtain rich image features. A dense feature-fusion structure between the modules fully fuses the feature information extracted by different modules, better preserving high-frequency image details and yielding better visual quality. The method described in this document is, however, a supervised method requiring a large number of image pairs; constructing a large-scale, high-quality super-resolution dataset is difficult or even impossible, especially when real-world images must be captured, so the method is not suitable for super-resolving real-scene images. In addition, the method focuses on reducing the number of parameters and remains deficient in model structure and performance.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a blind super-resolution reconstruction method based on a kernel-guided network: an image block is first input into a downscaling generator to obtain a downsampled image and a kernel; another image block and the downsampled image are then input into a discriminator simultaneously; the downsampled image and the kernel are next input into an upscaling generator simultaneously, finally yielding the super-resolution reconstructed image. The invention can capture direction information and improve the accuracy of the generated image, thereby generating a high-quality high-resolution image.
The technical solution adopted by the invention to solve the technical problem comprises the following steps:
Step 1: inputting an image block x of image A into the downscaling generator G_DN to obtain a downsampled image x′ and a kernel K_s;
a downsampled version of the given image is obtained by the downscaling generator; the downsampling process for image block x is as follows:
x′, K_s = G_DN(x)    (1)
where x is the high-resolution image, x′ is the downsampled image, K_s is the kernel, and s is the scaling factor;
the downscaling generator comprises 6 hidden convolution layers, each with 64 channels; the convolution in formula (1) is first represented by 3 convolution layers, and subsampling is then represented by a pooling operation comprising 3 additional 1×1 filters, yielding the downsampled image;
Step 2: inputting the image block y of image A and the downsampled image x′ simultaneously into the discriminator D_DN; by judging whether an image block is real or fake, the discriminator drives the downsampled image x′ towards the original image;
the discriminator consists of convolution, spectral normalization, batch normalization, ReLU and Sigmoid activations; the first layer is a 7×7 convolution with a spectral normalization operation; the middle 5 layers are 1×1 convolutions, each followed by batch normalization and a ReLU activation function; the last layer uses three parallel convolutions followed by a Sigmoid activation function, the parallel convolutions having kernel sizes of 1×3 in the horizontal direction, 3×1 in the vertical direction and 3×3 in the diagonal direction, respectively; the output of the discriminator is the average map of the three parallel convolution outputs, belongs to the interval [0,1], and has the same size as the input;
a map of the same size as the discriminator input is designed as the discriminator label; this map is a true/false matrix, representing the 1/0 labels respectively;
Step 3: inputting the downsampled image x′ and the kernel K_s simultaneously into the upscaling generator G_UP to obtain an image x″;
the image x″ output by the upscaling generator is approximately equal to the original input image block x, as follows:
x″ = G_UP(G_DN(x)) ≈ x    (2)
where x″ is the super-resolution reconstructed image;
the upscaling generator comprises an image branch and a kernel branch; the image branch comprises 9 hidden 3×3 convolution layers and three feature transformation (FT) modules, and its last layer is a 3×3 convolution layer that generates the final image; the kernel branch comprises two hidden layers and an output layer of 64 nodes, and its output is fed into the FT modules; finally, the high-resolution image is obtained through a global residual connection between input and output;
the feature transformation (FT) module refines the image features F_I according to the kernel features F_K; in each FT module, two kernel-aware FC layers are used to adjust the features, and through scaling and shifting operations the FT module provides an affine transformation for the feature map F_I conditioned on the kernel features F_K:
FT(F_I, F_K) = F_I + γ⊙F_K + β    (3)
where γ and β represent the scaling and shifting parameters, respectively, and ⊙ denotes the Hadamard product;
the transformation parameters γ and β are obtained from the kernel features F_K through FC layers:
γ = FC_2(FC_1(F_K))    (4)
β = FC_4(FC_3(F_K))    (5)
where FC_i(·), i = 1, 2, 3, 4, are fully connected layers;
Step 4: realizing the reverse process: the image block y of image A and the kernel K_s are input into the upscaling generator and then into the downscaling generator to obtain the image y″, as follows:
y″ = G_DN(G_UP(y)) ≈ y    (6)
Step 5: training the downscaling generator, the discriminator and the upscaling generator;
a cycle consistency loss L_cycle is used to constrain the forward and backward cycle flows; a mask interpolation loss L_interp is used to remove unsatisfactory artifacts from the G_UP results; L_GAN is used to improve the accuracy with which the downscaling generator learns the degradation process;
the loss function used for training is:
L_total = L_GAN + λ_cycle·L_cycle + λ_interp·L_interp    (7)
L_GAN = E_x[D_DN(G_DN(x))²] + E_y[(D_DN(y) − 1)²] + R    (8)
where R is a regularization term on the degradation super-resolution kernel produced by the generator G_DN, and E_x and E_y denote expectations;
the cycle consistency loss is written as:
L_cycle = E_x[G_UP(G_DN(x)) − x] + E_y[G_DN(G_UP(y)) − y]    (9)
for the mask interpolation loss, the frequency mask f_mask is first calculated by applying the Sobel operator to the bicubic-upsampled image Bicubic(y):
f_mask = 1 − Sobel(Bicubic(y))    (10)
the mask interpolation loss is then defined as:
L_interp = E_y‖[G_UP(y) − Bicubic(y)] × f_mask‖₁    (11)
Preferably, the 3 filters used in step 1 to represent the convolution in formula (1) have sizes of 7×7, 5×5 and 3×3, respectively.
The beneficial effects of the invention are as follows:
in order to overcome the defects of the existing super-resolution method based on the supervised image, the invention provides a blind super-resolution reconstruction method based on a kernel guided network. The present invention trains two networks, an upscale network and a downscale network, using image blocks of a given low resolution image, requiring only the given image, without requiring a large number of high quality data sets. The downscaling network may be directly based on generating a degradation process specific to the anti-network learning image and obtaining a downsampled version of the given low resolution image. In order to better estimate the degradation of the input, the present invention uses a specific discriminator to force the network to focus on estimating the direction of a specific kernel of the image, and in order to focus on different features of the output, the present invention extracts three directional features, horizontal, vertical and diagonal. In other words, the discriminator in the method of the invention is more powerful, can capture direction information and improve the accuracy of the generated image. Furthermore, the present invention uses a feature transformation module to direct the upscaled network to generate high quality high resolution images based on the learned kernel and given downsampled versions of the low resolution.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is the image block x of image A according to an embodiment of the present invention.
Fig. 3 is the image block y of image A according to an embodiment of the present invention.
Detailed Description
The invention will be further described with reference to the drawings and examples.
In the following, a specific embodiment is described taking the image block x of an image A, shown in Fig. 2, as an example. The blind super-resolution reconstruction method based on the kernel-guided network in this embodiment comprises the following steps:
step one: inputting the image block x of the image A into a downscaling generator to obtain a downsampled image x' and a kernel K s
Scale-down generator (G) DN ) Tends to learn the underlying image-specific super-resolution kernel, which best preserves the distribution of image blocks on the low resolution image scale. The downsampled version of the given resolution image may be obtained by a downscaling generator, with the image block x downsampling process as follows:
x′,K s =G DN (x) (1)
where x is the high resolution image, x' is the downsampled image, K s Is the kernel and s is the scaling factor. Downsampling downscaling is a linear transformation, thus using a linear generator.
The downscaling generator consists of several inactive linear layers, consisting of 6 hidden convolutional layers, each with 64 channels. First, the convolution process in equation (1) is represented using 3 filters (i.e., 7×7, 5×5, 3×3). Using the first three filters, the receptive field of the network was 13X 13, corresponding to a super-resolution kernel of 13X 13. The pooling operation then represents subsampling, i.e. 3 1 x 1 filters are used to obtain a downsampled image. Once trained, an accurate downscaled generator, image specific kernel K s Implicit learning by the training weights of the downscaling generator and also depends on the scale factor required. In iterations during training, the downscaling generator performs a downscaling operation of the particular low resolution input image currently being estimated.
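The following PyTorch sketch illustrates one possible implementation of the linear downscaling generator under stated assumptions: single-channel, bias-free convolutions applied per colour channel, and stride-s subsampling standing in for the pooling step. The class and method names are illustrative, not taken from the patent.

```python
import torch
import torch.nn as nn

class DownscaleGenerator(nn.Module):
    """Sketch of G_DN: 6 hidden conv layers, 64 channels, no activations,
    so the whole network acts as a single linear (convolutional) operator."""
    def __init__(self, scale: int = 2, channels: int = 64):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            # Three spatial filters; combined receptive field 7+(5-1)+(3-1) = 13.
            nn.Conv2d(1, channels, 7, padding=3, bias=False),
            nn.Conv2d(channels, channels, 5, padding=2, bias=False),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            # Three 1x1 filters associated with the pooling/subsampling step.
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.Conv2d(channels, 1, 1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Stride-s subsampling realizes the pooling operation of step one.
        return self.body(x)[:, :, ::self.scale, ::self.scale]

    @torch.no_grad()
    def extract_kernel(self, size: int = 13) -> torch.Tensor:
        # Because the network is linear, the implicit kernel K_s can be read
        # off (up to a spatial flip) as the impulse response of the body.
        delta = torch.zeros(1, 1, size, size)
        delta[0, 0, size // 2, size // 2] = 1.0
        k = self.body(delta)
        return k / k.sum()  # normalise so the kernel sums to 1
```

A call such as `x_ds = g_dn(x)` then yields the downsampled image x′, while `g_dn.extract_kernel()` recovers an explicit estimate of K_s for the upscaling stage; both names are hypothetical.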
Step two: input the image block y of image A and the downsampled image x′ into the discriminator, which drives the downsampled image x′ towards the original image by judging the authenticity of image blocks; the image block y of image A is shown in Fig. 3.
The discriminator (D_DN) learns the unique internal distribution of image blocks of the input image and can thereby distinguish real image blocks from fake ones. A real image block is cropped from the input image and belongs to the distribution learned by the discriminator; a fake image block is cropped from the image produced by the downscaling generator. The invention uses a fully convolutional image-block discriminator to learn the block distribution of a single input. To distinguish small image blocks, no pooling is used anywhere in the network. The degradation kernel, however, is not always Gaussian-like but may have an arbitrary direction. To attend to different characteristics of the output, the invention extracts three directional features: horizontal, vertical and diagonal. In other words, the discriminator in the method of the invention is more powerful, can capture direction information, and improves the accuracy of the generated image.
The composition of the discriminator is as follows. The discriminator consists of convolution, spectral normalization, batch normalization, ReLU and Sigmoid activations. The first layer is a 7×7 convolution with a spectral normalization operation; the middle 5 layers are 1×1 convolutions, each followed by batch normalization and a ReLU activation function; the last layer uses three parallel convolutions followed by a Sigmoid activation function, the parallel convolutions having kernel sizes of 1×3 in the horizontal direction, 3×1 in the vertical direction and 3×3 in the diagonal direction, respectively. The output of the discriminator is the average map of the three parallel convolution outputs; this map belongs to the interval [0,1] and has the same size as the input. Furthermore, the invention designs a map of the same size as the discriminator input as the discriminator label; this map is a true/false matrix, representing the 1/0 labels respectively. Once this heat map (D-map) is obtained, each pixel in the D-map estimates the likelihood that the image block surrounding it was drawn from the learned block distribution. A sketch of this discriminator follows.
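Below is a hedged PyTorch sketch of such a directional patch discriminator. The padding choices and the decision to average the three directional maps before the Sigmoid are assumptions consistent with the description above.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class DirectionalDiscriminator(nn.Module):
    """Sketch of D_DN: fully convolutional, no pooling, three direction-aware
    output convolutions whose maps are averaged into a D-map in [0, 1]."""
    def __init__(self, channels: int = 64):
        super().__init__()
        # First layer: 7x7 convolution with spectral normalization.
        self.head = spectral_norm(nn.Conv2d(1, channels, 7, padding=3))
        # Middle 5 layers: 1x1 convolutions, each with BatchNorm + ReLU.
        mid = []
        for _ in range(5):
            mid += [nn.Conv2d(channels, channels, 1),
                    nn.BatchNorm2d(channels),
                    nn.ReLU(inplace=True)]
        self.mid = nn.Sequential(*mid)
        # Last layer: three parallel convolutions, one per direction.
        self.horizontal = nn.Conv2d(channels, 1, (1, 3), padding=(0, 1))
        self.vertical = nn.Conv2d(channels, 1, (3, 1), padding=(1, 0))
        self.diagonal = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.mid(self.head(x))
        d = (self.horizontal(f) + self.vertical(f) + self.diagonal(f)) / 3.0
        return torch.sigmoid(d)  # D-map, same spatial size as the input
```

Training labels would then be the all-ones map for blocks cropped from the input image and the all-zeros map for blocks produced by G_DN, matching the true/false label matrices described above.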
Step three: input the downsampled image x′ and the kernel K_s into the upscaling generator to obtain the image x″.
The downscaling generator learns a degradation super-resolution kernel K_s that represents the true degradation process; better high-resolution results are obtained when this learned kernel is used for reconstruction. Motivated by this, the invention designs an upscaling generator (G_UP) for reconstructing high-quality images. The invention uses the learned kernel K_s to guide the reconstruction process step by step; the image x″ output by the upscaling generator is approximately equal to the original input image block x, as follows:
x″ = G_UP(G_DN(x)) ≈ x    (2)
where x is an image block of the input image A, x″ is the super-resolution reconstructed image, G_DN is the downscaling generator, and G_UP is the upscaling generator.
The architecture of the upscaling generator comprises an image branch and a kernel branch. The image branch has 9 hidden 3×3 convolution layers and three feature transformation (FT) modules; its last layer is a 3×3 convolution layer that generates the final image. Because the degradation kernel K_s typically contains far less information than the input, its features can be captured well by a simpler network; hence only a fully connected network (FCN) is needed in the kernel branch. The kernel branch takes the flattened kernel as input and has two hidden layers and an output layer with 64 nodes. The output of the kernel branch is fed into the FT modules. Finally, the high-resolution image is obtained through a global residual connection between input and output.
The feature transformation (FT) module refines the image features F_I according to the kernel features F_K. In each FT module, two kernel-aware FC layers are used to adjust the features. Through scaling and shifting operations, the FT module provides an affine transformation for the feature map F_I conditioned on the kernel features F_K:
FT(F_I, F_K) = F_I + γ⊙F_K + β    (3)
where γ and β represent the scaling and shifting parameters and ⊙ denotes the Hadamard product. The transformation parameters γ and β are obtained from the kernel features F_K with small FC layers:
γ = FC_2(FC_1(F_K))    (4)
β = FC_4(FC_3(F_K))    (5)
where FC_i(·) is a fully connected layer. Suppose F_I has C feature maps of height H and width W; the sizes of γ and β are then both C×1×1. The kernel K_s estimated by the downscaling generator is accurate, so K_s is used directly to compute γ and β to refine F_I. A sketch of the FT module and the upscaling generator is given below.
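The following PyTorch sketch assembles the kernel branch, the FT module of formulas (3)-(5), and a possible image branch. The exact placement of the three FT modules among the nine convolutions, the use of ReLU in the image branch, and bicubic upsampling of the input before the convolutions are all assumptions; the text does not pin these down.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FTModule(nn.Module):
    """FT(F_I, F_K) = F_I + gamma * F_K + beta, per formulas (3)-(5)."""
    def __init__(self, channels: int = 64):
        super().__init__()
        # Two kernel-aware FC layers per parameter (FC1/FC2 and FC3/FC4).
        self.fc_gamma = nn.Sequential(nn.Linear(channels, channels),
                                      nn.Linear(channels, channels))
        self.fc_beta = nn.Sequential(nn.Linear(channels, channels),
                                     nn.Linear(channels, channels))

    def forward(self, f_img: torch.Tensor, f_k: torch.Tensor) -> torch.Tensor:
        b, c = f_img.shape[:2]
        gamma = self.fc_gamma(f_k).view(b, c, 1, 1)  # size C x 1 x 1
        beta = self.fc_beta(f_k).view(b, c, 1, 1)
        # The 64-d kernel feature is broadcast over the spatial dimensions.
        return f_img + gamma * f_k.view(b, c, 1, 1) + beta

class KernelBranch(nn.Module):
    """FCN: flattened kernel in, two hidden layers, 64-node output."""
    def __init__(self, kernel_size: int = 13, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(kernel_size * kernel_size, width), nn.ReLU(inplace=True),
            nn.Linear(width, width), nn.ReLU(inplace=True),
            nn.Linear(width, width))

    def forward(self, k: torch.Tensor) -> torch.Tensor:
        return self.net(k.flatten(1))

class UpscaleGenerator(nn.Module):
    """Image branch: 9 hidden 3x3 convs with an FT module after every third
    conv (assumed placement), a final 3x3 conv, and a global residual over
    the bicubic-upsampled input."""
    def __init__(self, scale: int = 2, channels: int = 64):
        super().__init__()
        self.scale = scale
        self.kernel_branch = KernelBranch()
        def conv_block(cin):
            return nn.Sequential(
                nn.Conv2d(cin, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.blocks = nn.ModuleList([conv_block(1), conv_block(channels),
                                     conv_block(channels)])
        self.fts = nn.ModuleList([FTModule(channels) for _ in range(3)])
        self.tail = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, x_lr: torch.Tensor, kernel: torch.Tensor) -> torch.Tensor:
        up = F.interpolate(x_lr, scale_factor=self.scale, mode='bicubic',
                           align_corners=False)
        f_k = self.kernel_branch(kernel)
        feat = up
        for block, ft in zip(self.blocks, self.fts):
            feat = ft(block(feat), f_k)  # refine F_I with kernel features F_K
        return self.tail(feat) + up      # global residual connection
```

With the earlier downscaling sketch, a call like `sr = g_up(x_ds, g_dn.extract_kernel())` would realise the kernel-guided reconstruction of formula (2); again, these object names are hypothetical.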
Step four: realize the reverse process: input the image block y of image A and the kernel K_s into the upscaling generator and then into the downscaling generator to obtain the image y″, as follows:
y″ = G_DN(G_UP(y)) ≈ y    (6)
step five: the downscaling generator, discriminator, and upscaling generator are trained with the following penalty functions.
The invention uses L cycle To restrict forward and backward circulation flows. L (L) interp Is a novel mask interpolation loss for removing G UP Unsatisfactory artifacts in the results. The invention uses L GAN To improve the accuracy of the learning degradation process downscaling generator. In general termsThe complete loss function is:
L total =L GANcycle L cycleinterp L interp (7)
since the performance of upscaled generators depends on the accuracy of the kernel, more attention is required to downscaled generators and discriminators. To estimate the super-resolution Kernel, the present invention introduces Kernel-GAN that learns image-specific degenerate kernels from an image by generating a countermeasure network (GAN).
L GAN =E x [D DN (G DN (x)) 2 ]+E y [(D DN (y)-1) 2 ]+R (8)
Wherein R is represented by generator G DN Regularization term on the resulting degenerate super-resolution kernel.
In the forward circulation stream, given one image block x, if an accurate downscaling generator and upscaling generator are obtained, the same image block, i.e. G UP (G DN (x) X). Similarly, in the reverse circulation flow, the image blocks input after the up-scale generator and the down-scale generator are also equal to themselves, i.e., G DN (G UP (x) X). Thus, the final loop consistency loss can be written as:
L cycle =E x [G UP (G DN (x))-x]+E y [G DN (G UP (y))-y] (9)
where the kernel K is calculated from the parameters of G_DN, which is randomly initialized according to the first step.
The mask interpolation loss is an emerging technique that ensures the reconstruction result contains no artifacts or ringing effects in a super-resolution network without supervisory information. Specifically, the purpose of this loss is to apply an interpolation cost only to the low-frequency part of the image. To this end, the frequency mask f_mask is first calculated by applying the Sobel operator to the bicubic-upsampled image Bicubic(y):
f_mask = 1 − Sobel(Bicubic(y))    (10)
The mask has a higher pixel value in the low frequency region of the image and a lower pixel value in the high frequency region. Thus, the mask interpolation penalty may be defined as:
L_interp = E_y‖[G_UP(y) − Bicubic(y)] × f_mask‖₁    (11)
since the up-sampling network also has no supervisory information, this loss is introduced to avoid artifacts and ringing problems.

Claims (2)

1. The blind super-resolution reconstruction method based on the kernel guided network is characterized by comprising the following steps of:
Step 1: inputting an image block x of image A into the downscaling generator G_DN to obtain a downsampled image x′ and a kernel K_s;
a downsampled version of the given image is obtained by the downscaling generator; the downsampling process for image block x is as follows:
x′, K_s = G_DN(x)    (1)
where x is the high-resolution image, x′ is the downsampled image, K_s is the kernel, and s is the scaling factor;
the downscaling generator comprises 6 hidden convolution layers, each with 64 channels; the convolution in formula (1) is first represented by 3 convolution layers, and subsampling is then represented by a pooling operation comprising 3 additional 1×1 filters, yielding the downsampled image;
Step 2: inputting the image block y of image A and the downsampled image x′ simultaneously into the discriminator D_DN; by judging whether an image block is real or fake, the discriminator drives the downsampled image x′ towards the original image;
the discriminator consists of convolution, spectral normalization, batch normalization, ReLU and Sigmoid activations; the first layer is a 7×7 convolution with a spectral normalization operation; the middle 5 layers are 1×1 convolutions, each followed by batch normalization and a ReLU activation function; the last layer uses three parallel convolutions followed by a Sigmoid activation function, the parallel convolutions having kernel sizes of 1×3 in the horizontal direction, 3×1 in the vertical direction and 3×3 in the diagonal direction, respectively; the output of the discriminator is the average map of the three parallel convolution outputs, belongs to the interval [0,1], and has the same size as the input;
a map of the same size as the discriminator input is designed as the discriminator label; this map is a true/false matrix, representing the 1/0 labels respectively;
Step 3: inputting the downsampled image x′ and the kernel K_s simultaneously into the upscaling generator G_UP to obtain an image x″;
the image x″ output by the upscaling generator is approximately equal to the original input image block x, as follows:
x″ = G_UP(G_DN(x)) ≈ x    (2)
where x″ is the super-resolution reconstructed image;
the upscaling generator comprises an image branch and a kernel branch; the image branch comprises 9 hidden 3×3 convolution layers and three feature transformation (FT) modules, and its last layer is a 3×3 convolution layer that generates the final image; the kernel branch comprises two hidden layers and an output layer of 64 nodes, and its output is fed into the FT modules; finally, the high-resolution image is obtained through a global residual connection between input and output;
the feature transformation (FT) module refines the image features F_I according to the kernel features F_K; in each FT module, two kernel-aware FC layers are used to adjust the features, and through scaling and shifting operations the FT module provides an affine transformation for the feature map F_I conditioned on the kernel features F_K:
FT(F_I, F_K) = F_I + γ⊙F_K + β    (3)
where γ and β represent the scaling and shifting parameters, respectively, and ⊙ denotes the Hadamard product;
the transformation parameters γ and β are obtained from the kernel features F_K through FC layers:
γ = FC_2(FC_1(F_K))    (4)
β = FC_4(FC_3(F_K))    (5)
where FC_i(·), i = 1, 2, 3, 4, are fully connected layers;
Step 4: realizing the reverse process: the image block y of image A and the kernel K_s are input into the upscaling generator and then into the downscaling generator to obtain the image y″, as follows:
y″ = G_DN(G_UP(y)) ≈ y    (6)
Step 5: training the downscaling generator, the discriminator and the upscaling generator;
a cycle consistency loss L_cycle is used to constrain the forward and backward cycle flows; a mask interpolation loss L_interp is used to remove unsatisfactory artifacts from the G_UP results; L_GAN is used to improve the accuracy with which the downscaling generator learns the degradation process;
the loss function used for training is:
L_total = L_GAN + λ_cycle·L_cycle + λ_interp·L_interp    (7)
L_GAN = E_x[D_DN(G_DN(x))²] + E_y[(D_DN(y) − 1)²] + R    (8)
where R is a regularization term on the degradation super-resolution kernel produced by the generator G_DN, and E_x and E_y denote expectations;
the cycle consistency loss is written as:
L_cycle = E_x[G_UP(G_DN(x)) − x] + E_y[G_DN(G_UP(y)) − y]    (9)
for the mask interpolation loss, the frequency mask f_mask is first calculated by applying the Sobel operator to the bicubic-upsampled image Bicubic(y):
f_mask = 1 − Sobel(Bicubic(y))    (10)
the mask interpolation loss is then defined as:
L_interp = E_y‖[G_UP(y) − Bicubic(y)] × f_mask‖₁    (11).
2. The blind super-resolution reconstruction method according to claim 1, wherein the 3 filters used in step 1 to represent the convolution in formula (1) have sizes of 7×7, 5×5 and 3×3, respectively.
CN202310353790.0A 2023-04-05 2023-04-05 Blind super-resolution reconstruction method based on kernel guided network Pending CN116563100A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310353790.0A CN116563100A (en) 2023-04-05 2023-04-05 Blind super-resolution reconstruction method based on kernel guided network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310353790.0A CN116563100A (en) 2023-04-05 2023-04-05 Blind super-resolution reconstruction method based on kernel guided network

Publications (1)

Publication Number Publication Date
CN116563100A true CN116563100A (en) 2023-08-08

Family

ID=87485152

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310353790.0A Pending CN116563100A (en) 2023-04-05 2023-04-05 Blind super-resolution reconstruction method based on kernel guided network

Country Status (1)

Country Link
CN (1) CN116563100A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117745725A (en) * 2024-02-20 2024-03-22 阿里巴巴达摩院(杭州)科技有限公司 Image processing method, image processing model training method, three-dimensional medical image processing method, computing device, and storage medium
CN117746171A (en) * 2024-02-20 2024-03-22 成都信息工程大学 Unsupervised weather downscaling method based on dual learning and auxiliary information
CN117746171B (en) * 2024-02-20 2024-04-23 成都信息工程大学 Unsupervised weather downscaling method based on dual learning and auxiliary information
CN117745725B (en) * 2024-02-20 2024-05-14 阿里巴巴达摩院(杭州)科技有限公司 Image processing method, image processing model training method, three-dimensional medical image processing method, computing device, and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination