CN117593188B - Super-resolution method based on unsupervised deep learning and corresponding equipment - Google Patents


Info

Publication number
CN117593188B
CN117593188B (application CN202410077396.3A)
Authority
CN
China
Prior art keywords: image, super, representing, loss function, resolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410077396.3A
Other languages: Chinese (zh)
Other versions: CN117593188A
Inventor
顾舒航
赵小锐
Current Assignee
Chengdu Yitu Zhixiang Information Technology Co ltd
Original Assignee
Chengdu Yitu Zhixiang Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Yitu Zhixiang Information Technology Co ltd filed Critical Chengdu Yitu Zhixiang Information Technology Co ltd
Priority to CN202410077396.3A
Publication of CN117593188A
Application granted
Publication of CN117593188B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4053Super resolution, i.e. output image resolution higher than sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4046Scaling the whole image or part thereof using neural networks

Abstract

A super-resolution method based on unsupervised deep learning, and corresponding equipment, are provided, comprising the following steps: constructing a dataset from unpaired real LR images and high-quality HR images; inputting the high-quality HR image into a degradation generation network to obtain a synthetic LR image; passing the high-quality HR image through a bicubic downsampling module to obtain a bicubic downsampling result; training the degradation generation network with a content loss function and a perceptual loss function between the synthetic LR image and the bicubic downsampling result, and an adversarial loss function between the synthetic LR image and the real LR image, to obtain a trained degradation generation network; and, taking the high-quality HR image, the real LR image and the synthetic LR image as inputs, training a super-resolution network with a training method combining domain difference perception and domain-distance-weighted supervision to obtain the super-resolution result. The method has the beneficial effect of generating super-resolution results with good realism and visual quality, and is suitable for the field of image super-resolution.

Description

Super-resolution method based on unsupervised deep learning and corresponding equipment
Technical Field
The invention relates to the field of image super-resolution, in particular to a super-resolution method based on unsupervised deep learning and corresponding equipment.
Background
Image super-resolution (SR) refers to the process of recovering a high-resolution (HR) image from a given low-resolution (LR) image. In the past few years, the field of image super-resolution has developed vigorously owing to its extremely high practical value in generating visually pleasing images and enhancing details.
Currently, many studies recover LR texture details more accurately by means of deep neural networks (DNNs). For example, Dong et al. proposed the SRCNN network, the first DNN-based super-resolution method, which uses a three-layer convolutional neural network (CNN) to capture the mapping function between LR and HR images. Building on this pioneering work, many subsequent works improved super-resolution performance by designing deeper networks and better training strategies, yielding powerful models. In addition, to enable super-resolution networks to recover more realistic details, Johnson et al. proposed a perceptual loss function, and Ledig et al. introduced an adversarial learning method, thereby improving the overall visual quality of the image.
Although the above methods have succeeded on benchmark datasets, the generalization capability of discriminatively trained super-resolution networks is poor, limiting their application in real-world scenarios. Specifically, due to device limitations and operations such as image processing or image compression at the storage stage, a real image inevitably undergoes a complex degradation process. When super-resolution networks trained on simulated datasets are applied to super-resolution reconstruction of real-world images, these methods often produce undesirable artifacts in the reconstruction results. Accordingly, recent research has focused on the problem of real-world super-resolution.
To provide data support for real-world super-resolution tasks, one approach collects paired LR and HR training data in real scenes by varying focal length and camera type; however, collecting sufficient data in this way is labor-intensive, and a super-resolution network trained on images taken under specific conditions still fails to produce satisfactory super-resolution results on pictures taken under other conditions. Another line of work, blind image super-resolution, describes the complex degradation process of real-world LR images with parameterized degradation models and studies how to adapt to the unknown degradation parameters at test time. This has been shown to increase the generalization ability of the model compared with models trained on predetermined clean synthetic data; however, because real images tend to be affected by complex degradation factors including sensor noise and compression artifacts, the strict assumptions that blind image super-resolution makes about the image degradation process severely limit its performance on real-world data.
In summary, existing super-resolution algorithms suffer from non-ideal image reconstruction and need to be improved.
Disclosure of Invention
In view of the above, it is necessary to provide a super-resolution method and corresponding device based on unsupervised deep learning, which can train a super-resolution model with unpaired training data to generate realistic and visually pleasing super-resolution results.
The invention provides a super-resolution method based on unsupervised deep learning, which comprises the following steps:
s10, constructing a data set based on unpaired real LR images and high quality HR images;
s20, inputting the high-quality HR image into a degradation generation network with a downsampling module to obtain a synthesized LR image; the high-quality HR image passes through a bicubic downsampling module to obtain a bicubic downsampling result;
s30, training the degradation generation network through a content loss function and a perception loss function between the synthesized LR image and the downsampling result and an antagonism loss function between the synthesized LR image and the real LR image to obtain a trained degradation generation network;
s40, taking the high-quality HR image, the real LR image and the synthesized LR image as input, and training the super-resolution network by adopting a training method combining domain difference perception and domain distance weighting supervision to obtain a super-resolution result.
Optionally, in the step S40, the super-resolution network is a domain distance adaptive super-resolution network;
the domain distance adaptive super-resolution network includes: the device comprises a convolutional neural network module, a domain distance self-adaptive adjustment RRDB module and an up-sampling module.
Optionally, in S40, training the super-resolution network with the training method combining domain difference perception and domain-distance-weighted supervision includes:
s401, establishing an fight loss function based on a real LR image based on domain difference perception, so that a trained super-resolution network generates a real super-resolution result with high visual quality;
s402, performing domain distance weighting supervision training by adopting an output domain distance graph of a discriminator in a degradation generation network, and adaptively adjusting the magnitudes of a content loss function and a perception loss function.
Optionally, the domain distance adaptive adjustment RRDB module includes:
generating a spatial attention map $s$ from the domain distance map $d$ to adjust the local mapping function of the domain distance adaptive adjustment RRDB module.
Optionally, in S30, the expression of the content loss function is:

$\mathcal{L}_{con} = \mathbb{E}_{y_j \in Y}\big[\, \| G(y_j) - y_j^{bic} \|_1 \,\big]$

the expression of the perceptual loss function is:

$\mathcal{L}_{per} = \mathbb{E}_{y_j \in Y}\big[\, \| \phi(G(y_j)) - \phi(y_j^{bic}) \|_1 \,\big]$

where $\mathcal{L}_{con}$ denotes the content loss function and $\mathcal{L}_{per}$ the perceptual loss function; $Y$ denotes the high-quality HR image set, $y_j$ the j-th high-quality HR image in that set, and $y_j^{bic}$ the bicubic downsampling result corresponding to the j-th high-quality HR image; $\mathbb{E}_{y_j \in Y}[\cdot]$ denotes averaging the loss over all pixels of every image in the high-quality HR image set; $\|\cdot\|_1$ is the $\ell_1$ norm, i.e. the sum of the absolute values of its argument; $G(\cdot)$ denotes the output of the degradation generation network; and $\phi(\cdot)$ denotes the feature map extracted by a pre-trained neural network.
Optionally, in S30, the expression of the adversarial loss function is:

$\mathcal{L}_{adv} = \mathbb{E}_{y_j \in Y}\big[\, -\log D\big( G(y_j) - P_{avg}(G(y_j)) \big) \,\big]$

where $\mathcal{L}_{adv}$ denotes the adversarial loss function; $D(\cdot)$ denotes the output of the discriminator network; and $P_{avg}(\cdot)$ denotes the average pooling operation.
Optionally, the adversarial loss functions based on the real LR image are expressed as:

$\mathcal{L}_{adv}^{SR} = \mathbb{E}_{x_i \in X}\big[\, -\log D_{SR}\big( SR(x_i) \big) \,\big], \qquad \mathcal{L}_{adv}^{feat} = \mathbb{E}_{x_i \in X}\big[\, -\log D_{feat}\big( \psi(x_i) \big) \,\big]$

where $\mathcal{L}_{adv}^{SR}$ denotes the adversarial loss function on the super-resolution result, discriminated against the high-quality HR images, and $\mathcal{L}_{adv}^{feat}$ the adversarial loss function on the intermediate feature maps of the synthetic and real LR images extracted from the super-resolution network; $X$ denotes the real LR image set and $x_i$ the i-th real LR image in that set; $\mathbb{E}_{x_i \in X}[\cdot]$ denotes averaging the loss over all pixels of every image in the real LR image set; $SR(\cdot)$ denotes the super-resolution result; and $\psi(\cdot)$ denotes the intermediate feature map extracted from the super-resolution network.
Optionally, in the adaptive adjustment of the magnitudes of the content loss function and the perceptual loss function, the adjusted content and perceptual loss functions are expressed as:

$\mathcal{L}_{con}^{dw} = \mathbb{E}_{\hat{y}_j \in \hat{Y}}\big[\, \| w \odot ( SR(\hat{y}_j) - y_j ) \|_1 \,\big], \qquad \mathcal{L}_{per}^{dw} = \mathbb{E}_{\hat{y}_j \in \hat{Y}}\big[\, \| w \odot ( \phi(SR(\hat{y}_j)) - \phi(y_j) ) \|_1 \,\big]$

where $\mathcal{L}_{con}^{dw}$ denotes the domain-distance-weighted content loss function and $\mathcal{L}_{per}^{dw}$ the domain-distance-weighted perceptual loss function; $\hat{Y}$ denotes the synthetic LR image set and $\hat{y}_j$ the j-th synthetic LR image; $\mathbb{E}_{\hat{y}_j \in \hat{Y}}[\cdot]$ denotes averaging the loss over all pixels of every image in the synthetic LR image set used in supervised training; $w$ denotes the domain distance weight obtained by resizing the domain distance map $d$; and $\odot$ denotes elementwise (dot) multiplication.
The invention also provides an electronic device, comprising:
a memory;
a processor; a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method as described above.
The present invention also provides a computer-readable storage device, characterized in that a computer program is stored thereon; the computer program is executed by a processor to implement the method as described above.
The technical solution provided by this application has the following advantages:
1. In this application, a high-quality HR image is input into a degradation generation network with a downsampling module to obtain a synthetic LR image. Through generative adversarial training, the degradation generation network can learn to make the synthetic LR image more similar to the real LR image, thereby reducing the domain gap between synthetic and real data. Meanwhile, to further reduce the negative influence of this domain gap on the performance of the super-resolution network, a training method combining domain difference perception and domain-distance-weighted supervision is adopted to train the super-resolution network. By introducing the concept of the domain gap into the training process, the super-resolution network's performance on real-world problems can be improved markedly, generating super-resolution results with better visual quality; the method is therefore highly practical.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the related art, the drawings that are required to be used in the embodiments or the description of the related art will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort to those of ordinary skill in the art.
Fig. 1 is a schematic flow chart of a super-resolution method based on unsupervised deep learning according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an algorithm implementation process of a super-resolution method based on unsupervised deep learning according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a degradation generation network in an embodiment of the present invention;
FIG. 4 is a schematic diagram of an algorithm implementation process for training a super-resolution network by using a training method combining domain difference perception and domain distance weighting supervision in an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an RRDB module for domain distance adaptation in an embodiment of the invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more clear, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention; all other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
Referring to fig. 1 and 2, the super-resolution method based on unsupervised deep learning provided in this embodiment includes:
s10, constructing a data set based on unpaired real LR images and high quality HR images;
s20, inputting the high-quality HR image into a degradation generation network with a downsampling module to obtain a synthesized LR image; the high-quality HR image passes through a bicubic downsampling module to obtain a bicubic downsampling result;
s30, training the degradation generation network through a content loss function and a perception loss function between the synthesized LR image and the downsampling result and an antagonism loss function between the synthesized LR image and the real LR image to obtain a trained degradation generation network;
s40, taking the high-quality HR image, the real LR image and the synthesized LR image as input, and training the super-resolution network by adopting a training method combining domain difference perception and domain distance weighting supervision to obtain a super-resolution result.
Note on unpaired datasets: the two data sets X and Y need not be paired element by element; it is only required that the data within X follow a consistent distribution and that the data within Y do likewise. That is, only the two sets as wholes correspond; no correspondence between an individual sample $x_i$ and a sample $y_j$ is required.
To achieve unsupervised training, in this embodiment a dataset of unpaired real LR images and high-quality HR images is collected and used for training. Pictures in the dataset can be taken with DSLR cameras in static real indoor or outdoor scenes, and by increasing the focal length, details of the scene are recorded naturally by the camera sensor.
It should be noted that, in addition to the field of view (FoV), adjusting the focal length may cause many other changes in the image processing process, such as the displacement of the optical center, the change of the scale factor, different exposure times, and lens distortion; in this embodiment, an efficient image registration algorithm may be employed to progressively align image pairs to enable end-to-end training of the SISR (single image super resolution) model.
In this embodiment, a high-quality HR image is input into a degradation generation network with a downsampling module to obtain a synthetic LR image. Through generative adversarial training, the degradation generation network learns to make the synthetic LR image more similar to the real LR image, reducing the domain gap between synthetic and real data. Meanwhile, to further reduce the negative influence of the domain gap on the performance of the super-resolution network, this embodiment trains the super-resolution network with a method combining domain difference perception and domain-distance-weighted supervision; introducing the concept of the domain gap into training markedly improves the super-resolution network's performance on real-world problems and produces super-resolution results with better visual quality, making the method highly practical.
Example two
Referring to fig. 3, in the super-resolution method based on unsupervised deep learning of the first embodiment, the input of the degradation generation network is a high-quality HR image $y$; the aim is to obtain a synthetic LR image $\hat{y}$ that has the same image content as $y$ and a degradation model similar to that of the real LR image $x$.
First, a convolutional layer converts the input from the image dimension to the feature dimension;
Second, a mapping network is learned and fitted using 23 residual blocks, where each residual block consists of two convolutional layers connected by a ReLU activation function;
Third, a bilinear downsampling operation is applied to reduce the spatial size of the data to that of the LR image;
Finally, two convolutional layers convert the feature representation back to the image dimension.
In training the degradation generation network, to ensure that the synthetic LR image $\hat{y}$ and the bicubic downsampling result $y^{bic}$ share the same content information, this embodiment constrains them with a content loss function $\mathcal{L}_{con}$ and a perceptual loss function $\mathcal{L}_{per}$ between $\hat{y}$ and $y^{bic}$.
In this embodiment, in S30, the content loss function $\mathcal{L}_{con}$ is expressed as:

$\mathcal{L}_{con} = \mathbb{E}_{y_j \in Y}\big[\, \| G(y_j) - y_j^{bic} \|_1 \,\big]$

and the perceptual loss function $\mathcal{L}_{per}$ as:

$\mathcal{L}_{per} = \mathbb{E}_{y_j \in Y}\big[\, \| \phi(G(y_j)) - \phi(y_j^{bic}) \|_1 \,\big]$

where $\mathcal{L}_{con}$ denotes the content loss function and $\mathcal{L}_{per}$ the perceptual loss function; $Y$ denotes the high-quality HR image set, $y_j$ the j-th high-quality HR image in that set, and $y_j^{bic}$ the bicubic downsampling result corresponding to the j-th high-quality HR image; $\mathbb{E}_{y_j \in Y}[\cdot]$ denotes averaging the loss over all pixels of every image in the high-quality HR image set; $\|\cdot\|_1$ is the $\ell_1$ norm, i.e. the sum of the absolute values of its argument; $G(\cdot)$ denotes the output of the degradation generation network; and $\phi(\cdot)$ denotes the feature map extracted by a pre-trained neural network.
Specifically, the feature map extracted with the pre-trained neural network may be the one extracted by the conv5_3 convolutional layer of the VGG-19 network.
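The two constraints can be sketched numerically as follows; `phi` is a placeholder for the pre-trained feature extractor (e.g. VGG-19 conv5_3), which is not reimplemented here:

```python
import numpy as np

def l1_mean(a, b):
    # pixel-averaged L1 distance, matching the loss average over all pixels
    return np.mean(np.abs(a - b))

def content_loss(gen_lr, bicubic_lr):
    # L_con: pixel-space L1 between the degradation-network output G(y)
    # and the bicubic downsampling result
    return l1_mean(gen_lr, bicubic_lr)

def perceptual_loss(gen_lr, bicubic_lr, phi):
    # L_per: L1 in the feature space of a pre-trained network phi
    # (phi is a placeholder for the VGG-19 conv5_3 extractor)
    return l1_mean(phi(gen_lr), phi(bicubic_lr))
```

The total degradation-network objective would then add the adversarial term described next to a weighted sum of these two losses.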
To ensure that the synthetic LR image $\hat{y}$ and the real LR image $x$ have similar degradation models, this application constrains them with an adversarial loss $\mathcal{L}_{adv}$ between $\hat{y}$ and $x$. In addition, to reduce the training difficulty of the generative adversarial network, only the high-frequency information relevant to the super-resolution task is input to the discriminator for training; in the high-frequency extraction process, an average pooling operation $P_{avg}$ can be used to obtain the low-frequency information from the original image, after which the required high-frequency input is obtained by subtracting the low-frequency information from the original image.
In this embodiment, in S30, the adversarial loss function is expressed as:

$\mathcal{L}_{adv} = \mathbb{E}_{y_j \in Y}\big[\, -\log D\big( G(y_j) - P_{avg}(G(y_j)) \big) \,\big]$

where $\mathcal{L}_{adv}$ denotes the adversarial loss function; $D(\cdot)$ denotes the output of the discriminator network; and $P_{avg}(\cdot)$ denotes the average pooling operation.
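The high-frequency extraction just described can be sketched as follows (a k×k average pool with nearest-neighbour upsampling is assumed; the patent does not specify the pooling configuration):

```python
import numpy as np

def low_freq(img, k=2):
    # low-frequency estimate P_avg: k x k average pooling, upsampled back
    # to the input size by nearest-neighbour repetition
    h, w = img.shape
    pooled = img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
    return np.repeat(np.repeat(pooled, k, axis=0), k, axis=1)

def high_freq(img, k=2):
    # discriminator input: the original image minus its low-frequency part
    return img - low_freq(img, k)
```

On a constant image the high-frequency residual is exactly zero, which is the intended behavior: only detail structure reaches the discriminator.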
In this embodiment, the adversarial loss functions based on the real LR image are expressed as:

$\mathcal{L}_{adv}^{SR} = \mathbb{E}_{x_i \in X}\big[\, -\log D_{SR}\big( SR(x_i) \big) \,\big], \qquad \mathcal{L}_{adv}^{feat} = \mathbb{E}_{x_i \in X}\big[\, -\log D_{feat}\big( \psi(x_i) \big) \,\big]$

where $\mathcal{L}_{adv}^{SR}$ denotes the adversarial loss function on the super-resolution result, discriminated against the high-quality HR images, and $\mathcal{L}_{adv}^{feat}$ the adversarial loss function on the intermediate feature maps of the synthetic and real LR images extracted from the super-resolution network; $X$ denotes the real LR image set and $x_i$ the i-th real LR image in that set; $\mathbb{E}_{x_i \in X}[\cdot]$ denotes averaging the loss over all pixels of every image in the real LR image set; $SR(\cdot)$ denotes the super-resolution result; and $\psi(\cdot)$ denotes the intermediate feature map extracted from the super-resolution network.
In this embodiment, by generating the countermeasure training method, the degradation generating network with the downsampling module can learn and promote the synthesized LR image to have a degradation model very similar to the real LR image, thereby reducing the domain gap existing between the synthesized data and the real data to some extent.
Example III
Referring to fig. 4 and fig. 5, in step S40, the super-resolution network is a domain distance adaptive super-resolution network, which includes: a convolutional neural network module, a domain distance adaptive adjustment RRDB module, and an upsampling module.
In this embodiment, the domain distance adaptive adjustment RRDB module (DA-RRDB module for short) includes: generating a spatial attention map $s$ from the domain distance map $d$ to adjust the local mapping function of the domain distance adaptive adjustment RRDB module.
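A minimal sketch of this modulation follows, with a stub in place of the module's learned transform (the actual DA-RRDB layers are not reproduced here):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(dmap, transform=lambda d: d):
    # derive a spatial attention map s in (0, 1) from the domain distance
    # map d; `transform` stands in for the module's learned layers
    return sigmoid(transform(dmap))

def modulate(features, attn):
    # adjust the RRDB branch output elementwise with the attention map,
    # making the local mapping function depend on the domain distance
    return features * attn
```

Because the attention values stay in (0, 1), regions estimated to be far from the target domain can be attenuated while near-domain regions pass through largely unchanged.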
In the training stage, the discriminator in the degradation generation network may be used to generate the domain distance map $d$ corresponding to the real LR image and the synthetic LR image; in the verification stage, the average domain distance of the real LR training samples can be used as the input to the domain distance adaptive super-resolution network.
It should be noted that the super-resolution network takes the high-quality HR image $y$, the real LR image $x$ and the synthetic LR image $\hat{y}$ as input; the target is to obtain a super-resolution result with realistic, clear image details and high visual quality.
To further reduce the negative influence of the domain gap between $\hat{y}$ and $x$ on the performance of the final super-resolution network, this embodiment adopts the ESRGAN super-resolution network as the basic framework and modifies it accordingly, proposing a training method based on the combination of domain difference perception and domain-distance-weighted supervision.
In this embodiment, in S40, training the super-resolution network with the training method combining domain difference perception and domain-distance-weighted supervision includes:
S401, establishing an adversarial loss function based on the real LR images via domain difference perception, so that the trained super-resolution network generates realistic super-resolution results with high visual quality;
S402, performing domain-distance-weighted supervised training with the domain distance map output by the discriminator in the degradation generation network, adaptively adjusting the magnitudes of the content loss function and the perceptual loss function.
It should be noted that, in contrast to previous methods that use only synthetic LR images and high quality HR images for direct supervised training, the present application utilizes data in both domains simultaneously, thereby enabling full utilization of available training data.
Specifically, in the source domain, since the synthetic LR images $\hat{y}_j$ and the high-quality HR images $y_j$ are paired in content, the network is trained in a supervised manner using the loss functions; for the target-domain data, since the real LR images $x_i$ have no corresponding high-quality high-resolution images to serve as labels, this embodiment uses the adversarial losses $\mathcal{L}_{adv}^{SR}$ and $\mathcal{L}_{adv}^{feat}$ to drive the super-resolution network to produce realistic super-resolution results with high visual quality:
in this embodiment, the expression of the contrast loss function based on the true LR image is:
wherein,an fight loss function representing super-resolution results based on high quality HR image and real LR image, < ->Representing an contrast loss function based on an intermediate feature map of the synthetic LR image and the real LR image extracted from the super-resolution network;
representing a high quality HR image set,/->Representing an i-th real LR image in the high quality HR image set;
calculating the loss average value of all pixel points of each image in the real LR image set;
representing a super-resolution result;
an intermediate feature map extracted from a super-resolution network is shown.
When computing $\mathcal{L}_{adv}^{SR}$ and $\mathcal{L}_{adv}^{feat}$, only the high-frequency components of the image are input to the discriminator network, so that the network ignores content information less relevant to the super-resolution task and focuses more on recovering and generating realistic high-frequency detail information.
In the degradation generation network, a discriminator is used to distinguish the synthetic LR image from the real LR image, and its output indicates the likelihood that the input comes from the target domain (real images); thus, a larger discriminator output means that the synthetic LR image is more likely to be identified as a real-world LR image, and hence the smaller its distance from the target domain. When the synthetic LR images $\hat{y}_j$ are used as source-domain data for supervised training, different training pairs should carry different importance according to their distance from the target domain. Therefore, this embodiment uses the domain distance map $d$ output by the discriminator in the degradation generation network to adaptively adjust the magnitudes of the supervised-training content loss and perceptual loss.
It should be noted that, in the adaptive adjustment of the magnitudes of the content loss function and the perceptual loss function, the adjusted content and perceptual loss functions are expressed as:

$\mathcal{L}_{con}^{dw} = \mathbb{E}_{\hat{y}_j \in \hat{Y}}\big[\, \| w \odot ( SR(\hat{y}_j) - y_j ) \|_1 \,\big], \qquad \mathcal{L}_{per}^{dw} = \mathbb{E}_{\hat{y}_j \in \hat{Y}}\big[\, \| w \odot ( \phi(SR(\hat{y}_j)) - \phi(y_j) ) \|_1 \,\big]$

where $\mathcal{L}_{con}^{dw}$ denotes the domain-distance-weighted content loss function and $\mathcal{L}_{per}^{dw}$ the domain-distance-weighted perceptual loss function; $\hat{Y}$ denotes the synthetic LR image set and $\hat{y}_j$ the j-th synthetic LR image; $\mathbb{E}_{\hat{y}_j \in \hat{Y}}[\cdot]$ denotes averaging the loss over all pixels of every image in the synthetic LR image set used in supervised training; $w$ denotes the domain distance weight obtained by resizing the domain distance map $d$; and $\odot$ denotes elementwise (dot) multiplication.
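The weighting scheme can be sketched as below; nearest-neighbour resizing of the domain distance map is an assumption, since the patent does not specify the interpolation used:

```python
import numpy as np

def resize_nearest(dmap, shape):
    # resize the discriminator's domain distance map d to the loss
    # resolution (nearest-neighbour; the actual interpolation is assumed)
    h, w = shape
    ys = np.arange(h) * dmap.shape[0] // h
    xs = np.arange(w) * dmap.shape[1] // w
    return dmap[np.ix_(ys, xs)]

def weighted_l1(pred, target, dmap):
    # domain-distance-weighted content loss: per-pixel L1 modulated
    # elementwise by the resized weight map w
    w = resize_nearest(dmap, pred.shape)
    return np.mean(w * np.abs(pred - target))
```

When the weight map is uniformly one the loss reduces to the plain L1 loss, and smaller weights downplay training pairs whose synthetic LR patches sit far from the real-image domain.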
Because most parameters of the super-resolution network are shared, this embodiment only needs to adjust a small number of parameters according to the domain information, keeping the computation simple; when training the complex network, the domain distance adaptive super-resolution network in this embodiment can benefit from the generated pseudo training pairs while also tolerating the difference between the synthetic and real LR images.
In this embodiment, the flexibility of the mapping function is adjusted according to the degradation model, so that the super-resolution network can better capture the mapping function between the real-world low-resolution image and the high-resolution image.
In addition, the embodiment of the application also provides electronic equipment, which comprises:
a memory; a processor; a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method as described above.
Furthermore, the embodiment of the application also provides a computer readable storage device, on which a computer program is stored; the computer program is executed by a processor to implement the method described above.
In this application, the method and the apparatus are based on the same inventive concept, and because the principles of solving the problems by the method and the apparatus are similar, implementation of the method and the apparatus may refer to each other, and repeated parts are not repeated.
The storage device may be a computer readable storage medium, and may include: a U-disk, a removable hard disk, a Read-only memory (ROM), a random access memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing examples illustrate only a few embodiments of the invention and are described in detail herein without thereby limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention. Accordingly, the scope of protection of the present invention is to be determined by the appended claims.

Claims (7)

1. A super-resolution method based on unsupervised deep learning, comprising:
s10, constructing a data set based on unpaired real LR images and high quality HR images;
s20, inputting the high-quality HR image into a degradation generation network with a downsampling module to obtain a synthesized LR image; the high-quality HR image passes through a bicubic downsampling module to obtain a bicubic downsampling result;
s30, training the degradation generation network through a content loss function and a perception loss function between the synthesized LR image and the bicubic downsampling result and an antagonism loss function between the synthesized LR image and the real LR image to obtain a trained degradation generation network;
s40, taking a high-quality HR image, a real LR image and a synthesized LR image as inputs, and training a super-resolution network by adopting a training method combining domain difference perception and domain distance weighting supervision to obtain a super-resolution result;
in the step S40, the super-resolution network is a domain distance adaptive super-resolution network;
the domain distance adaptive super-resolution network includes: a convolutional neural network module, a domain-distance-adaptive RRDB module, and an upsampling module;
the domain-distance-adaptive RRDB module generates a spatial attention map from the domain distance map, and uses it to adjust the local mapping function of the domain-distance-adaptive RRDB module;
in S40, the training method combining domain difference perception and domain distance weighted supervision trains the super-resolution network, which includes:
s401, establishing an fight loss function based on a real LR image based on domain difference perception, so that a trained super-resolution network generates a real super-resolution result with high visual quality;
s402, performing domain distance weighting supervision training by adopting an output domain distance graph of a discriminator in a degradation generation network, and adaptively adjusting the magnitudes of a content loss function and a perception loss function.
2. The method according to claim 1, wherein in S30, the expression of the content loss function is:

L_con = E_{y_j ∈ Y} ‖ G(y_j) − y_j^bic ‖₁

and the expression of the perceptual loss function is:

L_per = E_{y_j ∈ Y} ‖ φ(G(y_j)) − φ(y_j^bic) ‖₁

wherein L_con represents the content loss function, and L_per represents the perceptual loss function;
Y represents the high-quality HR image set; y_j represents the j-th high-quality HR image in the high-quality HR image set, and y_j^bic represents the bicubic downsampling result corresponding to the j-th high-quality HR image;
E_{y_j ∈ Y} represents averaging the loss over all pixel points of each image in the high-quality HR image set;
‖·‖₁ is the L1 norm, representing the sum of the absolute values of the elements of its argument;
G(·) represents the output of the degradation generation network;
φ(·) represents a feature map extracted by a pre-trained neural network.
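The two losses in claim 2 share the same L1 criterion, applied once in pixel space and once in a pre-trained network's feature space. A minimal numpy sketch (the function names are illustrative; the feature maps are passed in as plain arrays rather than produced by an actual pre-trained network):

```python
import numpy as np

def l1_content_loss(generated_lr, bicubic_lr):
    """L1 content loss: mean absolute difference over all pixels
    between the degradation network's output and the bicubic result."""
    return float(np.mean(np.abs(generated_lr - bicubic_lr)))

def perceptual_loss(feat_generated, feat_bicubic):
    """Perceptual loss: the same L1 criterion, but computed on feature
    maps (here assumed to come from a pre-trained network)."""
    return float(np.mean(np.abs(feat_generated - feat_bicubic)))
```

Identical inputs give a loss of zero; inputs differing by 1 everywhere give a loss of exactly 1.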
3. The method according to claim 2, wherein in S30, the expression of the adversarial loss function is:

L_adv = E_{x_i ∈ X} log P(D(x_i)) + E_{y_j ∈ Y} log(1 − P(D(G(y_j))))

wherein L_adv represents the adversarial loss function;
D(·) represents the output of the discriminator network; P(·) represents an average pooling operation.
4. The super-resolution method based on unsupervised deep learning according to claim 3, wherein the expressions of the adversarial loss functions based on the real LR image are:

L_adv^SR = E_{y_j ∈ Y} log P(D_SR(y_j)) + E_{x_i ∈ X} log(1 − P(D_SR(F(x_i))))

L_adv^feat = E_{x_i ∈ X} log P(D_feat(ψ(x_i))) + E_{x_j^g} log(1 − P(D_feat(ψ(x_j^g))))

wherein L_adv^SR represents the adversarial loss function of the super-resolution result, based on the high-quality HR image and the real LR image, and L_adv^feat represents the adversarial loss function based on the intermediate feature maps of the synthetic LR image and the real LR image extracted from the super-resolution network;
X represents the real LR image set, and x_i represents the i-th real LR image in the real LR image set;
E_{x_i ∈ X} represents averaging the loss over all pixel points of each image in the real LR image set;
F(·) represents the super-resolution result;
ψ(·) represents an intermediate feature map extracted from the super-resolution network.
5. The super-resolution method based on unsupervised deep learning according to claim 4, wherein, in adaptively adjusting the magnitudes of the content loss function and the perceptual loss function, the adjusted content loss function and perceptual loss function are expressed as:

L_con^w = E_{x_j^g ∈ X^g} ‖ w ⊙ (F(x_j^g) − y_j) ‖₁

L_per^w = E_{x_j^g ∈ X^g} ‖ w ⊙ (φ(F(x_j^g)) − φ(y_j)) ‖₁

wherein L_con^w represents the content loss function based on domain distance weighting, and L_per^w represents the perceptual loss function based on domain distance weighting;
X^g represents the set of synthetic LR images, and x_j^g represents the j-th synthetic LR image;
E_{x_j^g ∈ X^g} represents averaging the loss over all pixel points of each image in the synthetic LR image set used in the supervised training;
w represents the domain distance weight obtained by resizing the domain distance map, and ⊙ represents the element-wise (dot) product.
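The domain-distance weighting of claim 5 amounts to multiplying the per-pixel loss by a resized domain distance map before averaging. A minimal numpy sketch under the simplifying assumption that the weight map has already been resized to the image's spatial dimensions (the function name is illustrative):

```python
import numpy as np

def domain_weighted_l1(pred, target, weight_map):
    """Per-pixel L1 loss, rescaled element-wise by the domain distance
    weight map before averaging; pixels judged closer to the real-LR
    domain thus contribute more (or less) to the supervised loss."""
    w = weight_map[None, :, :]   # broadcast (H, W) weights over channels
    return float(np.mean(w * np.abs(pred - target)))
```

With a uniform weight of 0.5 this simply halves the plain L1 loss, which makes the adaptive behaviour easy to sanity-check.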
6. An electronic device, comprising:
a memory;
a processor; a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of any one of claims 1 to 5.
7. A computer readable storage device having a computer program stored thereon; the computer program being executed by a processor to implement the method of any one of claims 1 to 5.
CN202410077396.3A 2024-01-19 2024-01-19 Super-resolution method based on unsupervised deep learning and corresponding equipment Active CN117593188B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410077396.3A CN117593188B (en) 2024-01-19 2024-01-19 Super-resolution method based on unsupervised deep learning and corresponding equipment


Publications (2)

Publication Number Publication Date
CN117593188A CN117593188A (en) 2024-02-23
CN117593188B true CN117593188B (en) 2024-04-12

Family

ID=89918801

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410077396.3A Active CN117593188B (en) 2024-01-19 2024-01-19 Super-resolution method based on unsupervised deep learning and corresponding equipment

Country Status (1)

Country Link
CN (1) CN117593188B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109636727A (en) * 2018-12-17 2019-04-16 辽宁工程技术大学 A kind of super-resolution rebuilding image spatial resolution evaluation method
CN112330543A (en) * 2020-12-01 2021-02-05 上海网达软件股份有限公司 Video super-resolution method and system based on self-supervision learning
WO2021185225A1 (en) * 2020-03-16 2021-09-23 徐州工程学院 Image super-resolution reconstruction method employing adaptive adjustment
CN115131203A (en) * 2022-06-07 2022-09-30 西安电子科技大学 LR image generation method and real image super-resolution method based on uncertainty
CN117351216A (en) * 2023-12-05 2024-01-05 成都宜图智享信息科技有限公司 Image self-adaptive denoising method based on supervised deep learning


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Domain-distance adapted super-resolution reconstruction of low-field MR brain images; Shan Cong et al.; MedRxiv; 2023-07-01; 1-10 *
Research on real-world image super-resolution algorithms based on domain-distance-aware training; Wei Yunxuan; China Master's Theses Full-text Database; 2023-01-15 (No. 1); 32-39 *


Similar Documents

Publication Publication Date Title
CN111275637B (en) Attention model-based non-uniform motion blurred image self-adaptive restoration method
CN112507997B (en) Face super-resolution system based on multi-scale convolution and receptive field feature fusion
CN113284051B (en) Face super-resolution method based on frequency decomposition multi-attention machine system
CN112507617B (en) Training method of SRFlow super-resolution model and face recognition method
CN110189286B (en) Infrared and visible light image fusion method based on ResNet
Min et al. Blind deblurring via a novel recursive deep CNN improved by wavelet transform
CN112001843A (en) Infrared image super-resolution reconstruction method based on deep learning
CN115170410A (en) Image enhancement method and device integrating wavelet transformation and attention mechanism
CN115578262A (en) Polarization image super-resolution reconstruction method based on AFAN model
CN112163998A (en) Single-image super-resolution analysis method matched with natural degradation conditions
CN116934592A (en) Image stitching method, system, equipment and medium based on deep learning
CN115526777A (en) Blind over-separation network establishing method, blind over-separation method and storage medium
Shen et al. Deeper super-resolution generative adversarial network with gradient penalty for sonar image enhancement
Chen et al. Improving dynamic hdr imaging with fusion transformer
CN117593188B (en) Super-resolution method based on unsupervised deep learning and corresponding equipment
CN116188265A (en) Space variable kernel perception blind super-division reconstruction method based on real degradation
ZhiPing et al. A new generative adversarial network for texture preserving image denoising
Chen et al. A deep motion deblurring network using channel adaptive residual module
CN112907456B (en) Deep neural network image denoising method based on global smooth constraint prior model
Liu et al. LG-DBNet: Local and Global Dual-Branch Network for SAR Image Denoising
CN117078516B (en) Mine image super-resolution reconstruction method based on residual mixed attention
Teng et al. Multi-Scale Spatial Transformation and Attention Based Super-Resolution Reconstruction Algorithm
CN108416756B (en) Regional perception image denoising method based on machine learning
Jin et al. Generative Adversarial Network Based on Multi-feature Fusion Strategy for Motion Image Deblurring
CN117670650A (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant