CN112233038A - True image denoising method based on multi-scale fusion and edge enhancement - Google Patents


Info

Publication number
CN112233038A
Authority
CN
China
Prior art keywords
multiplied
image
convolution
input
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011149797.3A
Other languages
Chinese (zh)
Other versions
CN112233038B (en)
Inventor
门爱东
鞠国栋
沈良恒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qidi Yuanjing Shenzhen Technology Co ltd
Original Assignee
Guangdong Qidi Tuwei Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Qidi Tuwei Technology Co ltd filed Critical Guangdong Qidi Tuwei Technology Co ltd
Priority to CN202011149797.3A
Publication of CN112233038A
Application granted
Publication of CN112233038B
Legal status: Active
Anticipated expiration

Classifications

    • G06T5/70
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T7/13 Edge detection
    • G06T2207/20048 Transform domain processing
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20192 Edge enhancement; Edge preservation
    • G06T2207/20221 Image fusion; Image merging

Abstract

The invention discloses a real-image denoising method based on multi-scale fusion and edge enhancement, belonging to the technical field of computer vision. In the image input stage, a data enhancement scheme is designed to improve the generalization ability of the model: randomly selected pixels of the input noisy image are replaced with the content of the corresponding noise-free image. The input noisy image is then smoothed at multiple levels by three convolution kernels with different receptive-field sizes, producing three preliminary smoothing results at different scales. A channel attention mechanism adaptively expresses the multi-scale denoising results, which are then fused. Edges are extracted with a Laplacian operator to reintroduce the edge and texture information of the original noisy image, and the fused smooth image is detail-enhanced to improve the visual effect. The method is reasonably designed, maintains a fast running speed while achieving a good denoising effect, and performs well overall on real-image denoising.

Description

True image denoising method based on multi-scale fusion and edge enhancement
Technical Field
The invention relates to the technical field of computer vision images, in particular to a real image denoising method based on multi-scale fusion and edge enhancement.
Background
With the progress of science and technology, mobile devices have become increasingly widespread and images are ever easier to acquire. Because relatively low-cost sensors and lenses are used, images captured by mobile cameras such as mobile-phone cameras are often disturbed by noise; when light is insufficient, the noise is even more pronounced, which degrades image quality and causes difficulties for subsequent applications. Ensuring image quality is the basis for high-level visual applications on images, such as object detection and semantic segmentation. Therefore, how to denoise a real image efficiently and thereby improve its quality is an important research topic in the field of computer vision.
Real-image denoising addresses the problem of image noise removal from the software side by recovering the corresponding noise-free image from a noisy image observed in the real world. Removing noise from real images provides important technical support for computers to better observe, analyze and process pictures, and has very important application value in many fields such as high-definition television, medical imaging, satellite imaging and surveillance systems.
Traditional real-image denoising algorithms model real noise as a Gaussian distribution; common methods include the non-local block-matching algorithm (BM3D) and the sparse-coding algorithm (K-SVD). These methods can remove a certain amount of noise, but their inference stage involves complicated optimization steps whose time cost is high, hindering fast application; in addition, they expose too many tunable parameters, so the denoising effect cannot be guaranteed.
Convolutional neural networks are neural networks specifically designed to process data with a grid-like structure (an image, for example, can be viewed as a two-dimensional grid of pixels), and they have succeeded in many computer vision tasks (e.g., image classification and object detection). Many convolutional-neural-network solutions for real-image denoising have been developed: TNRD, which extends the traditional nonlinear reaction-diffusion model with several parameterized linear filters and influence functions; REDNet, a Gaussian denoising technique based on a fully convolutional encoder-decoder network with skip connections; DnCNN, a convolutional denoising network integrating residual learning and batch normalization; FFDNet, which takes a noise-estimation map together with the input to balance noise suppression and detail preservation; CBDNet, which builds on FFDNet by realizing the noise-level estimation with a sub-network, thereby achieving blind denoising with the whole network; and Path-Restore, which uses reinforcement learning to build a multi-path CNN with a path finder that dynamically selects a suitable path for each image region. However, these methods do not take into account the diversity and complexity of real noise content, ignore the different importance among feature channels, and fail to fully exploit multi-scale features, so their effect is limited.
Disclosure of Invention
The invention aims to solve the problems that existing image denoising methods fail to account for the content diversity and complexity of real noise, do not consider the different importance among feature channels, and fail to fully exploit multi-scale features, which limits their effect. It provides a real-image denoising method based on multi-scale fusion and edge enhancement that is reasonably designed, fully exploits multi-scale information to improve noise removal, and is relatively lightweight.
The purpose of the invention can be realized by the following technical scheme: a real image denoising method based on multi-scale fusion and edge enhancement comprises the following steps:
Step one: in the image input stage, randomly apply the data enhancement technique to transform the sample content;
Step two: input the original noisy picture into the network and convolve it at three scales simultaneously; using dilated convolution, three kernels perform preliminary smoothing with a constant parameter count, and three smoothed pictures are output;
Step three: concatenate the pictures output in step two with the original input and send them to the fusion stage; a skip-connection structure supplements information in time, and a feature map fusing the smoothing effects of different scales is output;
Step four: use a Laplacian operator to extract edges from the image initially input to the network, the smoothed pictures, and the feature map output in step three, and binarize the results with a set threshold to obtain a 5-channel edge image; concatenate the edge image with the features output in step three and send them to the enhancement module;
Step five: the feature map output by the enhancement module is mapped by a convolution to the output feature dimension and the final clear image is output; the number of output channels of the convolution kernel is consistent with the number of channels of the input original image.
Preferably, the specific implementation method of the data enhancement technology comprises the following steps:
S11: decide with probability 1/2 whether to perform data enhancement on the input image;
S12: when data enhancement is needed, randomly position 3 image blocks in the input image (overlap is allowed), with the width and height of each block chosen randomly in the ranges [0, W/4] and [0, H/4], where W and H are the width and height of the input image;
S13: replace each located image block with the content of the noise-free image block at the corresponding position used for supervision, i.e. make the network learn an identity mapping on those pixels.
Preferably, the sizes of the three convolution kernels in the second step are 3 × 3, 5 × 5 and 7 × 7 in sequence.
Preferably, the fusion stage in step three is composed of five attention modules, a feature map dynamic expression module, and interval down-sampling and up-sampling.
Preferably, the specific processing steps of the fusion stage are as follows:
S31: the network structure of the fusion stage is shaped like the letter V and comprises three layers; the left side progressively down-samples and acts as an encoder, while the right side correspondingly up-samples and acts as a decoder;
S32: each layer applies two 3 × 3 × 32 convolutions to further extract features, and an attention module recalibrates the channel importance of feature maps carrying information at different scales;
S33: at the end of each layer of the down-sampling stage, max pooling reduces the input feature size to 1/2, compressing and fusing spatial features, retaining texture content, enlarging the receptive field of the convolutional network, and extracting more semantic information;
S34: at the head of each layer in the up-sampling stage, a transposed convolution performs 2× up-sampling, and the output is concatenated with the same-resolution output of the corresponding down-sampling layer, supplementing spatial information from the first half in time while combining deep semantic features;
S35: the output of the V-shaped network passes through the feature-map dynamic expression module, which fuses the information of different scales and adaptively expresses each layer's feature map, outputting a smoothing result of exactly the same size as the noisy image input to the network.
Preferably, the specific working steps of the attention module are as follows:
S321: for the input feature map H × W × C, a 3 × 3 × C × 64 convolution further abstracts the features;
S322: after a ReLU activation, another 3 × 3 × 64 × 64 convolution is applied, and a channel attention mechanism then calibrates the importance of the different channels;
S323: the module input and the output of S322 are added pixel by pixel as the final output of the attention module.
Preferably, the specific working steps of the feature-map dynamic expression module in S35 are as follows:
S351: express the features with convolution layers of different kernel sizes to obtain U′, U″ and U‴, and add the results pixel by pixel to obtain the mixed feature Ũ = U′ + U″ + U‴;
S352: perform global pooling on Ũ to extract its global semantic information, apply a fully connected layer and a ReLU nonlinear transformation, and split the result into three parts to obtain three channel calibration coefficient vectors α, β and γ; then apply a softmax normalization over the three as a whole, i.e. weight the three vectors along each channel;
S353: multiply the three vectors α, β and γ with U′, U″ and U‴ respectively and add the results pixel by pixel; at this point each feature channel adaptively selects convolution kernels of different sizes for feature expression;
S354: obtain the recovered clean image through a single convolution layer of dimension 3 × 3 × 1.
Preferably, the channel attention mechanism described in S322 specifically includes:
a: perform global pooling on the input original feature U to extract its global semantic information, then apply a fully connected layer, a ReLU nonlinear transformation, another fully connected layer and a Sigmoid nonlinear transformation to obtain the channel calibration coefficient vector μ;
b: recalibrate the input feature U by multiplying it with the channel calibration coefficient vector μ.
Preferably, the specific working steps of the enhancement module in the fourth step are as follows:
S41: pass the input feature map H × W × 5 through three cascaded residual modules in sequence; each residual module comprises a 3 × 3 convolution, a ReLU activation and a second 3 × 3 convolution, and finally adds the result to the module input pixel by pixel; meanwhile, the whole enhancement module adopts a densely connected structure, i.e. each layer concatenates the inputs of all preceding layers and passes its output feature map to all subsequent layers;
S42: concatenate the input with the H × W × 5 output of the first residual module to obtain an H × W × 10 feature map, map it to H × W × 5 with a 1 × 1 convolution, and send it to the second residual module;
S43: concatenate the output of the second residual module with the output of the first residual module and the input feature map to obtain an H × W × 15 feature map, output H × W × 5 through a 1 × 1 convolution, and send it to the third residual module;
S44: perform feature mapping on the output with a 1 × 1 convolution layer to obtain and output the final denoised image with edge details.
Compared with the prior art, the invention has the beneficial effects that:
1. The whole denoising process is divided into two stages. The first stage obtains a smooth image after multi-scale denoising: the image is denoised at each scale, the feature maps of each scale are adaptively expressed and fused, global and local information are considered together, the blurring caused by loss of feature information is reduced, and the redundancy of image content is exploited to aid the overall denoising. The second stage introduces edge information as an aid, recovers edges and detail content, and improves the visual effect.
2. The method is reasonably designed: it considers the importance of different feature channels, denoises with multi-scale receptive fields that account for both global and local information, adaptively fuses the denoised multi-scale features, and enhances image details in a later stage, avoiding the common over-smoothing problem of denoising. The output of the network is a denoised clear image; the network is trained on pairs of input noisy images and noise-free clear images with the mean absolute loss as the objective, and the denoising effect is evaluated by comparing the output image with the noise-free clear image. The network is small, ensuring denoising quality while keeping the running speed of the algorithm in view.
Drawings
In order to facilitate understanding for those skilled in the art, the present invention will be further described with reference to the accompanying drawings.
FIG. 1 is a framework diagram of the multi-scale fusion and edge enhancement network of the present invention;
FIG. 2 is a framework diagram of the attention module of the present invention;
FIG. 3 is a framework diagram of a feature map dynamic representation module of the present invention;
fig. 4 is a schematic diagram of the data enhancement method of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the following embodiments, and it should be understood that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1-4, a method for denoising a real image based on multi-scale fusion and edge enhancement includes the following steps:
Step S1: in the image input stage, randomly apply the data enhancement technique to transform the sample content;
Step S2: input the original noisy picture into the network and convolve it at three scales simultaneously. Following the idea of dilated convolution, three convolution kernels are used while the parameter count is kept unchanged; their receptive-field sizes are 3 × 3, 5 × 5 and 7 × 7 in turn, each performing a preliminary smoothing, and three smoothed pictures are output;
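For illustration, the constant-parameter property of step S2 can be checked arithmetically: a single 3 × 3 kernel (9 weights) applied at dilation rates 1, 2 and 3 covers exactly the 3 × 3, 5 × 5 and 7 × 7 receptive fields named above. The patent does not state the dilation rates explicitly, so rates 1/2/3 are an assumed reading; this sketch only verifies the receptive-field formula.

```python
def effective_kernel(k: int, dilation: int) -> int:
    # Effective receptive field of a dilated convolution:
    # k_eff = k + (k - 1) * (dilation - 1)
    return k + (k - 1) * (dilation - 1)

# A 3 × 3 kernel (9 parameters in every branch) at dilation 1, 2, 3 covers
# the 3 × 3, 5 × 5 and 7 × 7 windows used for the three smoothing branches.
sizes = [effective_kernel(3, d) for d in (1, 2, 3)]
print(sizes)  # [3, 5, 7]
```

This is why the three smoothing branches can share the same parameter budget despite their different scales.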
Step S3: the outputs of step S2 are concatenated with the original input and sent to the fusion stage, which consists of five attention modules, a feature-map dynamic expression module, and interleaved down-sampling and up-sampling; a skip-connection structure supplements information in time, and finally a refined feature map fusing the smoothing effects of different scales is output. The above constitutes the denoising stage of the network.
Step S4: what follows is the detail-enhancement stage of the network. A Laplacian operator extracts edges from the image initially input to the network, the smoothed pictures, and the output of step S3; a threshold is set and the results are binarized, giving a 5-channel edge image. The edge image is concatenated with the output of step S3 and sent to the enhancement module;
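A minimal sketch of the edge-extraction operation in step S4, using one common 3 × 3 Laplacian kernel and an illustrative threshold — the patent fixes neither the kernel variant nor the threshold value:

```python
import numpy as np

# One common 3 × 3 Laplacian kernel (an assumption; the patent does not fix the variant).
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)

def laplacian_edges(img: np.ndarray, threshold: float = 0.1) -> np.ndarray:
    """Convolve a single-channel image with the Laplacian and binarize
    the absolute response with a threshold, as in step S4."""
    H, W = img.shape
    padded = np.pad(img, 1, mode="edge")      # replicate borders
    out = np.zeros((H, W), dtype=np.float64)
    for dy in range(3):                        # direct 3 × 3 correlation
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * padded[dy:dy + H, dx:dx + W]
    return (np.abs(out) > threshold).astype(np.uint8)
```

Applied to each of the five input channels (original image, three smoothed pictures, fused feature map), this yields the 5-channel binary edge image described above.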
Step S5: the feature map output by the enhancement module is mapped by a convolution to the output feature dimension and the final clear image is output; the number of output channels of the convolution kernel is consistent with the number of channels of the input original image.
The specific implementation method of the data enhancement technology of step S1 is as follows:
s1.1, determining whether to perform data enhancement on an input image according to the probability of 1/2;
Step S1.2: if data enhancement is needed, 3 image blocks are randomly positioned in the input image (whose width is W and height is H); overlap is allowed, and the width and height of each block are chosen randomly in the ranges [0, W/4] and [0, H/4];
Step S1.3: replace the content of each located image block with the content of the noise-free image block at the corresponding position used for supervision, i.e. make the network learn an identity mapping on those pixels, which has a regularizing effect on the network's learning. This restricts denoising to where the image actually needs it, avoids over-smoothed (over-denoised) results, and forces the network to learn not only how to denoise but also where to denoise.
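The augmentation of steps S1.1–S1.3 can be sketched as follows; function and parameter names are illustrative, not taken from the patent:

```python
import numpy as np

def augment_pair(noisy, clean, p=0.5, n_blocks=3, rng=None):
    """With probability p, replace n_blocks randomly placed blocks of the
    noisy input with the corresponding clean (noise-free) content, so the
    network must learn an identity mapping on those pixels (S1.1-S1.3)."""
    rng = rng if rng is not None else np.random.default_rng()
    noisy = noisy.copy()
    if rng.random() >= p:                      # S1.1: coin flip
        return noisy
    H, W = noisy.shape[:2]
    for _ in range(n_blocks):                  # S1.2: 3 blocks, overlap allowed
        bh = int(rng.integers(0, H // 4 + 1))  # height in [0, H/4]
        bw = int(rng.integers(0, W // 4 + 1))  # width  in [0, W/4]
        y = int(rng.integers(0, H - bh + 1))
        x = int(rng.integers(0, W - bw + 1))
        noisy[y:y + bh, x:x + bw] = clean[y:y + bh, x:x + bw]  # S1.3
    return noisy
```

During training the augmented noisy image is fed to the network while the clean image remains the supervision target.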
The specific implementation method of step S3 is as follows:
Step S3.1: the network structure of the fusion stage is shaped like the letter V and comprises three layers; the left side progressively down-samples and can be regarded as an encoder, while the right side correspondingly up-samples and can be regarded as a decoder;
Step S3.2: each layer applies two 3 × 3 × 32 convolutions to further extract features, and an attention module recalibrates the channel importance of feature maps carrying information at different scales;
Step S3.3: at the end of each layer of the down-sampling stage, max pooling reduces the input feature size to 1/2, compressing and fusing spatial features, retaining texture content, enlarging the receptive field of the convolutional network, and extracting more semantic information;
Step S3.4: at the beginning of each layer in the up-sampling stage, a transposed convolution performs 2× up-sampling, and the output is concatenated with the same-resolution output of the corresponding down-sampling layer, supplementing spatial information from the first half in time while combining deep semantic features;
Step S3.5: the output of the V-shaped network passes through the feature-map dynamic expression module, which fuses information of different scales and adaptively expresses each layer's feature map, outputting a refined smoothing result of exactly the same size as the noisy image input to the network.
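The resampling in steps S3.3 and S3.4 can be sketched in NumPy; nearest-neighbour upsampling stands in here for the learned transposed convolution, so this is only a shape-level illustration:

```python
import numpy as np

def max_pool2(x: np.ndarray) -> np.ndarray:
    """2 × 2 max pooling (S3.3): halves each spatial dimension while
    keeping the strongest response in every window."""
    H, W = x.shape
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

def upsample2(x: np.ndarray) -> np.ndarray:
    """2× upsampling (S3.4) -- nearest-neighbour here, standing in for
    the learned transposed convolution of the actual network."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

x = np.arange(16, dtype=float).reshape(4, 4)
assert max_pool2(x).shape == (2, 2)
# Restored resolution matches the encoder layer, enabling the skip concatenation.
assert upsample2(max_pool2(x)).shape == x.shape
```

The shape round-trip is what makes the encoder/decoder concatenation of step S3.4 possible.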
the specific implementation method of the attention module of step S3.2 is as follows:
Step S3.2.1: for the input feature map H × W × C, a 3 × 3 × C × 64 convolution further abstracts the features;
Step S3.2.2: after a ReLU activation, another 3 × 3 × 64 × 64 convolution is applied, and a channel attention mechanism then calibrates the importance of the different channels;
Step S3.2.3: the module input and the module output are added pixel by pixel as the final output of the attention module.
The specific implementation method of the channel attention mechanism in step S3.2.2 is as follows:
Step S3.2.2.1: perform global pooling on the input original feature U to extract its global semantic information, then apply a fully connected layer, a ReLU nonlinear transformation, another fully connected layer and a Sigmoid nonlinear transformation to obtain the channel calibration coefficient vector μ;
Step S3.2.2.2: recalibrate the input feature U by multiplying it with the channel calibration coefficient vector μ.
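A NumPy sketch of the channel attention mechanism of steps S3.2.2.1–S3.2.2.2; the fully connected weight shapes (and thus the reduction ratio) are assumptions, since the patent does not specify them:

```python
import numpy as np

def channel_attention(U, W1, b1, W2, b2):
    """Squeeze-and-excitation style channel attention (S3.2.2).
    U: feature map of shape (H, W, C); W1: (C, r), W2: (r, C) fully
    connected weights with an assumed reduction size r."""
    s = U.mean(axis=(0, 1))                    # global average pooling -> (C,)
    h = np.maximum(0.0, s @ W1 + b1)           # FC + ReLU
    mu = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # FC + Sigmoid -> calibration vector (C,)
    return U * mu                              # recalibrate each channel of U
```

With zero weights, μ = sigmoid(0) = 0.5 on every channel, so the output is the input uniformly halved — a quick sanity check of the broadcasting.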
The specific implementation method of the feature-map dynamic expression module in step S3.5 is as follows:
Step S3.5.1: express the features with convolution layers of different kernel sizes to obtain U′, U″ and U‴, and add the results pixel by pixel to obtain the mixed feature Ũ = U′ + U″ + U‴;
Step S3.5.2: perform global pooling on Ũ to extract its global semantic information, apply a fully connected layer and a ReLU nonlinear transformation, and split the result into three parts to obtain three channel calibration coefficient vectors α, β and γ; then apply a softmax normalization over the three as a whole, i.e. weight the three vectors along each channel;
Step S3.5.3: multiply the three vectors α, β and γ with U′, U″ and U‴ respectively and add the results pixel by pixel; at this point each feature channel adaptively selects convolution kernels of different sizes for feature expression;
Step S3.5.4: obtain the recovered clean image through a single convolution layer of dimension 3 × 3 × 1.
The specific implementation method of the enhancement module in step S4 is as follows:
Step S4.1: pass the input feature map H × W × 5 through three cascaded residual modules in sequence; each residual module comprises a 3 × 3 convolution, a ReLU activation and a second 3 × 3 convolution, and finally adds the result to the module input pixel by pixel. Meanwhile, the whole enhancement module adopts a densely connected structure, i.e. each layer concatenates the inputs of all preceding layers and passes its output feature map to all subsequent layers.
Step S4.2: concatenate the input with the H × W × 5 output of the first residual module to obtain an H × W × 10 feature map, map it to H × W × 5 with a 1 × 1 convolution, and send it to the second residual module;
Step S4.3: concatenate the output of the second residual module with the output of the first residual module and the input feature map to obtain an H × W × 15 feature map, output H × W × 5 through a 1 × 1 convolution, and send it to the third residual module;
Step S4.4: perform feature mapping on the output with a 1 × 1 convolution layer to obtain and output the final denoised image with edge details. The enhancement module reduces the impact of vanishing gradients, strengthens the propagation of detail features, and fuses edge-detail features with the denoised smooth image more effectively.
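The channel counts quoted in steps S4.2 and S4.3 follow directly from the dense connectivity; a trivial bookkeeping check (no weights involved):

```python
def enhancement_channels(c: int = 5):
    """Channel bookkeeping for the densely connected enhancement module
    (S4.1-S4.3): every residual block keeps c channels, each concatenation
    stacks the input with all previous block outputs, and a 1 x 1
    convolution maps back to c channels before the next block."""
    concat_after_block1 = c + c        # input ++ res1          -> H x W x 10
    concat_after_block2 = c + c + c    # input ++ res1 ++ res2  -> H x W x 15
    return concat_after_block1, concat_after_block2

print(enhancement_channels())  # (10, 15)
```

The 1 × 1 convolutions thus act as cheap channel compressors between the dense concatenations.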
The denoised clear image is obtained through the above steps.
Finally, we train the network with the mean absolute error loss (the L1 loss) as the objective, and evaluate network performance using PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index). Details are as follows:
and (3) testing environment: python 3.6; a TensorFlow frame; ubuntu16.04 system; NVIDIA GTX 1080ti GPU
Test dataset: the Darmstadt Noise Dataset (DND), used for real-image denoising, containing 50 pairs of ultra-high-resolution real noisy and noise-free images.
Test method: to ensure fairness, the target noise-free images of the dataset are not publicly released; participants submit their denoising results online, and the online system computes the scores uniformly to quantify the test effect.
Test metrics: the invention is evaluated with PSNR, SSIM, and single- and batch-image processing time. The same metrics are computed for currently popular algorithms and the results are compared, showing that the method achieves good results in the field of real-image denoising.
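The training objective and the main evaluation metric can be sketched as follows (SSIM is omitted, as its windowed computation is more involved):

```python
import numpy as np

def l1_loss(pred, target):
    """Mean absolute error -- the training objective named above."""
    return float(np.mean(np.abs(pred - target)))

def psnr(pred, target, max_val=255.0):
    """Peak signal-to-noise ratio in dB, used to score the denoised output."""
    mse = np.mean((pred - target) ** 2)
    if mse == 0:
        return float("inf")
    return float(10.0 * np.log10(max_val ** 2 / mse))

# A prediction off by exactly 1 at every pixel gives MSE = 1,
# so PSNR = 20 * log10(255) ~ 48.13 dB.
target = np.zeros((8, 8))
pred = target + 1.0
print(round(psnr(pred, target), 2))  # 48.13
```

Higher PSNR and lower L1 both indicate the output is closer to the noise-free reference.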
Matters not described in detail in this specification belong to the prior art known to those skilled in the art.
It should be emphasized that the embodiments described herein are illustrative rather than restrictive, and thus the present invention is not limited to the embodiments described in the detailed description, but also includes other embodiments that can be derived from the technical solutions of the present invention by those skilled in the art.
The preferred embodiments of the invention disclosed above are intended to be illustrative only. The preferred embodiments are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best utilize the invention. The invention is limited only by the claims and their full scope and equivalents.

Claims (9)

1. A real image denoising method based on multi-scale fusion and edge enhancement, characterized by comprising the following steps:
step one: in the image input stage, randomly applying a data enhancement technique to transform the sample content;
step two: inputting the original noisy image into the network and performing convolutions at three scales on it simultaneously; using dilated convolution, three convolution kernels with a constant parameter count perform preliminary smoothing and output three smoothed images;
step three: concatenating the images output in step two with the original input image and feeding them into the fusion stage, while a skip-connection structure supplements information in time; outputting a feature map that fuses the smoothing effects of the different scales;
step four: applying the Laplacian operator to extract edges from the image initially input to the network, the smoothed images, and the feature map output in step three, and binarizing the results with a threshold to obtain a 5-channel edge image; concatenating the edge image with the features output in step three and feeding them into the enhancement module;
step five: mapping the feature map output by the enhancement module to the output feature dimension through convolution and then outputting the final clear image, the number of output channels of the convolution kernel being consistent with the number of channels of the original input image.
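The edge extraction and binarization of step four can be sketched for a single channel. The 3×3 Laplacian stencil and the threshold value below are illustrative assumptions; the patent stacks five such maps into its 5-channel edge image.

```python
import numpy as np

# 3x3 Laplacian kernel (an illustrative choice; the patent does not fix the stencil)
LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=np.float64)

def laplacian_edges(img, thresh):
    """Binary edge map for one channel: |Laplacian response| > thresh -> 1, else 0."""
    h, w = img.shape
    padded = np.pad(img.astype(np.float64), 1, mode="edge")
    resp = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            resp[i, j] = np.sum(padded[i:i + 3, j:j + 3] * LAPLACIAN)
    return (np.abs(resp) > thresh).astype(np.uint8)

# A vertical intensity step fires on both sides of the edge; flat regions stay 0.
img = np.zeros((6, 6))
img[:, 3:] = 100.0
edges = laplacian_edges(img, thresh=50.0)
```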
2. The real image denoising method based on multi-scale fusion and edge enhancement as claimed in claim 1, wherein the data enhancement technique is implemented by the following steps:
S11: deciding with a probability of 1/2 whether to apply data enhancement to the input image;
S12: when data enhancement is applied, randomly locating 3 image blocks in the input image, the width and height of each block being randomly chosen in the ranges [0, 1/4×W] and [0, 1/4×H], where W and H are the width and height of the input image;
S13: replacing each located image block with the content of the corresponding position in the supervising noise-free image, i.e., making the network learn an identity mapping on these pixels.
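Steps S11–S13 can be sketched in NumPy. The function name, the seeded `random.Random`, and the single-channel test images are illustrative assumptions, not part of the claim.

```python
import random
import numpy as np

def augment(noisy, clean, rng):
    """With probability 1/2, paste 3 random blocks of the supervising clean image
    (each at most W/4 x H/4) into the noisy input, so the network learns an
    identity mapping on those pixels (claim 2, S11-S13)."""
    out = noisy.copy()
    if rng.random() < 0.5:                      # S11: enhance with probability 1/2
        h, w = noisy.shape[:2]
        for _ in range(3):                      # S12: locate 3 random blocks
            bw = rng.randint(1, w // 4)
            bh = rng.randint(1, h // 4)
            x = rng.randint(0, w - bw)
            y = rng.randint(0, h - bh)
            out[y:y + bh, x:x + bw] = clean[y:y + bh, x:x + bw]   # S13: replace
    return out

noisy = np.zeros((16, 16))
clean = np.ones((16, 16))
out = augment(noisy, clean, random.Random(0))
```

Whether or not the coin flip triggers, every pixel of the result comes from either the noisy input or the clean supervision, which the assertions below check without depending on the RNG draws.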
3. The real image denoising method based on multi-scale fusion and edge enhancement as claimed in claim 1, wherein the sizes of the three convolution kernels in step two are 3×3, 5×5 and 7×7 in sequence.
4. The real image denoising method based on multi-scale fusion and edge enhancement as claimed in claim 1, wherein the fusion stage in step three consists of five attention modules, a feature map dynamic expression module, and interleaved down-sampling and up-sampling.
5. The real image denoising method based on multi-scale fusion and edge enhancement as claimed in claim 4, wherein the specific processing steps of the fusion stage are:
S31: the network structure of the fusion stage is shaped like the letter V with three layers: the left side progressively down-samples and serves as the encoder, while the right side correspondingly up-samples and serves as the decoder;
S32: each layer performs two 3×3×32 convolutions to further extract features, and the attention module recalibrates the channel importance of feature maps carrying information at different scales;
S33: at the end of each layer in the down-sampling stage, max pooling reduces the input feature size to 1/2, compressing and fusing spatial features, preserving texture content, enlarging the receptive field of the convolutional network, and extracting more semantic information;
S34: at the head of each layer in the up-sampling stage, transposed convolution performs 2× up-sampling, and the output is concatenated with the same-resolution output of the corresponding down-sampling layer, supplementing the spatial information of the first half in time while combining deep semantic feature information;
S35: at the output of the V-shaped network, the feature map dynamic expression module fuses information at different scales and adaptively expresses each feature channel, outputting a smoothed result of exactly the same size as the noisy image input to the network.
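The resolution bookkeeping of S31–S34 can be checked with a toy NumPy sketch: max pooling halves each encoder level, and 2× upsampling (nearest-neighbour here, standing in for the transposed convolution) restores the resolution so the skip concatenation lines up. The convolutions and attention modules are omitted, and the channel counts are illustrative.

```python
import numpy as np

def max_pool2(x):
    """Halve the spatial size of an (H, W, C) map with 2x2 max pooling."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def upsample2(x):
    """2x nearest-neighbour upsampling (stand-in for the transposed convolution)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

# Three-level "V": the encoder halves the resolution twice, the decoder doubles
# it back and concatenates the same-resolution encoder output along the channel
# axis (the skip connection of S34).
x0 = np.random.rand(32, 32, 32)                    # layer 1 features
x1 = max_pool2(x0)                                 # 16 x 16 (layer 2)
x2 = max_pool2(x1)                                 # 8 x 8   (bottom of the V)
d1 = np.concatenate([upsample2(x2), x1], axis=2)   # 16 x 16, channels doubled
d0 = np.concatenate([upsample2(d1), x0], axis=2)   # 32 x 32, back at full size
```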
6. The real image denoising method based on multi-scale fusion and edge enhancement as claimed in claim 5, wherein the attention module works by the following specific steps:
S321: for the H×W×C feature map input to the module, a 3×3×C×64 convolution further abstracts the features;
S322: after a ReLU activation function, another 3×3×64 convolution is performed, and a channel attention mechanism then calibrates the importance of the different channels;
S323: the input of the module and the output of S322 are added pixel by pixel as the final output of the attention module.
7. The real image denoising method based on multi-scale fusion and edge enhancement as claimed in claim 5, wherein the feature map dynamic expression module in S35 works by the following specific steps:
S351: expressing the features with convolution layers of different kernel sizes to obtain U′, U″ and U‴, and adding the results pixel by pixel to obtain the mixed feature Ũ = U′ + U″ + U‴;
S352: performing global pooling on Ũ to extract global semantic information, applying a fully connected layer and a ReLU nonlinear transformation, dividing the result into three parts to obtain the three channel calibration coefficient vectors α, β and γ, and applying a softmax normalization over the whole, i.e., weighting the three vectors along each channel;
S353: multiplying the three vectors α, β and γ with U′, U″ and U‴ respectively and adding the results pixel by pixel, whereby each feature channel adaptively selects convolution kernels of different sizes for feature expression;
S354: obtaining the recovered clean image through a single convolution layer, the dimension of the convolution layer being 3×1.
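S351–S353 amount to a selective-kernel style weighting, sketched below in NumPy. The FC weight matrix `Wfc` (shape 3C×C) and the zero-initialized test values are illustrative assumptions; the convolutions producing U′, U″ and U‴, and the final output convolution of S354, are taken as given.

```python
import numpy as np

def softmax(z, axis=0):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dynamic_expression(branches, Wfc):
    """branches: three (H, W, C) maps U', U'', U''' from different kernel sizes.
    Wfc: (3*C, C) fully connected layer whose output splits into alpha, beta, gamma."""
    mixed = sum(branches)                 # S351: pixel-wise sum -> mixed feature
    s = mixed.mean(axis=(0, 1))           # S352: global average pooling -> (C,)
    z = np.maximum(Wfc @ s, 0.0)          # FC + ReLU -> (3C,)
    coeffs = z.reshape(3, -1)             # split into alpha, beta, gamma
    weights = softmax(coeffs, axis=0)     # softmax across the 3 branches
    # S353: each channel adaptively mixes the three kernel sizes
    return sum(w[None, None, :] * b for w, b in zip(weights, branches))

# With a zero FC the three weights are equal, so identical branches pass through.
U = [np.ones((2, 2, 4)) for _ in range(3)]
out = dynamic_expression(U, np.zeros((12, 4)))
```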
8. The real image denoising method based on multi-scale fusion and edge enhancement as claimed in claim 6, wherein the specific process of the channel attention mechanism in S322 is:
a: performing global pooling on the input original feature U to extract its global semantic information, then applying a fully connected layer, a ReLU nonlinear transformation, another fully connected layer and a Sigmoid nonlinear transformation to obtain the channel calibration coefficient vector μ;
b: recalibrating the input feature U by multiplying it with the channel calibration coefficient vector μ.
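Steps a–b are a squeeze-and-excitation style recalibration. In the NumPy sketch below, the matrices `W1` and `W2` stand in for the two fully connected layers; the reduction ratio and the zero-weight test values are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(U, W1, W2):
    """Recalibrate an (H, W, C) feature map channel-wise.
    W1: (C//r, C) and W2: (C, C//r) stand in for the two FC layers."""
    s = U.mean(axis=(0, 1))               # global average pooling -> (C,)
    z = np.maximum(W1 @ s, 0.0)           # FC + ReLU              -> (C//r,)
    mu = sigmoid(W2 @ z)                  # FC + Sigmoid           -> (C,)
    return U * mu[None, None, :]          # channel-wise recalibration

# Zero FC weights give mu = sigmoid(0) = 0.5, scaling every channel by half.
U = np.ones((2, 2, 4))
out = channel_attention(U, np.zeros((2, 4)), np.zeros((4, 2)))
```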
9. The real image denoising method based on multi-scale fusion and edge enhancement as claimed in claim 1, wherein the enhancement module in step four works by the following specific steps:
S41: passing the input H×W×5 feature map sequentially through three cascaded residual modules, each comprising a 3×3 convolution, a ReLU activation function and a second 3×3 convolution, with the result finally added to the module input pixel by pixel; meanwhile, the whole enhancement module adopts a dense convolutional network structure, i.e., each layer concatenates the inputs of all preceding layers and passes its output feature map to all subsequent layers;
S42: concatenating the input with the H×W×5 output of the first residual module to obtain an H×W×10 feature map, mapping it to H×W×5 through a 1×1 convolution, and feeding it into the second residual module;
S43: concatenating the output of the second residual module with the output of the first residual module and the input feature map to obtain an H×W×15 feature map, outputting H×W×5 through a 1×1 convolution, and feeding it into the third residual module;
S44: performing feature mapping on the output with a 1×1 convolution layer to obtain and output the final denoised image with edge details.
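The dense wiring of S41–S43 can be sketched in NumPy. To stay compact, 1×1 channel mixes stand in for the 3×3 convolutions of the residual modules, the weights in `params` are illustrative, and the final 1×1 output convolution of S44 is omitted.

```python
import numpy as np

def conv1x1(x, W):
    """(H, W, Cin) @ (Cin, Cout) channel mixing, i.e. a 1x1 convolution."""
    return x @ W

def res_block(x, W_a, W_b):
    """Residual unit: conv -> ReLU -> conv plus the identity skip.
    1x1 channel mixes stand in for the 3x3 convolutions of the claim."""
    return x + conv1x1(np.maximum(conv1x1(x, W_a), 0.0), W_b)

def enhance(x, params):
    """Dense connectivity: each 1x1 fusion sees the concatenation of the block
    input and every earlier residual output, then maps back to 5 channels."""
    (Wa1, Wb1), (Wa2, Wb2), (Wa3, Wb3), F1, F2 = params
    r1 = res_block(x, Wa1, Wb1)                                           # HxWx5
    r2 = res_block(conv1x1(np.concatenate([x, r1], 2), F1), Wa2, Wb2)     # 10 -> 5
    r3 = res_block(conv1x1(np.concatenate([x, r1, r2], 2), F2), Wa3, Wb3) # 15 -> 5
    return r3

# With zero residual weights and fusion matrices that pick the first 5 channels,
# the module reduces to an identity, which makes the wiring easy to verify.
I5 = np.eye(5)
params = [(np.zeros((5, 5)), np.zeros((5, 5)))] * 3 + [
    np.vstack([I5, np.zeros((5, 5))]),      # F1: 10 -> 5
    np.vstack([I5, np.zeros((10, 5))]),     # F2: 15 -> 5
]
x = np.arange(4 * 4 * 5, dtype=float).reshape(4, 4, 5)
y = enhance(x, params)
```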
CN202011149797.3A 2020-10-23 2020-10-23 True image denoising method based on multi-scale fusion and edge enhancement Active CN112233038B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011149797.3A CN112233038B (en) 2020-10-23 2020-10-23 True image denoising method based on multi-scale fusion and edge enhancement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011149797.3A CN112233038B (en) 2020-10-23 2020-10-23 True image denoising method based on multi-scale fusion and edge enhancement

Publications (2)

Publication Number Publication Date
CN112233038A true CN112233038A (en) 2021-01-15
CN112233038B CN112233038B (en) 2021-06-01

Family

ID=74110350

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011149797.3A Active CN112233038B (en) 2020-10-23 2020-10-23 True image denoising method based on multi-scale fusion and edge enhancement

Country Status (1)

Country Link
CN (1) CN112233038B (en)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112785569A (en) * 2021-01-19 2021-05-11 浙江工业大学 Panoramic film dental caries segmentation method based on edge guidance and multi-scale fusion
CN112800942A (en) * 2021-01-26 2021-05-14 泉州装备制造研究所 Pedestrian detection method based on self-calibration convolutional network
CN112819739A (en) * 2021-01-28 2021-05-18 浙江祺跃科技有限公司 Scanning electron microscope image processing method and system
CN112907750A (en) * 2021-03-05 2021-06-04 齐鲁工业大学 Indoor scene layout estimation method and system based on convolutional neural network
CN112990215A (en) * 2021-03-04 2021-06-18 腾讯科技(深圳)有限公司 Image denoising method, device, equipment and storage medium
CN113034413A (en) * 2021-03-22 2021-06-25 西安邮电大学 Low-illumination image enhancement method based on multi-scale fusion residual error codec
CN113052771A (en) * 2021-03-19 2021-06-29 北京百度网讯科技有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113066033A (en) * 2021-04-19 2021-07-02 智领高新科技发展(北京)有限公司 Multi-stage denoising system and method for color image
CN113284064A (en) * 2021-05-24 2021-08-20 西安理工大学 Cross-scale context low-illumination image enhancement method based on attention mechanism
CN113344939A (en) * 2021-05-07 2021-09-03 西安智诊智能科技有限公司 Image segmentation method based on detail preservation network
CN113436118A (en) * 2021-08-10 2021-09-24 安徽工程大学 Low-dose CT image restoration method based on multi-scale convolutional coding network
CN113487495A (en) * 2021-06-02 2021-10-08 湖北地信科技集团股份有限公司 Multi-scale high-resolution image anti-noise generation method based on deep learning
CN113487528A (en) * 2021-06-30 2021-10-08 展讯通信(上海)有限公司 Image processing method and device, computer readable storage medium and terminal
CN114419327A (en) * 2022-01-18 2022-04-29 北京百度网讯科技有限公司 Image detection method and training method and device of image detection model
CN114820395A (en) * 2022-06-30 2022-07-29 浙江工业大学 Underwater image enhancement method based on multi-field information fusion
CN115497006A (en) * 2022-09-19 2022-12-20 杭州电子科技大学 Urban remote sensing image change depth monitoring method and system based on dynamic hybrid strategy
US11540798B2 (en) 2019-08-30 2023-01-03 The Research Foundation For The State University Of New York Dilated convolutional neural network system and method for positron emission tomography (PET) image denoising
CN116468619A (en) * 2023-03-01 2023-07-21 山东省人工智能研究院 Medical image denoising method based on multi-feature feedback fusion
CN116664605A (en) * 2023-08-01 2023-08-29 昆明理工大学 Medical image tumor segmentation method based on diffusion model and multi-mode fusion
CN116757966A (en) * 2023-08-17 2023-09-15 中科方寸知微(南京)科技有限公司 Image enhancement method and system based on multi-level curvature supervision
CN116977651A (en) * 2023-08-28 2023-10-31 河北师范大学 Image denoising method based on double-branch and multi-scale feature extraction
WO2024007160A1 (en) * 2022-07-05 2024-01-11 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Convolutional neural network (cnn) filter for super-resolution with reference picture resampling (rpr) functionality
CN112785569B (en) * 2021-01-19 2024-04-19 浙江工业大学 Panoramic sheet decayed tooth segmentation method based on edge guidance and multi-scale fusion

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103049767A (en) * 2013-01-25 2013-04-17 西安电子科技大学 Aurora image classification method based on biological stimulation characteristic and manifold learning
CN106709875A (en) * 2016-12-30 2017-05-24 北京工业大学 Compressed low-resolution image restoration method based on combined deep network
CN109886871A (en) * 2019-01-07 2019-06-14 国家新闻出版广电总局广播科学研究院 The image super-resolution method merged based on channel attention mechanism and multilayer feature
CN110120047A (en) * 2019-04-04 2019-08-13 平安科技(深圳)有限公司 Image Segmentation Model training method, image partition method, device, equipment and medium
CN110390650A (en) * 2019-07-23 2019-10-29 中南大学 OCT image denoising method based on intensive connection and generation confrontation network
CN110517198A (en) * 2019-08-22 2019-11-29 太原科技大学 High frequency sensitivity GAN network for LDCT image denoising
US20200005122A1 (en) * 2018-06-27 2020-01-02 International Business Machines Corporation Multiscale feature representations for object recognition and detection
CN110648334A (en) * 2019-09-18 2020-01-03 中国人民解放军火箭军工程大学 Multi-feature cyclic convolution saliency target detection method based on attention mechanism
CN110728640A (en) * 2019-10-12 2020-01-24 合肥工业大学 Double-channel single-image fine rain removing method
CN110766632A (en) * 2019-10-22 2020-02-07 广东启迪图卫科技股份有限公司 Image denoising method based on channel attention mechanism and characteristic pyramid
CN111080541A (en) * 2019-12-06 2020-04-28 广东启迪图卫科技股份有限公司 Color image denoising method based on bit layering and attention fusion mechanism
US20200177898A1 (en) * 2018-10-19 2020-06-04 Samsung Electronics Co., Ltd. Methods and apparatuses for performing encoding and decoding on image
CN111242862A (en) * 2020-01-09 2020-06-05 西安理工大学 Multi-scale fusion parallel dense residual convolution neural network image denoising method
CN111292259A (en) * 2020-01-14 2020-06-16 西安交通大学 Deep learning image denoising method integrating multi-scale and attention mechanism
CN111429370A (en) * 2020-03-23 2020-07-17 煤炭科学技术研究院有限公司 Method and system for enhancing images in coal mine and computer storage medium
CN111582223A (en) * 2020-05-19 2020-08-25 华普通用技术研究(广州)有限公司 Three-dimensional face recognition method
CN111667447A (en) * 2020-06-05 2020-09-15 全景恒升(北京)科学技术有限公司 Intravascular image fusion method and system and image acquisition device

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103049767A (en) * 2013-01-25 2013-04-17 西安电子科技大学 Aurora image classification method based on biological stimulation characteristic and manifold learning
CN106709875A (en) * 2016-12-30 2017-05-24 北京工业大学 Compressed low-resolution image restoration method based on combined deep network
US20200005122A1 (en) * 2018-06-27 2020-01-02 International Business Machines Corporation Multiscale feature representations for object recognition and detection
US20200177898A1 (en) * 2018-10-19 2020-06-04 Samsung Electronics Co., Ltd. Methods and apparatuses for performing encoding and decoding on image
CN109886871A (en) * 2019-01-07 2019-06-14 国家新闻出版广电总局广播科学研究院 The image super-resolution method merged based on channel attention mechanism and multilayer feature
CN110120047A (en) * 2019-04-04 2019-08-13 平安科技(深圳)有限公司 Image Segmentation Model training method, image partition method, device, equipment and medium
CN110390650A (en) * 2019-07-23 2019-10-29 中南大学 OCT image denoising method based on intensive connection and generation confrontation network
CN110517198A (en) * 2019-08-22 2019-11-29 太原科技大学 High frequency sensitivity GAN network for LDCT image denoising
CN110648334A (en) * 2019-09-18 2020-01-03 中国人民解放军火箭军工程大学 Multi-feature cyclic convolution saliency target detection method based on attention mechanism
CN110728640A (en) * 2019-10-12 2020-01-24 合肥工业大学 Double-channel single-image fine rain removing method
CN110766632A (en) * 2019-10-22 2020-02-07 广东启迪图卫科技股份有限公司 Image denoising method based on channel attention mechanism and characteristic pyramid
CN111080541A (en) * 2019-12-06 2020-04-28 广东启迪图卫科技股份有限公司 Color image denoising method based on bit layering and attention fusion mechanism
CN111242862A (en) * 2020-01-09 2020-06-05 西安理工大学 Multi-scale fusion parallel dense residual convolution neural network image denoising method
CN111292259A (en) * 2020-01-14 2020-06-16 西安交通大学 Deep learning image denoising method integrating multi-scale and attention mechanism
CN111429370A (en) * 2020-03-23 2020-07-17 煤炭科学技术研究院有限公司 Method and system for enhancing images in coal mine and computer storage medium
CN111582223A (en) * 2020-05-19 2020-08-25 华普通用技术研究(广州)有限公司 Three-dimensional face recognition method
CN111667447A (en) * 2020-06-05 2020-09-15 全景恒升(北京)科学技术有限公司 Intravascular image fusion method and system and image acquisition device

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
DONGJIE LI ET AL: "A multiscale dilated residual network for image denoising", 《MULTIMEDIA TOOLS AND APPLICATIONS》 *
P BAO ET AL: "Noise Reduction for Magnetic Resonance Images via Adaptive Multiscale Products Thresholding", 《IEEE TRANSACTIONS ON MEDICAL IMAGING》 *
刘玉淑: "Research on Image Denoising and Fusion Algorithms Based on Multi-scale Transform", China Doctoral Dissertations Full-text Database, Information Science and Technology *
刘金华: "Improved Hybrid Dual-domain Image Denoising, and SAR Image Change Detection Based on Fused Difference Maps and Edge Classification", China Master's Theses Full-text Database, Information Science and Technology *
王文 et al.: "Multi-scale Image Denoising Method Based on Data Fusion", Geomatics and Information Science of Wuhan University *
赵刚: "Research on Edge Detection Algorithms for SAR Images Based on Multi-scale Statistical Analysis", China Master's Theses Full-text Database, Information Science and Technology *
高翔: "Applications of Multi-scale Directional Analysis in Image Denoising, Enhancement and Fusion", Wanfang *

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11540798B2 (en) 2019-08-30 2023-01-03 The Research Foundation For The State University Of New York Dilated convolutional neural network system and method for positron emission tomography (PET) image denoising
CN112785569A (en) * 2021-01-19 2021-05-11 浙江工业大学 Panoramic film dental caries segmentation method based on edge guidance and multi-scale fusion
CN112785569B (en) * 2021-01-19 2024-04-19 浙江工业大学 Panoramic sheet decayed tooth segmentation method based on edge guidance and multi-scale fusion
CN112800942A (en) * 2021-01-26 2021-05-14 泉州装备制造研究所 Pedestrian detection method based on self-calibration convolutional network
CN112800942B (en) * 2021-01-26 2024-02-13 泉州装备制造研究所 Pedestrian detection method based on self-calibration convolutional network
CN112819739B (en) * 2021-01-28 2024-03-01 浙江祺跃科技有限公司 Image processing method and system for scanning electron microscope
CN112819739A (en) * 2021-01-28 2021-05-18 浙江祺跃科技有限公司 Scanning electron microscope image processing method and system
CN112990215A (en) * 2021-03-04 2021-06-18 腾讯科技(深圳)有限公司 Image denoising method, device, equipment and storage medium
CN112990215B (en) * 2021-03-04 2023-12-12 腾讯科技(深圳)有限公司 Image denoising method, device, equipment and storage medium
CN112907750A (en) * 2021-03-05 2021-06-04 齐鲁工业大学 Indoor scene layout estimation method and system based on convolutional neural network
CN113052771A (en) * 2021-03-19 2021-06-29 北京百度网讯科技有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113052771B (en) * 2021-03-19 2023-09-05 北京百度网讯科技有限公司 Image processing method, device, electronic equipment and storage medium
CN113034413B (en) * 2021-03-22 2024-03-05 西安邮电大学 Low-illumination image enhancement method based on multi-scale fusion residual error coder-decoder
CN113034413A (en) * 2021-03-22 2021-06-25 西安邮电大学 Low-illumination image enhancement method based on multi-scale fusion residual error codec
CN113066033B (en) * 2021-04-19 2023-11-17 智领高新科技发展(北京)有限公司 Multi-stage denoising system and method for color image
CN113066033A (en) * 2021-04-19 2021-07-02 智领高新科技发展(北京)有限公司 Multi-stage denoising system and method for color image
CN113344939A (en) * 2021-05-07 2021-09-03 西安智诊智能科技有限公司 Image segmentation method based on detail preservation network
CN113284064B (en) * 2021-05-24 2023-04-07 西安理工大学 Cross-scale context low-illumination image enhancement method based on attention mechanism
CN113284064A (en) * 2021-05-24 2021-08-20 西安理工大学 Cross-scale context low-illumination image enhancement method based on attention mechanism
CN113487495B (en) * 2021-06-02 2022-04-29 湖北地信科技集团股份有限公司 Multi-scale high-resolution image anti-noise generation method based on deep learning
CN113487495A (en) * 2021-06-02 2021-10-08 湖北地信科技集团股份有限公司 Multi-scale high-resolution image anti-noise generation method based on deep learning
CN113487528B (en) * 2021-06-30 2022-11-29 展讯通信(上海)有限公司 Image processing method and device, computer readable storage medium and terminal
CN113487528A (en) * 2021-06-30 2021-10-08 展讯通信(上海)有限公司 Image processing method and device, computer readable storage medium and terminal
CN113436118A (en) * 2021-08-10 2021-09-24 安徽工程大学 Low-dose CT image restoration method based on multi-scale convolutional coding network
CN114419327A (en) * 2022-01-18 2022-04-29 北京百度网讯科技有限公司 Image detection method and training method and device of image detection model
CN114820395A (en) * 2022-06-30 2022-07-29 浙江工业大学 Underwater image enhancement method based on multi-field information fusion
WO2024007160A1 (en) * 2022-07-05 2024-01-11 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Convolutional neural network (cnn) filter for super-resolution with reference picture resampling (rpr) functionality
CN115497006B (en) * 2022-09-19 2023-08-01 杭州电子科技大学 Urban remote sensing image change depth monitoring method and system based on dynamic mixing strategy
CN115497006A (en) * 2022-09-19 2022-12-20 杭州电子科技大学 Urban remote sensing image change depth monitoring method and system based on dynamic hybrid strategy
CN116468619B (en) * 2023-03-01 2024-02-06 山东省人工智能研究院 Medical image denoising method based on multi-feature feedback fusion
CN116468619A (en) * 2023-03-01 2023-07-21 山东省人工智能研究院 Medical image denoising method based on multi-feature feedback fusion
CN116664605B (en) * 2023-08-01 2023-10-10 昆明理工大学 Medical image tumor segmentation method based on diffusion model and multi-mode fusion
CN116664605A (en) * 2023-08-01 2023-08-29 昆明理工大学 Medical image tumor segmentation method based on diffusion model and multi-mode fusion
CN116757966A (en) * 2023-08-17 2023-09-15 中科方寸知微(南京)科技有限公司 Image enhancement method and system based on multi-level curvature supervision
CN116977651A (en) * 2023-08-28 2023-10-31 河北师范大学 Image denoising method based on double-branch and multi-scale feature extraction
CN116977651B (en) * 2023-08-28 2024-02-23 河北师范大学 Image denoising method based on double-branch and multi-scale feature extraction

Also Published As

Publication number Publication date
CN112233038B (en) 2021-06-01

Similar Documents

Publication Publication Date Title
CN112233038B (en) True image denoising method based on multi-scale fusion and edge enhancement
Tian et al. Deep learning on image denoising: An overview
Lv et al. Attention guided low-light image enhancement with a large scale low-light simulation dataset
CN109360171B (en) Real-time deblurring method for video image based on neural network
CN112507997B (en) Face super-resolution system based on multi-scale convolution and receptive field feature fusion
CN110766632A (en) Image denoising method based on channel attention mechanism and characteristic pyramid
CN111915530B (en) End-to-end-based haze concentration self-adaptive neural network image defogging method
CN111275637A (en) Non-uniform motion blurred image self-adaptive restoration method based on attention model
CN111488865B (en) Image optimization method and device, computer storage medium and electronic equipment
CN111709895A (en) Image blind deblurring method and system based on attention mechanism
CN111028177A (en) Edge-based deep learning image motion blur removing method
CN111209952A (en) Underwater target detection method based on improved SSD and transfer learning
CN112541877B (en) Defuzzification method, system, equipment and medium for generating countermeasure network based on condition
CN112164011B (en) Motion image deblurring method based on self-adaptive residual error and recursive cross attention
CN112348747A (en) Image enhancement method, device and storage medium
CN112465727A (en) Low-illumination image enhancement method without normal illumination reference based on HSV color space and Retinex theory
CN113450290B (en) Low-illumination image enhancement method and system based on image inpainting technology
CN111127331B (en) Image denoising method based on pixel-level global noise estimation coding and decoding network
Wang et al. MAGAN: Unsupervised low-light image enhancement guided by mixed-attention
CN111861894A (en) Image motion blur removing method based on generating type countermeasure network
CN112614061A (en) Low-illumination image brightness enhancement and super-resolution method based on double-channel coder-decoder
US20230252605A1 (en) Method and system for a high-frequency attention network for efficient single image super-resolution
CN114627034A (en) Image enhancement method, training method of image enhancement model and related equipment
CN114723630A (en) Image deblurring method and system based on cavity double-residual multi-scale depth network
CN116797488A (en) Low-illumination image enhancement method based on feature fusion and attention embedding

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20240206

Address after: Building 1, Shuimu Yifang Building, No. 286 Nanguang Road, Dawangshan Community, Nantou Street, Nanshan District, Shenzhen City, Guangdong Province, 518000, 2207

Patentee after: Qidi Yuanjing (Shenzhen) Technology Co.,Ltd.

Country or region after: China

Address before: Unit 416, Tianan science and technology innovation building, Panyu energy saving science and Technology Park, 555 Panyu Avenue North, Panyu District, Guangzhou, Guangdong 511400

Patentee before: GUANGDONG QIDI TUWEI TECHNOLOGY CO.,LTD.

Country or region before: China