CN116721015A - Noise-containing rapid image enhancement and super-resolution reconstruction method - Google Patents

Noise-containing rapid image enhancement and super-resolution reconstruction method

Info

Publication number
CN116721015A
CN116721015A (application CN202310456381.3A)
Authority
CN
China
Prior art keywords
image
noise
patch
clean
enhancement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310456381.3A
Other languages
Chinese (zh)
Inventor
李绰
王天鹤
周志远
张晨
赵安娜
潘建旋
李硕祉
张云昊
刘鑫
赵帅
姜洪妍
王才喜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Jinhang Institute of Technical Physics
Original Assignee
Tianjin Jinhang Institute of Technical Physics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Jinhang Institute of Technical Physics filed Critical Tianjin Jinhang Institute of Technical Physics
Priority to CN202310456381.3A priority Critical patent/CN116721015A/en
Publication of CN116721015A publication Critical patent/CN116721015A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4053Super resolution, i.e. output image resolution higher than sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4007Interpolation-based scaling, e.g. bilinear interpolation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4046Scaling the whole image or part thereof using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The application provides a noise-containing rapid image enhancement and super-resolution reconstruction method, which comprises the following steps: S1, processing an acquired image to obtain a noisy blurred image and a clean enhanced image that form a training data pair; S2, constructing and optimizing a training model from the training data pair; S3, taking the noisy image as the input of the optimized training model, obtaining the noise output by the model, and subtracting this output from the input to obtain the image after noise reduction and image enhancement; or taking the noisy image, after upsampling, as the input of the optimized training model, obtaining the noise output by the model, and subtracting this output from the input to obtain the image after noise reduction and image super-resolution reconstruction. The technical scheme of the application addresses the technical problems in prior-art image noise reduction, image enhancement and super-resolution reconstruction methods of partial detail loss, poor noise-reduction effect, complex models and poor generality across data.

Description

Noise-containing rapid image enhancement and super-resolution reconstruction method
Technical Field
The application relates to the technical field of image processing, in particular to a noise-containing rapid image enhancement and super-resolution reconstruction method.
Background
In the real world, image acquisition is often affected by various signal-dependent or signal-independent noise, and the resulting degradation of image quality hinders the reception of real image information by both people and computers. To address this problem, image denoising has been widely studied over the past decades as an important step in image perception.
Digital photography has made tremendous progress in recent years thanks to continued improvements in camera sensors and image signal processing pipelines. Nevertheless, because of factors such as scene conditions, insufficient illumination, or photographer skill, some images still suffer from low contrast, low brightness, and severe noise. In response to these problems, researchers have proposed many conventional image enhancement and super-resolution reconstruction methods over the past decades. More recently, with the development of deep learning, many supervised and unsupervised image enhancement and super-resolution reconstruction methods have been proposed and have achieved good results. Although most of these methods can significantly improve image contrast and brightness or restore natural, realistic texture in high-resolution images, they have difficulty directly reducing or suppressing noise, and may even amplify it.
The purpose of image denoising is to recover a clean image from a noisy one. Current image denoising methods are dominated by deep learning: a large number of noisy/clean image pairs are used as training data, and denoising is performed by learning image priors and the noise distribution from these data with a convolutional neural network (CNN) model. Although most existing methods show excellent denoising performance, apart from requiring extensive parameter tuning and complex models, they tend to blur the image and lose detail under heavy noise, such as in low-light conditions.
Existing image enhancement methods can be broadly divided into two categories: non-learning methods and data-driven methods. Traditional methods mainly include histogram-based, Retinex-based and dehazing-based approaches. Data-driven methods exploit large-scale synthetic datasets and achieve good improvements in performance and speed. Although traditional methods are relatively easy to apply, the enhanced images often show artifacts such as color distortion or over-enhancement, and when the light is very weak or the noise is very strong, many deep learning methods perform poorly or tend to amplify noise.
For the super-resolution reconstruction task, methods ranging from convolutional neural networks to recent promising SR methods based on generative adversarial networks (GAN), such as SRGAN, have been applied. As the reconstruction quality keeps improving, the complexity of deep-learning-based models keeps growing, and when the performance requirements of a practical application are strict, the available hardware may not be sufficient. Moreover, because deep-learning-based image processing strongly depends on data, a single public dataset may not suffice when performing super-resolution reconstruction on the lens-acquired data of a specific task.
Disclosure of Invention
The present application aims to solve at least one of the technical problems existing in the prior art.
The application provides a noise-containing rapid image enhancement and super-resolution reconstruction method, which comprises the following steps: S1, processing an acquired image to obtain a noisy blurred image and a clean enhanced image that form a training data pair; S2, constructing and optimizing a training model from the training data pair; S3, taking the noisy image as the input of the optimized training model, obtaining the noise output by the model, and subtracting this output from the input to obtain the image after noise reduction and image enhancement; or taking the noisy image, after upsampling, as the input of the optimized training model, obtaining the noise output by the model, and subtracting this output from the input to obtain the image after noise reduction and image super-resolution reconstruction;
wherein S1 specifically includes: S11, screening the acquired images; S12, cropping the screened image to obtain a preliminary noise image noise_patch; S13, denoising the noise image noise_patch obtained in S12 to obtain a clean image gt_patch; S14, enhancing the clean image gt_patch to obtain a clean enhanced image gt_patch_enhancement; S15, blurring the clean image gt_patch obtained in S13 to obtain a blurred image gt_patch_blur; S16, superimposing noise on the blurred image gt_patch_blur obtained in S15 to obtain a blurred noise image noise_patch_new; S17, combining the blurred noise image noise_patch_new obtained in S16 with the clean enhanced image gt_patch_enhancement obtained in S14 into a training data pair for model training.
Further, in S11, the variance function is used as an evaluation function to preliminarily screen the acquired noisy images: images whose sharpness exceeds a predetermined value are retained, blurred images that do not meet the requirement are removed, and the noise image noise is preliminarily obtained.
Further, in S13, the noise image noise_patch obtained in S12 is denoised with a BM3D algorithm or an NLM (non-local means) algorithm to obtain a basic clean image gt_patch.
Further, in S14, Laplacian image enhancement is applied to the clean image gt_patch to obtain the enhanced clean image gt_patch_enhancement, which serves as the clean image for the final training.
Further, in S15, the clean image gt_patch is first blurred with a Gaussian blur; the blurred result is then downsampled and upsampled back to the original size with bilinear interpolation, finally yielding the blurred image gt_patch_blur.
Further, in S16, the clean image gt_patch obtained in S13 is first subtracted from the noise image noise_patch obtained in S12 to obtain the noise, and the noise is then superimposed on the blurred image gt_patch_blur obtained in S15 to obtain a new blurred noise image noise_patch_new.
Further, in S17, the blurred noise image noise_patch_new obtained in S16 and the clean enhanced image gt_patch_enhancement are combined into a training data pair.
Further, building the training model specifically includes: S21, setting the training model structure; S22, pruning the training model; S23, setting the input and output of the training model according to the training data pair of S17, and optimizing the training model parameters.
Further, in S21, the training model is set to a five-layer UNet structure; in S22, the BN layers in the training model are pruned.
Further, in S23, the blurred noise image noise_patch_new in the training data pair of S17 is taken as the model input, the final output of the model is taken as the noise, and the difference between the input blurred noise image noise_patch_new and the output noise is taken as the expected clean image denoised; the clean enhanced image gt_patch_enhancement in the training data pair of S17 is compared with the expected clean image, and SSIM is used as the final loss function to optimize the training model parameter settings.
By applying the technical scheme of the application: the acquired image is processed to obtain a noisy blurred image and a clean enhanced image that form a training data pair; a training model is constructed and optimized from the training data pair; and the noisy image, either directly or after upsampling, is fed into the optimized training model to obtain the noise output by the model, which is subtracted from the input to complete noise reduction with image enhancement or noise reduction with super-resolution reconstruction. The processing steps for building the training data pairs are general and can be used with different image acquisition and processing setups; the training model achieves noise reduction with image enhancement or noise reduction with super-resolution reconstruction while remaining lightweight. Compared with the prior art, the technical scheme of the application solves the technical problems in prior-art image noise reduction, image enhancement and super-resolution reconstruction methods of partial detail loss, poor noise-reduction effect, complex models and poor generality across data.
Drawings
The accompanying drawings, which are included to provide a further understanding of embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. It is evident that the drawings in the following description are only some embodiments of the present application and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
FIG. 1 is a flow chart of a noise-containing rapid image enhancement and super-resolution reconstruction method according to an embodiment of the present application;
FIG. 2 shows the UNet structure of the training model according to an embodiment of the present application;
FIG. 3 shows a comparison of images before and after noise reduction and image enhancement according to a specific embodiment of the present application;
FIG. 4 shows a comparison of images before and after noise reduction and image super-resolution reconstruction according to an embodiment of the present application.
Detailed Description
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. The following description of at least one exemplary embodiment is merely exemplary in nature and is in no way intended to limit the application, its application, or uses. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the present application. As used herein, the singular is also intended to include the plural unless the context clearly indicates otherwise, and furthermore, it is to be understood that the terms "comprises" and/or "comprising" when used in this specification are taken to specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
The relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present application unless it is specifically stated otherwise. Meanwhile, it should be understood that the sizes of the respective parts shown in the drawings are not drawn in actual scale for convenience of description. Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate. In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of the exemplary embodiments may have different values. It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
As shown in fig. 1, according to an embodiment of the present application, there is provided a method for noise-containing fast image enhancement and super-resolution reconstruction, the method comprising:
s1, processing an acquired image to acquire a blurred image with noise and a clean enhanced image to form a training data pair;
s2, constructing and optimizing a training model according to the training data pair;
s3, taking the noisy image as the input of the optimized training model, obtaining the noise output by the model, and subtracting this output from the input to obtain the image after noise reduction and image enhancement; or taking the noisy image, after upsampling, as the input of the optimized training model, obtaining the noise output by the model, and subtracting this output from the input to obtain the image after noise reduction and image super-resolution reconstruction;
wherein, S1 specifically includes:
s11, screening the acquired images;
s12, clipping the screened image to obtain a preliminary noise image noise_patch;
s13, performing noise reduction processing on the noise image noise_patch obtained in the S12 to obtain a clean image gt_patch;
s14, performing image enhancement on the clean image gt_patch to obtain a clean enhanced image gt_patch_enhancement;
s15, performing image blurring processing on the clean image gt_patch obtained in S13 to obtain a blurred image gt_patch_blur;
s16, superimposing noise on the blurred image gt_patch_blur obtained in S15 to obtain a blurred noise image noise_patch_new;
s17, combining the blurred noise image noise_patch_new obtained in S16 with the clean enhanced image gt_patch_enhancement obtained in S14 into a training data pair for model training.
With this configuration, the images are processed to obtain a noisy blurred image and a clean enhanced image that form a training data pair; a training model is constructed and optimized from the training data pair; and the noisy image, either directly or after upsampling, is fed into the optimized training model to obtain the noise output by the model, which is subtracted from the input to complete noise reduction with image enhancement or noise reduction with image super-resolution reconstruction. The processing steps for building the training data pairs are general and can be used with different image acquisition and processing setups; the training model achieves noise reduction with image enhancement or noise reduction with super-resolution reconstruction while remaining lightweight.
Further, in the present application, S1 is executed first, and the acquired image is processed to obtain a blurred image with noise and a clean enhanced image to form a training data pair.
S1 specifically comprises:
s11, screening the acquired images;
s12, clipping the screened image to obtain a preliminary noise image noise_patch;
s13, performing noise reduction processing on the noise image noise_patch obtained in the S12 to obtain a clean image gt_patch;
s14, performing image enhancement on the clean image gt_patch to obtain a clean enhanced image gt_patch_enhancement;
s15, performing image blurring processing on the clean image gt_patch obtained in S13 to obtain a blurred image gt_patch_blur;
s16, superimposing noise on the blurred image gt_patch_blur obtained in S15 to obtain a blurred noise image noise_patch_new;
s17, combining the blurred noise image noise_patch_new obtained in S16 with the clean enhanced image gt_patch_enhancement obtained in S14 into a training data pair for model training.
As a specific embodiment of the present application, in S11, when a camera is used to acquire images of a specific scene, the acquired images are noisy because of the acquisition device, the environment and other factors. A sharply focused image shows larger gray-level variation than a blurred one, so the variance function can be used as an evaluation function to preliminarily screen the acquired noisy images: images whose sharpness exceeds a predetermined value are retained, blurred images that do not meet the requirement are removed, and the noise image noise is preliminarily obtained.
In this embodiment, the evaluation function may be set to D(f) = Σx Σy (f(x, y) − μ)², where x and y are pixel coordinates, μ is the image mean, and f(x, y) is the pixel value at point (x, y).
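As an illustration of this screening step (not part of the application; the threshold value is an assumed example), a minimal Python sketch is:

```python
import numpy as np

def variance_sharpness(img: np.ndarray) -> float:
    """Variance-based evaluation function D(f) = sum over x, y of (f(x, y) - mu)^2."""
    img = img.astype(np.float64)
    mu = img.mean()                       # image mean value
    return float(np.sum((img - mu) ** 2))

def screen_images(images, threshold=1e6):
    """Keep only images whose variance score exceeds a predetermined (assumed) threshold."""
    return [img for img in images if variance_sharpness(img) > threshold]
```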
In S12, the screened noise image noise may be cropped according to actual needs; for example, it may be cropped into 64×64-pixel noise images noise_patch.
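A minimal sketch of this cropping, assuming non-overlapping 64×64 windows (the stride is not specified in the application):

```python
import numpy as np

def crop_noise_patches(noise_img: np.ndarray, size: int = 64, stride: int = 64):
    """Slide a size x size window over the noise image and collect noise_patch crops."""
    h, w = noise_img.shape[:2]
    return [noise_img[t:t + size, l:l + size]
            for t in range(0, h - size + 1, stride)
            for l in range(0, w - size + 1, stride)]
```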
In S13, a BM3D (block-matching 3D) algorithm or an NLM (non-local means) algorithm may be used to denoise the noise image noise_patch obtained in S12 and obtain the basic clean image gt_patch.
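For this step, a sketch using OpenCV's non-local means implementation is shown below; the filter strength and window sizes are illustrative values, not taken from the application, and a BM3D implementation could be substituted as the application also allows:

```python
import cv2
import numpy as np

def denoise_patch(noise_patch: np.ndarray) -> np.ndarray:
    """Produce the basic clean patch gt_patch from a noisy patch with NLM denoising."""
    patch_u8 = np.clip(noise_patch, 0, 255).astype(np.uint8)
    return cv2.fastNlMeansDenoising(patch_u8, None, h=10,
                                    templateWindowSize=7, searchWindowSize=21)
```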
In S14, Laplacian image enhancement may be applied to the clean image gt_patch to obtain the enhanced clean image gt_patch_enhancement, which serves as the clean image for the final training.
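A sketch of Laplacian sharpening for this step; the weight is an assumed parameter that the application does not fix:

```python
import cv2
import numpy as np

def laplacian_enhance(gt_patch: np.ndarray, weight: float = 1.0) -> np.ndarray:
    """Sharpen by subtracting the Laplacian response: enhanced = f - weight * Laplacian(f)."""
    img = gt_patch.astype(np.float64)
    lap = cv2.Laplacian(img, cv2.CV_64F, ksize=3)
    return np.clip(img - weight * lap, 0, 255).astype(np.uint8)   # gt_patch_enhancement
```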
In S15, the clean image gt_patch is first blurred with a Gaussian blur; the blurred result is then downsampled and upsampled back to the original size with bilinear interpolation, finally yielding the blurred image gt_patch_blur.
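A sketch combining Gaussian blurring with bilinear down/up-sampling; kernel size, sigma and scale factor are assumed values:

```python
import cv2

def blur_patch(gt_patch, ksize=(5, 5), sigma=1.5, scale=2):
    """Gaussian blur, then bilinear downsample and upsample back to the original size."""
    h, w = gt_patch.shape[:2]
    blurred = cv2.GaussianBlur(gt_patch, ksize, sigma)
    small = cv2.resize(blurred, (w // scale, h // scale), interpolation=cv2.INTER_LINEAR)
    return cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)  # gt_patch_blur
```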
In S16, the clean image gt_patch obtained in S13 is first subtracted from the noise image noise_patch obtained in S12 to obtain the noise, and the noise is then superimposed on the blurred image gt_patch_blur obtained in S15 to obtain a new blurred noise image noise_patch_new.
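This noise transfer can be sketched directly from the description:

```python
import numpy as np

def make_blurred_noise_patch(noise_patch, gt_patch, gt_patch_blur):
    """noise = noise_patch - gt_patch; noise_patch_new = gt_patch_blur + noise."""
    noise = noise_patch.astype(np.float64) - gt_patch.astype(np.float64)
    new = gt_patch_blur.astype(np.float64) + noise
    return np.clip(new, 0, 255).astype(np.uint8)                  # noise_patch_new
```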
In S17, the blurred noise image noise_patch_new obtained in S16 and the clean enhanced image gt_patch_enhancement are combined into a training data pair for model training.
Further, after the training data pair is acquired, S2 is performed, and a training model is constructed and optimized according to the training data pair.
In the application, constructing a training model specifically comprises:
s21, setting a training model structure;
s22, pruning the training model;
s23, setting input and output of a training model according to the training data pair in S17, and optimizing training model parameters.
As an embodiment of the present application, in S21, to reduce model complexity, the training model may be configured as a five-layer UNet structure, as shown in fig. 2. In this embodiment, the channel outputs of the convolution layers (each layer corresponds to a group of three rectangular boxes in the figure) may be set to 16, 32, 64, 32 and 16, with a 3×3 convolution kernel; reducing the number of channels and the number of network layers greatly reduces the parameter count and improves the runtime performance of the method.
The BN (batch normalization) layers of a training model mainly serve to normalize and contrast-stretch the data inside the model. Low-level vision processing does not require such contrast stretching, and the applicant found that BN layers bring no significant benefit for low-level vision tasks. Therefore, in S22, the BN layers are pruned from the training model; this improves the result and, because BN layers carry learnable parameters, removing them also greatly reduces the running time of the model and improves performance.
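A minimal PyTorch sketch of such a five-layer UNet with channel widths 16, 32, 64, 32, 16, 3×3 convolutions and no BN layers is given below; the exact block composition shown in fig. 2 is not specified in the text, so two convolutions per level and single-channel input are assumptions:

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU; no BatchNorm, since the BN layers are pruned.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class LightUNet(nn.Module):
    """Five-level UNet with channel outputs 16, 32, 64, 32, 16 that predicts a noise map."""
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.enc1 = conv_block(in_ch, 16)
        self.enc2 = conv_block(16, 32)
        self.mid = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)      # 32 skip + 32 upsampled channels in
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)      # 16 skip + 16 upsampled channels in
        self.head = nn.Conv2d(16, out_ch, 3, padding=1)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        m = self.mid(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(m), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                # predicted noise
```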
In S23, a residual scheme is used: the blurred noise image noise_patch_new in the training data pair of S17 is taken as the model input, and the final output of the model is taken as the noise; the difference between the input blurred noise image noise_patch_new and the output noise is taken as the expected clean image denoised. The clean enhanced image gt_patch_enhancement in the training data pair is compared with the expected clean image to optimize the training model parameter settings.
As shown in fig. 2, the output noise is subtracted from the input blurred noise image noise_patch_new to obtain the expected clean image denoised: denoised = noise_patch_new − noise.
In the optimization of the training model parameters, SSIM (structural similarity index) is used as the final loss function.
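A sketch of one residual training step with an SSIM loss; the third-party pytorch_msssim package is assumed as the SSIM implementation, and images are assumed to be normalized to [0, 1]:

```python
import torch
from pytorch_msssim import ssim   # assumed third-party SSIM implementation

def train_step(model, optimizer, noise_patch_new, gt_patch_enhancement):
    """Predict the noise, form denoised = input - noise, and minimize 1 - SSIM."""
    optimizer.zero_grad()
    pred_noise = model(noise_patch_new)
    denoised = noise_patch_new - pred_noise            # expected clean image
    loss = 1.0 - ssim(denoised, gt_patch_enhancement, data_range=1.0)
    loss.backward()
    optimizer.step()
    return loss.item()
```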
Further, in the application, after the optimized training model is obtained, S3 is executed: the noisy image is fed into the optimized training model to obtain the noise output by the model, and this output is subtracted from the input to obtain the image after noise reduction and image enhancement; alternatively, the noisy image is upsampled first and then fed into the optimized training model to obtain the noise output by the model, and this output is subtracted from the input to obtain the image after noise reduction and image super-resolution reconstruction.
As a specific embodiment of the present application, the up-sampling amplification process may be performed on the noisy image by using a bilinear interpolation method.
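The two inference paths of S3 can then be sketched as follows (a scale factor of 2 is an assumed example for the super-resolution branch):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def enhance(model, noisy):
    """Noise reduction + image enhancement: subtract the predicted noise from the input."""
    return noisy - model(noisy)

@torch.no_grad()
def super_resolve(model, noisy, scale=2):
    """Bilinear upsampling first, then the same residual subtraction."""
    up = F.interpolate(noisy, scale_factor=scale, mode="bilinear", align_corners=False)
    return up - model(up)
```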
The application performs denoising and deblurring on low-quality blurred images caused by optical, atmospheric, artificial and technical factors. It addresses the loss of image detail caused by denoising under complex conditions such as low illumination and heavy noise, the residual noise left by image enhancement under such conditions, the demand for training data in specific tasks, and the performance requirements placed on deep learning in practical applications.
In S1, the application performs noise reduction, image enhancement, image blurring and up/down sampling, so that the trained model can achieve noise reduction with image enhancement or noise reduction with super-resolution reconstruction. The data processing steps are general, can be used with different image acquisition setups, and are highly robust.
The model provided by the application is general: noise reduction, image enhancement and image super-resolution reconstruction are realized within a single model, and the model is lightweight. A lightweight model lowers the hardware requirements and the project cost; it also runs fast, giving a good user experience. With a lightweight model and general data processing, the method is suitable for tasks on different data acquisition devices.
For further understanding of the present application, the following describes the noisy fast image enhancement and super-resolution reconstruction method of the present application in detail with reference to fig. 1 to 4.
As shown in fig. 1 to 4, a method for fast image enhancement and super-resolution reconstruction with noise according to an embodiment of the present application includes the following steps.
S11, screening the acquired images;
s12, clipping the screened image to obtain a preliminary noise image noise_patch;
s13, performing noise reduction processing on the noise image noise_patch obtained in the S12 to obtain a clean image gt_patch;
s14, performing image enhancement on the clean image gt_patch to obtain a clean enhanced image gt_patch_enhancement;
s15, performing image blurring processing on the clean image gt_patch obtained in S13 to obtain a blurred image gt_patch_blur;
s16, superimposing noise on the blurred image gt_patch_blur obtained in S15 to obtain a blurred noise image noise_patch_new;
s17, combining the blurred noise image noise_patch_new obtained in S16 with the clean enhanced image gt_patch_enhancement obtained in S14 into a training data pair for model training.
S21, setting the training model structure to a five-layer UNet;
s22, pruning the BN layers from the training model;
s23, taking the blurred noise image noise_patch_new in the training data pair of S17 as the model input, taking the final output of the model as the noise, and taking the difference between the input blurred noise image noise_patch_new and the output noise as the expected clean image denoised; the clean enhanced image gt_patch_enhancement in the training data pair is compared with the expected clean image, and the training model parameter settings are optimized.
S3, taking the noisy image as the input of the optimized training model, obtaining the noise output by the model, and subtracting this output from the input to obtain the image after noise reduction and image enhancement, as shown in FIG. 3; or taking the noisy image, after upsampling, as the input of the optimized training model, obtaining the noise output by the model, and subtracting this output from the input to obtain the image after noise reduction and image super-resolution reconstruction, as shown in FIG. 4.
As can be seen from fig. 3 and fig. 4, the method of the present application can use the same model to achieve noise reduction with image enhancement or noise reduction with super-resolution reconstruction; the image processing effect is evident and meets the usage requirements.
In summary, the present application provides a noise-containing rapid image enhancement and super-resolution reconstruction method: the image is processed to obtain a noisy blurred image and a clean enhanced image that form a training data pair; a training model is constructed and optimized from the training data pair; and the noisy image, either directly or after upsampling, is fed into the optimized training model to obtain the noise output by the model, which is subtracted from the input to complete noise reduction with image enhancement or noise reduction with image super-resolution reconstruction. The processing steps for building the training data pairs are general and can be used with different image acquisition and processing setups; the training model achieves noise reduction with image enhancement or noise reduction with super-resolution reconstruction while remaining lightweight.
The above description is only of the preferred embodiments of the present application and is not intended to limit the present application, but various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. A noise-containing rapid image enhancement and super-resolution reconstruction method, characterized by comprising the following steps:
s1, processing an acquired image to acquire a blurred image with noise and a clean enhanced image to form a training data pair;
s2, constructing and optimizing a training model according to the training data pair;
s3, taking the noisy image as the input of the optimized training model, obtaining the noise output by the model, and subtracting this output from the input to obtain the image after noise reduction and image enhancement; or taking the noisy image, after upsampling, as the input of the optimized training model, obtaining the noise output by the model, and subtracting this output from the input to obtain the image after noise reduction and image super-resolution reconstruction;
wherein, S1 specifically includes:
s11, screening the acquired images;
s12, clipping the screened image to obtain a preliminary noise image noise_patch;
s13, carrying out noise reduction processing on the noise image noise_patch obtained in the S12 to obtain a clean image gt_patch;
s14, performing image enhancement on the clean image gt_patch to obtain a clean enhanced image gt_patch_enhancement;
s15, performing image blurring processing on the clean image gt_patch obtained in S13 to obtain a blurred image gt_patch_blur;
s16, superimposing noise on the blurred image gt_patch_blur obtained in S15 to obtain a blurred noise image noise_patch_new;
s17, combining the blurred noise image noise_patch_new obtained in S16 with the clean enhanced image gt_patch_enhancement obtained in S14 into a training data pair for model training.
2. The noise-containing rapid image enhancement and super-resolution reconstruction method according to claim 1, wherein in S11, the acquired noisy images are preliminarily screened using a variance function as the evaluation function, images whose sharpness exceeds a predetermined sharpness value are retained, blurred images that do not meet the requirement are removed, and the noise image noise is preliminarily obtained.
3. The noise-containing rapid image enhancement and super-resolution reconstruction method according to claim 1, wherein in S13, a BM3D algorithm or an NLM (non-local means) algorithm is used to denoise the noise image noise_patch obtained in S12 and obtain a basic clean image gt_patch.
4. The noise-containing rapid image enhancement and super-resolution reconstruction method according to claim 1, wherein in S14, Laplacian image enhancement is applied to the clean image gt_patch to obtain the enhanced clean image gt_patch_enhancement as the clean image for the final training.
5. The noise-containing rapid image enhancement and super-resolution reconstruction method according to claim 4, wherein in S15, the clean image gt_patch is first blurred with a Gaussian blur, the blurred result is then downsampled and upsampled back to the original size with bilinear interpolation, and the blurred image gt_patch_blur is finally obtained.
6. The noise-containing rapid image enhancement and super-resolution reconstruction method according to claim 1, wherein in S16, the clean image gt_patch obtained in S13 is first subtracted from the noise image noise_patch obtained in S12 to obtain the noise, and the noise is then superimposed on the blurred image gt_patch_blur obtained in S15 to obtain a new blurred noise image noise_patch_new.
7. The noise-containing rapid image enhancement and super-resolution reconstruction method according to claim 1, wherein in S17, the blurred noise image noise_patch_new obtained in S16 and the clean enhanced image gt_patch_enhancement are combined into a training data pair.
8. The noise-containing rapid image enhancement and super-resolution reconstruction method according to any one of claims 1 to 7, wherein constructing the training model specifically comprises:
s21, setting a training model structure;
s22, pruning the training model;
s23, setting input and output of a training model according to the training data pair in S17, and optimizing training model parameters.
9. The noise-containing rapid image enhancement and super-resolution reconstruction method according to claim 8, wherein in S21, the training model is set to a five-layer UNet structure, and in S22, the BN layers in the training model are pruned.
10. The noise-containing rapid image enhancement and super-resolution reconstruction method according to claim 8 or 9, wherein in S23, the blurred noise image noise_patch_new in the training data pair of S17 is taken as the model input, the final output of the model is taken as the noise, and the difference between the input blurred noise image noise_patch_new and the output noise is taken as the expected clean image denoised; the clean enhanced image gt_patch_enhancement in the training data pair of S17 is compared with the expected clean image, and SSIM is used as the final loss function to optimize the training model parameter settings.
CN202310456381.3A 2023-04-25 2023-04-25 Noise-containing rapid image enhancement and super-resolution reconstruction method Pending CN116721015A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310456381.3A CN116721015A (en) 2023-04-25 2023-04-25 Noise-containing rapid image enhancement and super-resolution reconstruction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310456381.3A CN116721015A (en) 2023-04-25 2023-04-25 Noise-containing rapid image enhancement and super-resolution reconstruction method

Publications (1)

Publication Number Publication Date
CN116721015A true CN116721015A (en) 2023-09-08

Family

ID=87863809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310456381.3A Pending CN116721015A (en) 2023-04-25 2023-04-25 Noise-containing rapid image enhancement and super-resolution reconstruction method

Country Status (1)

Country Link
CN (1) CN116721015A (en)


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination