CN111612709A - Image noise reduction method based on DnCNNs improvement - Google Patents
Image noise reduction method based on DnCNNs improvement
- Publication number
- CN111612709A (application CN202010395977.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- channel
- network
- DnCNNs
- convolution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
Abstract
An improved image noise reduction method based on DnCNNs adopts an improved DnCNNs network structure that introduces a DenseNet-style residual network into the original network. All layers are directly connected: in this architecture, the input of each layer is the concatenation of the feature maps of all preceding layers, and its output is passed to every subsequent layer. Aggregating feature maps by deep concatenation increases the effective network depth while retaining low-level features such as pixel-level features. Multi-scale structures are usually adopted to obtain a larger receptive field; the simplest approach is to use convolution kernels of different sizes, but large kernels increase the parameter count and the computational cost. Because the multi-scale structure of the original network incurs a large amount of computation, the new network structure instead stacks small convolution kernels over multiple convolutions, which runs fast, is simple to operate, and requires only fixed parameter values to be set.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to an improved image noise reduction method based on DnCNNs.
Background
With technological progress, new imaging technologies are becoming widespread and people's everyday demands on image quality keep rising. Images captured under low-light conditions, such as on cloudy days or at night, suffer from heavy noise and blurred details, so the noise must be removed to optimize the image. Conventional basic methods process the frequency domain or the spatial domain, or use prior knowledge to identify the noise type before processing; such methods only work for image noise reduction in specific scenes and are not effective in complex environments. In recent years, image noise reduction based on deep learning has advanced and has shown remarkable performance in high-level image-understanding tasks such as image classification and object detection. Image denoising based on deep-learning algorithms has therefore become a research hotspot. The conventional approach is to train directly on pairs of noisy and clean images from a training set to obtain weight parameters, but the visual quality of the restored images still leaves room for improvement. A new image noise reduction technology is therefore necessary.
Disclosure of Invention
To overcome the shortcomings of the existing technology, the invention provides an improved image denoising method based on DnCNNs, which can denoise images captured under dim-light conditions, significantly reduce image noise, improve the image signal-to-noise ratio, and improve image recognizability. The invention adopts the following technical scheme to solve these technical problems:
An improved image noise reduction method based on DnCNNs, characterized by comprising the following steps:
a) performing color channel conversion on a color picture input into a computer, converting it into RGB color channels, and separating the RGB channels to obtain an R channel image, a G channel image and a B channel image;
b) inputting the R, G and B channel images into a DnCNNs network that uses 3 × 3 convolution kernels; the network performs L convolution operations on the three channel images to extract relevant feature data, each convolution operation yielding one layer for a total of L layers, and obtains feature maps of depth 64 by 3 × 3 convolutions followed by ReLU;
c) introducing a DenseNet residual network into the DnCNNs network of step b), connecting the L layers of step b) together with the DenseNet residual network, the input of each layer consisting of the feature maps of all preceding layers, to obtain a feature image of size 2w × 2h × 64, where w is the width of the picture and h is the height of the picture;
d) obtaining a feature map I_est of size 4w × 4h × c by convolving the 2w × 2h × 64 feature image with 3 × 3 convolution kernels followed by ReLU, where c is the image depth;
e) computing a loss function L_E between I_est and the original high-resolution image I_HR, and obtaining an optimized image by minimizing the loss function;
f) synthesizing the R, G and B channels of the optimized image into an RGB image.
Further, the DnCNNs network in step b) performs convolution on the R channel image, the G channel image and the B channel image using 3 × 3 convolution kernels with a stride of 2.
The invention has the following beneficial effects: by adopting the improved DnCNNs network structure, a DenseNet-style residual network is introduced into the original network. All layers are directly connected: the input of each layer is the concatenation of the feature maps of all preceding layers, and its output is passed to every subsequent layer. Aggregating feature maps by deep concatenation increases the effective network depth while retaining low-level features such as pixel-level features. Multi-scale structures are usually adopted to obtain a larger receptive field; the simplest approach is to use convolution kernels of different sizes, but large kernels increase the parameter count and the computational cost. Because the multi-scale structure of the original network incurs a large amount of computation, the new network structure instead stacks small convolution kernels over multiple convolutions, which runs fast, is simple to operate, and requires only fixed parameter values to be set.
Drawings
Fig. 1 is a schematic block diagram of the present invention.
Detailed Description
The invention is further described below with reference to fig. 1.
An improved image noise reduction method based on DnCNNs is characterized by comprising the following steps:
a) Color channel conversion is performed on a color picture input into the computer; the image is separated into RGB color channels to obtain an R channel image, a G channel image and a B channel image.
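The channel separation of step a) and the recombination of step f) can be sketched in a few lines. This is a minimal NumPy illustration, not part of the patent disclosure; the function names are illustrative:

```python
import numpy as np

def split_channels(img):
    """Split an H x W x 3 RGB image into its R, G and B channel images."""
    return img[..., 0], img[..., 1], img[..., 2]

def merge_channels(r, g, b):
    """Recombine three H x W channel images into a single H x W x 3 RGB image."""
    return np.stack([r, g, b], axis=-1)
```

Splitting followed by merging is lossless, so each channel can be denoised independently and the result reassembled without further conversion.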
b) The R, G and B channel images are input into a DnCNNs network that uses 3 × 3 convolution kernels. The network performs L convolution operations on the three channel images to extract relevant feature data; each convolution operation yields one layer, for L layers in total, and feature maps of depth 64 are obtained by 3 × 3 convolutions followed by ReLU.
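The basic building block of step b), a 3 × 3 convolution followed by ReLU applied to one channel image, can be sketched as below. This is a naive NumPy reference (zero padding, stride 1) for illustration only; a real implementation would use a deep-learning framework:

```python
import numpy as np

def conv3x3_relu(x, kernels):
    """Apply a bank of 3 x 3 kernels (zero padding, stride 1) to a single-channel
    image, followed by ReLU; returns an H x W x K stack of feature maps."""
    h, w = x.shape
    padded = np.pad(x, 1)                      # zero-pad so output keeps H x W
    out = np.zeros((h, w, len(kernels)))
    for k, ker in enumerate(kernels):
        for i in range(h):
            for j in range(w):
                out[i, j, k] = np.sum(padded[i:i + 3, j:j + 3] * ker)
    return np.maximum(out, 0.0)                # ReLU non-linearity
```

Stacking L such layers, with 64 kernels per layer as in the text, yields the depth-64 feature maps that the dense connections of step c) then aggregate.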
c) A DenseNet residual network is introduced into the DnCNNs network of step b); the L layers of step b) are connected together with the DenseNet residual network, the input of each layer consisting of the feature maps of all preceding layers, yielding a feature image of size 2w × 2h × 64, where w is the width of the picture and h is the height of the picture. An improved residual network, DenseNet, is introduced: the original DnCNNs network is assumed to have L layers, whereas in DenseNet there are L(L+1)/2 connections, which can be understood as each layer taking as input the outputs of all preceding layers. The DenseNet principle is given by
x_l = H_l([x_0, x_1, …, x_{l-1}]),
where [x_0, x_1, …, x_{l-1}] denotes the concatenation of the feature maps output by layers 0 through l-1. Concatenation here means merging along the channel dimension, and H_l comprises batch normalization (BN), ReLU and a 3 × 3 convolution.
The network is built by stacking dense blocks, and establishing forward skip connections between any two layers within a block is the core idea. A block contains several dense layers, each in turn comprising three operations: BN, ReLU and convolution. A 1 × 1 convolution is placed in front as a bottleneck to reduce the number of feature maps and thereby reduce computation. The connection layers between different dense blocks are called transition layers; each transition layer comprises three operations: BN, a 1 × 1 convolution and 2 × 2 average pooling.
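The dense connectivity x_l = H_l([x_0, …, x_{l-1}]) can be sketched as a forward pass in which every layer sees the channel-wise concatenation of all earlier outputs. This is an illustrative NumPy sketch, not the patent's implementation; each H_l is passed in as an arbitrary function:

```python
import numpy as np

def dense_block(x0, layers):
    """DenseNet-style forward pass: layer l computes x_l = H_l([x_0, ..., x_{l-1}]),
    where [...] is channel-wise concatenation of all previous feature maps."""
    features = [x0]
    for H in layers:
        xl = H(np.concatenate(features, axis=-1))  # input: all earlier outputs
        features.append(xl)
    return np.concatenate(features, axis=-1)       # block output: everything stacked
```

With L layers, the loop realizes the L(L+1)/2 connections mentioned above, since layer l is wired to all l of its predecessors.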
d) A feature map I_est of size 4w × 4h × c is obtained by convolving the 2w × 2h × 64 feature image with 3 × 3 convolution kernels followed by ReLU, where c is the image depth.
e) The parameters in the network are initialized with Xavier initialization; the learning target of the network is the difference between the super-resolution image and a linear interpolation of the input low-resolution image. For the loss function we choose a pixel-wise loss to emphasize the match of each corresponding pixel between the two images. Pictures trained with a pixel-wise loss are typically smoother, and even if the output picture has a high PSNR its visual effect is not necessarily prominent; image sharpening is introduced at a later stage to counteract over-smoothing. The loss function L_E is computed between I_est and the original high-resolution image I_HR, and an optimized image is obtained by minimizing it. L_E estimates the degree of inconsistency between the model's prediction f(x) and the true value Y; it is a non-negative real-valued function, and the smaller the loss, the better the robustness of the model.
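The patent's formula image for L_E is not reproduced in this text, so the sketch below assumes mean absolute error, one common pixel-wise loss; the actual formula in the patent may differ:

```python
import numpy as np

def pixelwise_loss(i_hr, i_est):
    """Pixel-wise loss L_E between reference I_HR and estimate I_est.
    Mean absolute error is assumed here, since the patent's formula image
    is not reproduced in the text."""
    return float(np.mean(np.abs(i_hr.astype(float) - i_est.astype(float))))
```

The loss is zero when the two images match pixel for pixel and grows with the average per-pixel discrepancy, which matches the text's description of L_E as a non-negative measure of inconsistency.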
f) The R channel, the G channel and the B channel of the optimized image are synthesized into an RGB image.
First, color channel conversion is performed on the color picture and the image is separated into RGB color channels. After conversion, the images in the three channels are each preprocessed with the improved CNN network, and the three channel images are optimized and solved separately, making full use of the learning capacity of the network. Once the results for the three image channels are obtained, they are fused back into the RGB color channels to obtain the processed image.
At present, deep networks face the following problems: 1. vanishing gradients: as the network depth increases, the vanishing-gradient problem becomes more severe and can cause training to collapse; 2. inability to fully exploit low-level features: in a CNN, the early convolutional layers typically represent low-level features such as pixel-level features, while the later convolutional layers represent high-level features such as semantic features. Compared with the prior art, the invention has the following advantages: 1. in prior image denoising, added Gaussian noise is commonly used to simulate complex noise distributions, which inevitably differs from the actual noise distribution and degrades performance on real noisy images; the color-channel-separation approach adopted here makes noise removal more convenient and effective for both real captured images and artificially noised simulations. 2. The improved DnCNNs network structure: (1) a DenseNet-style residual network is introduced into the original network; all layers are directly connected, the input of each layer is the concatenation of the feature maps of all preceding layers, and its output is passed to every subsequent layer; aggregating feature maps by deep concatenation increases the effective network depth while retaining low-level features such as pixel-level features; (2) multi-scale structures are usually adopted to obtain a larger receptive field, and the simplest approach is to use convolution kernels of different sizes, but large kernels increase the parameter count and the computational cost.
Because the multi-scale structure of the original network incurs a large amount of computation, the new network structure instead stacks small convolution kernels over multiple convolutions, which runs fast, is simple to operate, and requires only fixed parameter values to be set.
Preferably, the DnCNNs network in step b) performs convolution on the R channel image, the G channel image and the B channel image using 3 × 3 convolution kernels with a stride of 2.
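A 3 × 3 convolution with stride 2 roughly halves each spatial dimension; the standard output-size arithmetic can be checked with a one-liner. The padding value below is an assumption (the patent does not state it):

```python
def conv_out_size(n, kernel=3, stride=2, padding=1):
    """Spatial output size of a convolution: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * padding - kernel) // stride + 1
```

For example, with padding 1 a 64-pixel dimension maps to 32, so two such layers reduce a 2w × 2h feature image by a factor of four per side.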
The above description is only a preferred embodiment of the present invention, and is only used to illustrate the technical solutions of the present invention, and not to limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Claims (2)
1. An improved image noise reduction method based on DnCNNs is characterized by comprising the following steps:
a) carrying out color channel conversion on a color picture input into a computer, converting the color picture into an RGB color channel, and separating the RGB color channel to obtain an R channel image, a G channel image and a B channel image;
b) inputting the R channel image, the G channel image and the B channel image into a DnCNNs network that uses 3 × 3 convolution kernels, the DnCNNs network performing L convolution operations on the R, G and B channel images to extract relevant feature data, each convolution operation yielding one layer for a total of L layers, and obtaining feature maps of depth 64 by 3 × 3 convolutions followed by ReLU;
c) introducing a DenseNet residual network into the DnCNNs network of step b), connecting the L layers of step b) together with the DenseNet residual network, the input of each layer consisting of the feature maps of all preceding layers, to obtain a feature image of size 2w × 2h × 64, where w is the width of the picture and h is the height of the picture;
d) obtaining a feature map I_est of size 4w × 4h × c by convolving the 2w × 2h × 64 feature image with 3 × 3 convolution kernels followed by ReLU, where c is the image depth;
e) computing a loss function L_E between I_est and the original high-resolution image I_HR, and obtaining an optimized image by minimizing the loss function;
f) synthesizing the R channel, the G channel and the B channel of the optimized image into an RGB image.
2. The improved image noise reduction method based on DnCNNs of claim 1, characterized in that the DnCNNs network in step b) performs convolution on the R channel image, the G channel image and the B channel image using 3 × 3 convolution kernels with a stride of 2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010395977.3A CN111612709B (en) | 2020-05-11 | 2020-05-11 | Image noise reduction method based on DnCNNs improvement |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010395977.3A CN111612709B (en) | 2020-05-11 | 2020-05-11 | Image noise reduction method based on DnCNNs improvement |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111612709A true CN111612709A (en) | 2020-09-01 |
CN111612709B CN111612709B (en) | 2023-03-28 |
Family
ID=72197930
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010395977.3A Active CN111612709B (en) | 2020-05-11 | 2020-05-11 | Image noise reduction method based on DnCNNs improvement |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111612709B (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180293711A1 (en) * | 2017-04-06 | 2018-10-11 | Disney Enterprises, Inc. | Kernel-predicting convolutional neural networks for denoising |
CN109255758A (en) * | 2018-07-13 | 2019-01-22 | 杭州电子科技大学 | Image enchancing method based on full 1*1 convolutional neural networks |
CN110599409A (en) * | 2019-08-01 | 2019-12-20 | 西安理工大学 | Convolutional neural network image denoising method based on multi-scale convolutional groups and parallel |
CN110648292A (en) * | 2019-09-11 | 2020-01-03 | 昆明理工大学 | High-noise image denoising method based on deep convolutional network |
CN110782414A (en) * | 2019-10-30 | 2020-02-11 | 福州大学 | Dark light image denoising method based on dense connection convolution |
Non-Patent Citations (2)
Title |
---|
Liu Chao et al., "Accelerated algorithm for image super-resolution convolutional neural networks", Journal of National University of Defense Technology * |
Zhang Yungang et al., "Low-dose CT image denoising method based on convolutional neural networks", Acta Optica Sinica * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20230303 Address after: 250000 building S02, No. 1036, Gaoxin Inspur Road, Jinan, Shandong Applicant after: Shandong Inspur Scientific Research Institute Co.,Ltd. Address before: 250104 1st floor, R & D building, No. 2877, Suncun Town, Licheng District, Jinan City, Shandong Province Applicant before: JINAN INSPUR HIGH-TECH TECHNOLOGY DEVELOPMENT Co.,Ltd. |
|
GR01 | Patent grant | ||
GR01 | Patent grant |