CN111612709B - Image noise reduction method based on DnCNNs improvement - Google Patents

Image noise reduction method based on DnCNNs improvement

Info

Publication number
CN111612709B
Authority
CN
China
Prior art keywords
image
channel
network
dncnns
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010395977.3A
Other languages
Chinese (zh)
Other versions
CN111612709A (en)
Inventor
凌泽乐
高明
金长新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Inspur Scientific Research Institute Co Ltd
Original Assignee
Shandong Inspur Scientific Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Inspur Scientific Research Institute Co Ltd filed Critical Shandong Inspur Scientific Research Institute Co Ltd
Priority to CN202010395977.3A priority Critical patent/CN111612709B/en
Publication of CN111612709A publication Critical patent/CN111612709A/en
Application granted granted Critical
Publication of CN111612709B publication Critical patent/CN111612709B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

An image noise reduction method based on an improved DnCNNs adopts a modified DnCNNs network structure which, compared with the original network, introduces a DenseNet-style improved residual network in which all layers are directly connected: the input of each layer consists of the feature maps of all preceding layers, and its output is passed on to every subsequent layer. Feature maps are aggregated by deep concatenation, which increases the effective network depth while retaining low-level features such as pixel-level features. Multi-scale structures are usually intended to obtain a larger receptive field; the simplest way is to use different convolution kernels, but large kernels increase the number of parameters and the computation. The original network adopts a multi-scale structure, which tends to incur a very large computational cost, so the new network structure instead uses small convolution kernels applied in repeated convolutions. The method runs fast, is simple to operate, and only fixed parameter values need to be set.

Description

Image noise reduction method based on DnCNNs improvement
Technical Field
The invention relates to the technical field of image processing, in particular to an improved image noise reduction method based on DnCNNs.
Background
With technological progress, new image technologies are gradually becoming widespread, and the requirements on images in daily life are ever higher. Images shot under low-light conditions, for example on cloudy days or at night, suffer from heavy noise, blurred details and similar problems, so the noise must be removed to optimize the image. Conventional basic methods process the frequency domain and the spatial domain, or use prior knowledge to determine the noise type before processing; such methods are only suitable for image noise reduction in specific scenes, and their effect in complex environments is limited. In recent years, image noise reduction based on deep learning has made remarkable progress, and deep learning performs notably well in high-level image understanding tasks such as image classification and object detection. Image denoising based on deep learning algorithms has therefore become a research hotspot. The conventional approach is to train directly on the noisy images and clean images of an image training set to obtain the weight parameters, but the visual effect and image quality after restoration still leave room for improvement. A new image noise reduction technique is therefore necessary.
Disclosure of Invention
In order to overcome the shortcomings of the above technology, the invention provides an image noise reduction method based on an improved DnCNNs, which can reduce the noise of an image captured under dim-light conditions, significantly suppress image noise, improve the signal-to-noise ratio of the image and improve image recognizability. The invention adopts the following technical scheme to overcome the above technical problems:
an improved image noise reduction method based on DnCNNs is characterized by comprising the following steps:
a) Converting the color channels of a color picture input into the computer into RGB color channels, and separating the RGB color channels to obtain an R channel image, a G channel image and a B channel image;
b) Inputting the R channel image, the G channel image and the B channel image into the DnCNNs network, which uses 3×3 convolution kernels; the DnCNNs network performs L convolution operations on the R channel image, the G channel image and the B channel image to extract the relevant feature data, each convolution operation yielding one layer so that L layers are obtained, and a feature map of depth 64 is obtained through ReLU and a 3×3 convolution;
c) Introducing a DenseNet residual network into the DnCNNs network of step b), connecting the L layers of step b) together with the DenseNet residual network, the input of each layer consisting of the feature maps of all preceding layers, to obtain a feature image of size 2w×2h×64, where w is the width of the picture and h is the height of the picture;
d) Convolving the 2w×2h×64 feature image with ReLU and a 3×3 convolution kernel to obtain a 4w×4h×c feature map, denoted I_est, where c is the image depth;
e) Computing the loss function L_E by the pixel-wise loss formula (given as an image in the original publication), where I_HR is the original high-resolution image, and obtaining the optimized image by calculating the loss function;
f) Synthesizing the R channel, the G channel and the B channel of the optimized image into an RGB image.
Further, the DnCNNs network in step b) performs the convolution processing on the R channel image, the G channel image and the B channel image using 3×3 convolution kernels with a stride of 2.
The invention has the following beneficial effects: by adopting the improved DnCNNs network structure, a DenseNet-style improved residual network is introduced compared with the original network, in which all layers are directly connected: the input of each layer consists of the feature maps of all preceding layers, and its output is passed on to every subsequent layer. Feature maps are aggregated by deep concatenation, which increases the effective network depth while introducing low-level features such as pixel-level features. Multi-scale structures are usually intended to obtain a larger receptive field; the simplest way is to use different convolution kernels, but large kernels increase the number of parameters and the computation. The original network adopts a multi-scale structure, which tends to incur a very large computational cost, so the new network structure instead uses small convolution kernels applied in repeated convolutions. The method runs fast, is simple to operate, and only fixed parameter values need to be set.
Drawings
Fig. 1 is a schematic block diagram of the present invention.
Detailed Description
The invention is further illustrated with reference to fig. 1.
An improved image noise reduction method based on DnCNNs is characterized by comprising the following steps:
a) Color channel conversion is carried out on the color picture input into the computer to convert it into RGB color channels, and the RGB color channels are separated to obtain an R channel image, a G channel image and a B channel image.
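As a minimal illustration of step a), the sketch below splits an input picture into its R, G and B channel images with Pillow and NumPy. The file name and the normalization to [0, 1] are assumptions for illustration, not details taken from the patent.

```python
import numpy as np
from PIL import Image

# Load the input color picture and make sure it is in the RGB color space.
img = Image.open("input.png").convert("RGB")      # hypothetical file name
rgb = np.asarray(img, dtype=np.float32) / 255.0   # shape (h, w, 3), values in [0, 1]

# Separate the RGB color channels into three single-channel images.
r_channel = rgb[:, :, 0]  # R channel image, shape (h, w)
g_channel = rgb[:, :, 1]  # G channel image, shape (h, w)
b_channel = rgb[:, :, 2]  # B channel image, shape (h, w)
```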
b) Inputting the R channel image, the G channel image and the B channel image into the DnCNNs network, which uses 3×3 convolution kernels; the DnCNNs network performs L convolution operations on the R channel image, the G channel image and the B channel image to extract the relevant feature data, each convolution operation yielding one layer so that L layers are obtained, and a feature map of depth 64 is obtained through ReLU and a 3×3 convolution.
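A rough PyTorch sketch of the convolution stack described in step b) is given below. The layer count, the padding and the unit stride are illustrative simplifications (the preferred embodiment mentions a stride of 2); they are assumptions, not values fixed by the patent.

```python
import torch
import torch.nn as nn

class DnCNNBackbone(nn.Module):
    """L repeated 3x3 convolution + ReLU layers, each producing 64 feature maps."""
    def __init__(self, in_channels=1, num_layers=17, features=64):
        super().__init__()
        layers = [nn.Conv2d(in_channels, features, kernel_size=3, padding=1),
                  nn.ReLU(inplace=True)]
        for _ in range(num_layers - 1):
            layers += [nn.Conv2d(features, features, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):        # x: one color channel, shape (N, 1, h, w)
        return self.body(x)      # feature map of depth 64

# Each separated channel image is processed independently, e.g. the R channel:
r_features = DnCNNBackbone()(torch.randn(1, 1, 64, 64))
```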
c) Introducing a DenseNet residual network into the DnCNNs network of step b), connecting the L layers of step b) together with the DenseNet residual network, the input of each layer consisting of the feature maps of all preceding layers, to obtain a feature image of size 2w×2h×64, where w is the width of the picture and h is the height of the picture. The improved residual network DenseNet is introduced as follows: the original DnCNNs network is assumed to have L layers, but in DenseNet there are L(L+1)/2 connections, which can be understood as each layer taking its input from the outputs of all preceding layers. The DenseNet principle formula is as follows:
x_l = H_l([x_0, x_1, …, x_{l-1}]), where [x_0, x_1, …, x_{l-1}] denotes the concatenation of the output feature maps of layers 0 to l-1; concatenation here means merging along the channel dimension, and H_l comprises BN, ReLU and a 3×3 convolution.
The core idea is to build the network by stacking dense blocks and establishing forward skip connections between any two layers within a block. A block contains several dense layers, each of which in turn consists of three operations: BN, ReLU and Conv. The first three operations (BN, ReLU and a 1×1 convolution) act as a bottleneck that reduces the number of feature maps and thereby the amount of computation. The connection layers between different dense blocks are called transition layers; each transition layer comprises three operations: BN, a 1×1 convolution and 2×2 average pooling.
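The snippet below is a simplified PyTorch sketch of one dense layer with the bottleneck described above and of one transition layer. The growth rate and bottleneck width are illustrative assumptions, not values taken from the patent.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """BN-ReLU-1x1 conv bottleneck followed by BN-ReLU-3x3 conv; output is concatenated with the input."""
    def __init__(self, in_channels, growth_rate=32, bottleneck=4):
        super().__init__()
        inner = bottleneck * growth_rate
        self.layer = nn.Sequential(
            nn.BatchNorm2d(in_channels), nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, inner, kernel_size=1, bias=False),          # reduces feature maps
            nn.BatchNorm2d(inner), nn.ReLU(inplace=True),
            nn.Conv2d(inner, growth_rate, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, x):
        # The input of each layer is the concatenation of all previous feature maps.
        return torch.cat([x, self.layer(x)], dim=1)

class TransitionLayer(nn.Module):
    """BN, 1x1 convolution and 2x2 average pooling between dense blocks."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.layer = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),
            nn.AvgPool2d(kernel_size=2, stride=2),
        )

    def forward(self, x):
        return self.layer(x)
```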
d) Convolving the 2w×2h×64 feature image with ReLU and a 3×3 convolution kernel to obtain a 4w×4h×c feature map, denoted I_est, where c is the image depth.
e) The parameters of the network are initialized with Xavier initialization; the learning target is the difference between the super-resolution image and the linear interpolation of the input low-resolution image. For the loss function, a pixel-wise loss is chosen to emphasize the matching of each corresponding pixel between the two images. Images trained with a pixel-wise loss are usually smoother, so even when the output image has a high PSNR its visual effect is not especially prominent; image sharpening is therefore introduced at a later stage to counter over-smoothing. The loss function L_E is computed by the pixel-wise loss formula (given as an image in the original publication), where I_HR is the original high-resolution image, and the optimized image is obtained by calculating the loss function. L_E estimates the degree of inconsistency between the model prediction f(x) and the true value Y; it is a non-negative real-valued function, and the smaller the loss, the more robust the model.
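Since the exact loss formula survives only as an embedded image, the sketch below shows a pixel-wise mean squared error between I_est and I_HR as one common reading of the description. It is an assumption made for illustration, not the formula from the patent.

```python
import torch

def pixel_wise_loss(i_est: torch.Tensor, i_hr: torch.Tensor) -> torch.Tensor:
    """Mean squared error over every corresponding pixel of the two images."""
    return torch.mean((i_hr - i_est) ** 2)

# Example: L_E between a random estimate and a random reference image.
i_est = torch.rand(1, 3, 128, 128)
i_hr = torch.rand(1, 3, 128, 128)
print(pixel_wise_loss(i_est, i_hr))
```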
f) The R channel, the G channel and the B channel of the optimized image are synthesized into an RGB image.
First, color channel conversion is carried out on the color picture and the image is separated into its RGB color channels. After the conversion, the images in the three channels are each preprocessed and optimized separately with the improved CNN network, making full use of the learning capability of the network. After the results for the three image channels are obtained, the color channels are fused again to obtain the processed RGB image.
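Putting the steps together, the sketch below outlines the split-process-merge pipeline, assuming some per-channel denoising model is already available; the helper names and the clamping to [0, 1] are illustrative scaffolding rather than the patented implementation.

```python
import numpy as np
import torch

def denoise_channel(model: torch.nn.Module, channel: np.ndarray) -> np.ndarray:
    """Run one color channel (h, w) through a per-channel denoising network."""
    with torch.no_grad():
        x = torch.from_numpy(channel).float().unsqueeze(0).unsqueeze(0)  # (1, 1, h, w)
        y = model(x)
    return y.squeeze().clamp(0.0, 1.0).numpy()

def denoise_rgb(model: torch.nn.Module, rgb: np.ndarray) -> np.ndarray:
    """Split an RGB image (h, w, 3) into channels, denoise each one, and merge back."""
    channels = [denoise_channel(model, rgb[:, :, i]) for i in range(3)]
    return np.stack(channels, axis=-1)
```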
At present, deep networks face the following problems: 1. Vanishing gradients: as the depth of the network increases, the gradient vanishing problem becomes more severe and can cause training to collapse. 2. Low-level features cannot be fully exploited: in a CNN, the earlier convolutional layers tend to represent low-level features, such as pixel-level features, while the later convolutional layers tend to represent high-level features, such as semantic features. Compared with the prior art, the invention has the following advantages: 1. In existing image denoising, complex noise distributions are commonly simulated by adding Gaussian noise, which inevitably differs from the actual noise distribution and therefore reduces the effectiveness on real noisy images; by separating the color channels, noise removal becomes more convenient and effective both for really captured images and for images with artificially added simulated noise. 2. An improved DnCNNs network structure is adopted: (1) Compared with the original network, a DenseNet-style improved residual network is introduced in which all layers are directly connected: the input of each layer consists of the feature maps of all preceding layers, and its output is passed on to every subsequent layer. Feature maps are aggregated by deep concatenation, which increases the effective network depth while introducing low-level features such as pixel-level features. (2) Multi-scale structures are usually intended to obtain a larger receptive field; the simplest way is to use different convolution kernels, but large kernels increase the number of parameters and the computation. The original network adopts a multi-scale structure, which tends to incur a very large computational cost, so the new network structure instead uses small convolution kernels applied in repeated convolutions. The method runs fast, is simple to operate, and only fixed parameter values need to be set.
Preferably, the DnCNNs network in step b) performs the convolution processing on the R channel image, the G channel image and the B channel image using 3×3 convolution kernels with a stride of 2.
The above description is only a preferred embodiment of the present invention, and is only used to illustrate the technical solutions of the present invention, and not to limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (2)

1. An improved image noise reduction method based on DnCNNs is characterized by comprising the following steps:
a) Converting the color channels of a color picture input into the computer into RGB color channels, and separating the RGB color channels to obtain an R channel image, a G channel image and a B channel image;
b) Inputting the R channel image, the G channel image and the B channel image into a DnCNNs network, which uses 3×3 convolution kernels; the DnCNNs network performs L convolution operations on the R channel image, the G channel image and the B channel image to extract the relevant feature data, each convolution operation yielding one layer so that L layers are obtained, and a feature map of depth 64 is obtained through ReLU and a 3×3 convolution;
c) Introducing a DenseNet residual network into the DnCNNs network of step b), connecting the L layers of step b) together with the DenseNet residual network, the input of each layer consisting of the feature maps of all preceding layers, to obtain a feature image of size 2w×2h×64, where w is the width of the picture and h is the height of the picture;
d) Convolving the 2w×2h×64 feature image with ReLU and a 3×3 convolution kernel to obtain a 4w×4h×c feature map, denoted I_est, where c is the image depth;
e) Computing the loss function L_E by the pixel-wise loss formula (given as an image in the original publication), where I_HR is the original high-resolution image, and obtaining the optimized image by calculating the loss function;
f) Synthesizing the R channel, the G channel and the B channel of the optimized image into an RGB image.
2. The image noise reduction method based on DnCNNs improvement according to claim 1, characterized in that: the DnCNNs network in step b) performs the convolution processing on the R channel image, the G channel image and the B channel image using 3×3 convolution kernels with a stride of 2.
CN202010395977.3A 2020-05-11 2020-05-11 Image noise reduction method based on DnCNNs improvement Active CN111612709B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010395977.3A CN111612709B (en) 2020-05-11 2020-05-11 Image noise reduction method based on DnCNNs improvement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010395977.3A CN111612709B (en) 2020-05-11 2020-05-11 Image noise reduction method based on DnCNNs improvement

Publications (2)

Publication Number Publication Date
CN111612709A CN111612709A (en) 2020-09-01
CN111612709B true CN111612709B (en) 2023-03-28

Family

ID=72197930

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010395977.3A Active CN111612709B (en) 2020-05-11 2020-05-11 Image noise reduction method based on DnCNNs improvement

Country Status (1)

Country Link
CN (1) CN111612709B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109255758A (en) * 2018-07-13 2019-01-22 杭州电子科技大学 Image enchancing method based on full 1*1 convolutional neural networks
CN110599409A (en) * 2019-08-01 2019-12-20 西安理工大学 Convolutional neural network image denoising method based on multi-scale convolutional groups and parallel
CN110648292A (en) * 2019-09-11 2020-01-03 昆明理工大学 High-noise image denoising method based on deep convolutional network
CN110782414A (en) * 2019-10-30 2020-02-11 福州大学 Dark light image denoising method based on dense connection convolution

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10475165B2 (en) * 2017-04-06 2019-11-12 Disney Enterprises, Inc. Kernel-predicting convolutional neural networks for denoising

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109255758A (en) * 2018-07-13 2019-01-22 杭州电子科技大学 Image enchancing method based on full 1*1 convolutional neural networks
CN110599409A (en) * 2019-08-01 2019-12-20 西安理工大学 Convolutional neural network image denoising method based on multi-scale convolutional groups and parallel
CN110648292A (en) * 2019-09-11 2020-01-03 昆明理工大学 High-noise image denoising method based on deep convolutional network
CN110782414A (en) * 2019-10-30 2020-02-11 福州大学 Dark light image denoising method based on dense connection convolution

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Acceleration algorithm for image super-resolution convolutional neural networks; Liu Chao et al.; Journal of National University of Defense Technology; 2019-04-28 (No. 02); full text *
Low-dose CT image denoising method based on convolutional neural network; Zhang Yungang et al.; Acta Optica Sinica; 2017-12-08 (No. 04); full text *

Also Published As

Publication number Publication date
CN111612709A (en) 2020-09-01

Similar Documents

Publication Publication Date Title
CN110111366B (en) End-to-end optical flow estimation method based on multistage loss
CN108921786B (en) Image super-resolution reconstruction method based on residual convolutional neural network
CN111062892B (en) Single image rain removing method based on composite residual error network and deep supervision
CN112435191B (en) Low-illumination image enhancement method based on fusion of multiple neural network structures
CN111462013B (en) Single-image rain removing method based on structured residual learning
CN110930327B (en) Video denoising method based on cascade depth residual error network
CN116152120B (en) Low-light image enhancement method and device integrating high-low frequency characteristic information
CN111028235A (en) Image segmentation method for enhancing edge and detail information by utilizing feature fusion
CN112767283A (en) Non-uniform image defogging method based on multi-image block division
CN110211052A (en) A kind of single image to the fog method based on feature learning
CN109871790B (en) Video decoloring method based on hybrid neural network model
CN102457724B (en) Image motion detecting system and method
CN114170286B (en) Monocular depth estimation method based on unsupervised deep learning
CN114092774B (en) RGB-T image significance detection system and detection method based on information flow fusion
CN113052764A (en) Video sequence super-resolution reconstruction method based on residual connection
CN111696033A (en) Real image super-resolution model and method for learning cascaded hourglass network structure based on angular point guide
CN112070688A (en) Single image defogging method for generating countermeasure network based on context guidance
CN113409355A (en) Moving target identification system and method based on FPGA
CN112164010A (en) Multi-scale fusion convolution neural network image defogging method
CN109003247B (en) Method for removing color image mixed noise
CN113947538A (en) Multi-scale efficient convolution self-attention single image rain removing method
CN111612709B (en) Image noise reduction method based on DnCNNs improvement
CN112489103A (en) High-resolution depth map acquisition method and system
CN114926359B (en) Underwater image enhancement method combining bicolor space recovery and multi-stage decoding structure
CN108492264B (en) Single-frame image fast super-resolution method based on sigmoid transformation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230303

Address after: 250000 building S02, No. 1036, Gaoxin Inspur Road, Jinan, Shandong

Applicant after: Shandong Inspur Scientific Research Institute Co.,Ltd.

Address before: 250104 1st floor, R & D building, No. 2877, Suncun Town, Licheng District, Jinan City, Shandong Province

Applicant before: JINAN INSPUR HIGH-TECH TECHNOLOGY DEVELOPMENT Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant