CN111612709A - Image noise reduction method based on DnCNNs improvement - Google Patents

Image noise reduction method based on DnCNNs improvement

Info

Publication number
CN111612709A
Authority
CN
China
Prior art keywords
image
channel
network
dncnns
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010395977.3A
Other languages
Chinese (zh)
Other versions
CN111612709B (en)
Inventor
凌泽乐
高明
金长新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Inspur Scientific Research Institute Co Ltd
Original Assignee
Jinan Inspur Hi Tech Investment and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinan Inspur Hi Tech Investment and Development Co Ltd
Priority to CN202010395977.3A
Publication of CN111612709A
Application granted
Publication of CN111612709B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

An improved image noise reduction method based on DnCNNs adopts an improved DnCNNs network structure that, compared with the original network, introduces an improved residual network based on DenseNet in which all layers are directly connected: the input of each layer consists of the feature maps of all preceding layers, and its output is passed on to every subsequent layer. Feature maps are aggregated by deep concatenation, which further increases the network depth while also bringing in low-level features such as pixel-level features. A multi-scale structure is usually intended to obtain a larger receptive field; the simplest way is to use convolution kernels of different sizes, but large kernels tend to increase the number of parameters and the amount of computation. The original network adopts a multi-scale structure, which incurs a large computational cost, so the new network structure instead uses small convolution kernels applied in multiple successive convolutions. The method runs fast and is simple to operate; only fixed parameter values need to be set.

Description

Image noise reduction method based on DnCNNs improvement
Technical Field
The invention relates to the technical field of image processing, in particular to an improved image noise reduction method based on DnCNNs.
Background
With technological progress, new imaging technologies are becoming widespread and people's everyday requirements for image quality keep rising. Images captured under low-light conditions, such as on cloudy days or at night, suffer from heavy noise and blurred details, so the noise must be removed to optimize the image. Conventional basic methods process the frequency domain or the spatial domain, or use prior knowledge to determine the noise type before processing; such methods only work for image denoising in specific scenes, and their effect on images from complex environments is limited. In recent years, deep-learning-based image denoising has advanced, showing remarkable performance in high-level image understanding tasks such as image classification and object detection. Image denoising based on deep learning algorithms has therefore become a research hotspot. The usual practice is to train directly on the noisy and clean images of an image training set to obtain the weight parameters, but the visual effect and image quality after restoration still leave room for improvement. A new image denoising technique is therefore needed.
Disclosure of Invention
In order to overcome the shortcomings of the above techniques, the invention provides an improved DnCNNs-based image denoising method that can denoise images captured under dim-light conditions, markedly reducing image noise, improving the image signal-to-noise ratio and increasing image recognizability. The technical scheme adopted by the invention to overcome the technical problem is as follows:
an improved image noise reduction method based on DnCNNs is characterized by comprising the following steps:
a) carrying out color channel conversion on a color picture input into a computer, converting it into RGB color channels and separating them to obtain an R channel image, a G channel image and a B channel image;
b) inputting the R channel image, G channel image and B channel image into a DnCNNs network that uses 3 x 3 convolution kernels; the DnCNNs network performs L convolution operations on the R, G and B channel images to extract the relevant feature data, each convolution operation producing one layer so that L layers are obtained, and a ReLU activation with the 3 x 3 convolution yields a feature map with a depth of 64;
c) introducing a DenseNet residual network into the DnCNNs network of step b) and using it to connect the L layers of step b) together, so that the input of each layer consists of the feature maps of all preceding layers, obtaining a feature image of size 2w x 2h x 64, where w is the width of the picture and h is its height;
d) convolving the 2w x 2h x 64 feature image with a ReLU activation and a 3 x 3 convolution kernel to obtain a feature map of size 4w x 4h x c, denoted I_est, where c is the image depth;
e) by the formula
L_E = Σ (I_HR - I_est)²
calculating to obtain the loss function L_E, where I_HR is the original high-resolution image, and obtaining an optimized image by computing the loss function;
f) and synthesizing the R channel, the G channel and the B channel of the optimized image into an RGB image.
Further, the DnCNNs network in step b) performs the convolution processing on the R channel image, the G channel image and the B channel image using a 3 x 3 convolution kernel with a stride of 2.
The invention has the beneficial effects that: by adopting the improved DnCNNs network structure, an improved residual network based on DenseNet is introduced compared with the original network, in which all layers are directly connected: the input of each layer consists of the feature maps of all preceding layers, and its output is passed on to every subsequent layer. Feature maps are aggregated by deep concatenation, which further increases the network depth while also bringing in low-level features such as pixel-level features. A multi-scale structure is usually intended to obtain a larger receptive field; the simplest way is to use convolution kernels of different sizes, but large kernels tend to increase the number of parameters and the amount of computation. The original network adopts a multi-scale structure, which incurs a large computational cost, so the new network structure instead uses small convolution kernels applied in multiple successive convolutions. The method runs fast and is simple to operate; only fixed parameter values need to be set.
Drawings
Fig. 1 is a schematic block diagram of the present invention.
Detailed Description
The invention is further described below with reference to fig. 1.
An improved image noise reduction method based on DnCNNs is characterized by comprising the following steps:
a) The color picture input into the computer is converted into RGB color channels, which are then separated to obtain an R channel image, a G channel image and a B channel image.
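For illustration only, the colour channel separation of step a) can be sketched in Python as follows; the use of Pillow and NumPy and the file name are assumptions made for the example and are not part of the claimed method.

```python
import numpy as np
from PIL import Image

# Load the input colour picture and make sure it is in RGB mode (assumed file name).
img = Image.open("input.png").convert("RGB")
rgb = np.asarray(img)                       # shape (h, w, 3), dtype uint8

# Separate the RGB colour channels into three single-channel images.
r_channel = rgb[:, :, 0]                    # R channel image, shape (h, w)
g_channel = rgb[:, :, 1]                    # G channel image, shape (h, w)
b_channel = rgb[:, :, 2]                    # B channel image, shape (h, w)
```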
b) The R channel image, G channel image and B channel image are input into a DnCNNs network that uses 3 x 3 convolution kernels. The DnCNNs network performs L convolution operations on the three channel images to extract the relevant feature data; each convolution operation produces one layer, yielding L layers, and a ReLU activation with the 3 x 3 convolution gives a feature map with a depth of 64.
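A minimal PyTorch sketch of the feature-extraction stack described in step b) is given below; the module name, the default layer count and the use of padding to preserve the spatial size are assumptions for the example, since the text only specifies 3 x 3 kernels, ReLU activations and a feature depth of 64.

```python
import torch
import torch.nn as nn

class DnCNNFeatures(nn.Module):
    """Sketch of step b): L stacked 3x3 Conv + ReLU layers producing 64-channel feature maps."""
    def __init__(self, in_channels=1, num_layers=17, width=64):
        super().__init__()
        layers = []
        channels = in_channels
        for _ in range(num_layers):
            # 3x3 convolution followed by ReLU; padding=1 keeps the spatial size (assumption)
            layers.append(nn.Conv2d(channels, width, kernel_size=3, padding=1))
            layers.append(nn.ReLU(inplace=True))
            channels = width
        self.features = nn.Sequential(*layers)

    def forward(self, x):
        return self.features(x)

# Example: a single-channel (e.g. R channel) image
x = torch.randn(1, 1, 128, 128)
feats = DnCNNFeatures()(x)   # -> shape (1, 64, 128, 128)
```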
c) A DenseNet residual network is introduced into the DnCNNs network of step b) and is used to connect the L layers of step b) together, so that the input of each layer consists of the feature maps of all preceding layers, giving a feature image of size 2w x 2h x 64, where w is the width of the picture and h is its height. With the modified residual network DenseNet, the original DnCNNs network is assumed to have L layers, whereas DenseNet has L(L+1)/2 connections, which can be understood as each layer taking the outputs of all previous layers as its input. The DenseNet principle formula is as follows:
x_l = H_l([x_0, x_1, …, x_{l-1}]), where [x_0, x_1, …, x_{l-1}] denotes the concatenation of the feature maps output by layers 0 to l-1. Concatenation merges feature maps along the channel dimension (as opposed to element-wise addition), and H_l comprises BN, ReLU and a 3 x 3 convolution.
Dense blocks are built and stacked, and establishing forward skip connections between any two layers within a block is the core idea. A block contains several dense layers, each of which in turn comprises three operations: BN, ReLU and Conv. A 1 x 1 Conv bottleneck is placed before the 3 x 3 Conv to reduce the number of feature maps and the amount of computation. The connecting layers between different dense blocks are called transition layers, and each transition layer comprises three operations: BN, a 1 x 1 Conv and a 2 x 2 AvgPool.
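For illustration, the following PyTorch sketch shows one possible realization of the dense layer and transition layer described above; the growth rate, bottleneck width, layer count and tensor sizes are assumptions made for the example and are not taken from the patent text.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One dense layer: BN-ReLU-1x1 Conv (bottleneck) then BN-ReLU-3x3 Conv."""
    def __init__(self, in_channels, growth_rate=32):
        super().__init__()
        inter = 4 * growth_rate  # bottleneck width (a common DenseNet choice, assumed here)
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_channels), nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, inter, kernel_size=1, bias=False),   # 1x1 Conv reduces feature maps
            nn.BatchNorm2d(inter), nn.ReLU(inplace=True),
            nn.Conv2d(inter, growth_rate, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, x):
        # x_l = H_l([x_0, ..., x_{l-1}]): concatenate the new features onto all previous ones
        return torch.cat([x, self.body(x)], dim=1)

class Transition(nn.Module):
    """Transition layer between dense blocks: BN, 1x1 Conv, 2x2 average pooling."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),
            nn.AvgPool2d(kernel_size=2, stride=2),
        )

    def forward(self, x):
        return self.body(x)

# A dense block with 4 layers starting from 64 channels, followed by a transition layer.
block = nn.Sequential(*[DenseLayer(64 + i * 32) for i in range(4)])
trans = Transition(64 + 4 * 32, 64)
y = trans(block(torch.randn(1, 64, 64, 64)))   # -> shape (1, 64, 32, 32)
```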
d) The 2w x 2h x 64 feature image is convolved with a ReLU activation and a 3 x 3 convolution kernel to obtain a feature map of size 4w x 4h x c, denoted I_est, where c is the image depth.
e) The network parameters are initialized with Xavier initialization, and the learning target is the difference between the super-resolution image and the linear interpolation of the input low-resolution image. For the loss function we choose a pixel-wise loss to emphasize the match of each corresponding pixel between the two images. Pictures trained with a pixel-wise loss are usually smoother: even if the output picture has a high PSNR, its visual effect is not necessarily prominent, so image sharpening is introduced at a later stage to solve the over-smoothing problem. The loss is given by the formula
L_E = Σ (I_HR - I_est)²
which yields the loss function L_E, where I_HR is the original high-resolution image; the optimized image is obtained by computing the loss function. L_E measures the degree of inconsistency between the model prediction f(x) and the ground-truth value Y; it is a non-negative real-valued function, and the smaller the loss function, the better the robustness of the model.
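The loss computation and Xavier initialization of step e) might be sketched as follows; treating the pixel-wise loss as a mean squared error is an assumption consistent with the formula above, not a statement of the exact form used.

```python
import torch
import torch.nn as nn

def pixel_wise_loss(i_est: torch.Tensor, i_hr: torch.Tensor) -> torch.Tensor:
    """L_E: mean squared difference over all pixels of I_est and I_HR (assumed MSE form)."""
    return torch.mean((i_hr - i_est) ** 2)

def xavier_init(module: nn.Module) -> None:
    """Xavier (Glorot) initialization for the convolutional weights, as mentioned in step e)."""
    if isinstance(module, nn.Conv2d):
        nn.init.xavier_uniform_(module.weight)
        if module.bias is not None:
            nn.init.zeros_(module.bias)

# Usage sketch: net.apply(xavier_init); loss = pixel_wise_loss(net(noisy), clean)
```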
f) The R channel, G channel and B channel of the optimized image are synthesized into an RGB image.
First, colour channel conversion is performed on the colour picture and the image is separated into RGB colour channels. After the conversion is complete, the images in the three channels are each preprocessed with the improved CNN network, and the three channel images are optimized and solved separately, making full use of the learning capacity of the network. The results for the three image channels are then obtained and the colour channels are fused back into an RGB image, giving the processed image.
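The overall per-channel flow can be sketched as below; denoise_channel is a hypothetical placeholder for the improved DnCNNs network of steps b) to e), and the scaling to [0, 1] is an assumption for the example.

```python
import numpy as np
import torch

def denoise_rgb(rgb: np.ndarray, denoise_channel) -> np.ndarray:
    """Split an (h, w, 3) RGB image, denoise each channel separately, and re-fuse the channels.

    denoise_channel is any callable mapping a (1, 1, h, w) tensor to a denoised tensor of
    the same shape (e.g. the improved DnCNNs network); it is a placeholder in this sketch.
    """
    outputs = []
    for ch in range(3):  # process the R, G and B channel images independently
        x = torch.from_numpy(rgb[:, :, ch]).float().unsqueeze(0).unsqueeze(0) / 255.0
        with torch.no_grad():
            y = denoise_channel(x)
        outputs.append((y.squeeze().clamp(0, 1) * 255.0).byte().numpy())
    return np.stack(outputs, axis=-1)  # colour channel fusion back into an RGB image
```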
At present, deep networks suffer from the following problems: 1. gradient vanishing: as the network depth increases, the gradient vanishing problem becomes more severe and can cause training to collapse; 2. low-level features are not fully exploited: in a CNN, the earlier convolutional layers typically represent low-level features such as pixel-level features, while the later convolutional layers represent high-level features such as semantic features. Compared with the prior art, the invention has the following advantages: 1. existing image denoising commonly adds Gaussian noise to simulate the complex noise distribution, which inevitably differs from the actual noise distribution and reduces the effectiveness on real noisy images; by separating the colour channels, noise removal becomes more convenient and effective both for really captured images and for images with artificially added simulated noise. 2. The improved DnCNNs network structure is adopted: (1) compared with the original network, an improved residual network based on DenseNet is introduced in which all layers are directly connected: the input of each layer consists of the feature maps of all preceding layers, and its output is passed on to every subsequent layer; feature maps are aggregated by deep concatenation, which further increases the network depth while also bringing in low-level features such as pixel-level features. (2) A multi-scale structure is usually intended to obtain a larger receptive field; the simplest way is to use convolution kernels of different sizes, but large kernels tend to increase the number of parameters and the amount of computation. The original network adopts a multi-scale structure, which incurs a large computational cost, so the new network structure instead uses small convolution kernels applied in multiple successive convolutions. The method runs fast and is simple to operate; only fixed parameter values need to be set.
Preferably, the DnCNNs network in step b) performs the convolution processing on the R channel image, the G channel image and the B channel image using a 3 x 3 convolution kernel with a stride of 2.
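As a small illustration of this preferred setting, the snippet below shows how a 3 x 3 convolution with a stride of 2 halves the spatial resolution of a channel image; the padding value is an assumption for the example.

```python
import torch
import torch.nn as nn

# 3x3 convolution kernel with stride 2 (padding=1 is an assumption to keep sizes aligned)
conv = nn.Conv2d(in_channels=1, out_channels=64, kernel_size=3, stride=2, padding=1)
x = torch.randn(1, 1, 128, 128)      # a single channel image
print(conv(x).shape)                 # -> torch.Size([1, 64, 64, 64])
```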
The above description is only a preferred embodiment of the present invention, and is only used to illustrate the technical solutions of the present invention, and not to limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (2)

1. An improved image noise reduction method based on DnCNNs is characterized by comprising the following steps:
a) carrying out color channel conversion on a color picture input into a computer, converting it into RGB color channels and separating them to obtain an R channel image, a G channel image and a B channel image;
b) inputting the R channel image, G channel image and B channel image into a DnCNNs network that uses 3 x 3 convolution kernels; the DnCNNs network performs L convolution operations on the R, G and B channel images to extract the relevant feature data, each convolution operation producing one layer so that L layers are obtained, and a ReLU activation with the 3 x 3 convolution yields a feature map with a depth of 64;
c) introducing a DenseNet residual network into the DnCNNs network of step b) and using it to connect the L layers of step b) together, so that the input of each layer consists of the feature maps of all preceding layers, obtaining a feature image of size 2w x 2h x 64, where w is the width of the picture and h is its height;
d) convolving the 2w x 2h x 64 feature image with a ReLU activation and a 3 x 3 convolution kernel to obtain a feature map of size 4w x 4h x c, denoted I_est, where c is the image depth;
e) by the formula
L_E = Σ (I_HR - I_est)²
calculating to obtain the loss function L_E, where I_HR is the original high-resolution image, and obtaining an optimized image by computing the loss function;
f) and synthesizing the R channel, the G channel and the B channel of the optimized image into an RGB image.
2. The improved image noise reduction method based on DnCNNs according to claim 1, characterized in that the DnCNNs network in step b) performs the convolution processing on the R channel image, the G channel image and the B channel image using a 3 x 3 convolution kernel with a stride of 2.
CN202010395977.3A 2020-05-11 2020-05-11 Image noise reduction method based on DnCNNs improvement Active CN111612709B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010395977.3A CN111612709B (en) 2020-05-11 2020-05-11 Image noise reduction method based on DnCNNs improvement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010395977.3A CN111612709B (en) 2020-05-11 2020-05-11 Image noise reduction method based on DnCNNs improvement

Publications (2)

Publication Number Publication Date
CN111612709A true CN111612709A (en) 2020-09-01
CN111612709B CN111612709B (en) 2023-03-28

Family

ID=72197930

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010395977.3A Active CN111612709B (en) 2020-05-11 2020-05-11 Image noise reduction method based on DnCNNs improvement

Country Status (1)

Country Link
CN (1) CN111612709B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180293711A1 (en) * 2017-04-06 2018-10-11 Disney Enterprises, Inc. Kernel-predicting convolutional neural networks for denoising
CN109255758A (en) * 2018-07-13 2019-01-22 杭州电子科技大学 Image enchancing method based on full 1*1 convolutional neural networks
CN110599409A (en) * 2019-08-01 2019-12-20 西安理工大学 Convolutional neural network image denoising method based on multi-scale convolutional groups and parallel
CN110648292A (en) * 2019-09-11 2020-01-03 昆明理工大学 High-noise image denoising method based on deep convolutional network
CN110782414A (en) * 2019-10-30 2020-02-11 福州大学 Dark light image denoising method based on dense connection convolution

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU CHAO et al.: "Acceleration algorithm for image super-resolution convolutional neural networks", Journal of National University of Defense Technology *
ZHANG YUNGANG et al.: "Low-dose CT image denoising method based on convolutional neural network", Acta Optica Sinica *

Also Published As

Publication number Publication date
CN111612709B (en) 2023-03-28

Similar Documents

Publication Publication Date Title
CN110111366B (en) End-to-end optical flow estimation method based on multistage loss
CN108921786B (en) Image super-resolution reconstruction method based on residual convolutional neural network
CN112435191B (en) Low-illumination image enhancement method based on fusion of multiple neural network structures
CN111754438B (en) Underwater image restoration model based on multi-branch gating fusion and restoration method thereof
CN107564009B (en) Outdoor scene multi-target segmentation method based on deep convolutional neural network
CN111489303A (en) Maritime affairs image enhancement method under low-illumination environment
CN111028235A (en) Image segmentation method for enhancing edge and detail information by utilizing feature fusion
CN110930327B (en) Video denoising method based on cascade depth residual error network
CN110717921B (en) Full convolution neural network semantic segmentation method of improved coding and decoding structure
CN112767283A (en) Non-uniform image defogging method based on multi-image block division
CN116152120B (en) Low-light image enhancement method and device integrating high-low frequency characteristic information
CN114170286B (en) Monocular depth estimation method based on unsupervised deep learning
CN110211052A (en) A kind of single image to the fog method based on feature learning
CN112070688A (en) Single image defogging method for generating countermeasure network based on context guidance
CN112580473A (en) Motion feature fused video super-resolution reconstruction method
CN110782458A (en) Object image 3D semantic prediction segmentation method of asymmetric coding network
CN113052764A (en) Video sequence super-resolution reconstruction method based on residual connection
CN113409355A (en) Moving target identification system and method based on FPGA
CN109871790B (en) Video decoloring method based on hybrid neural network model
CN117274059A (en) Low-resolution image reconstruction method and system based on image coding-decoding
CN115984747A (en) Video saliency target detection method based on dynamic filter
CN115526779A (en) Infrared image super-resolution reconstruction method based on dynamic attention mechanism
CN113436101B (en) Method for removing rain by Dragon lattice tower module based on efficient channel attention mechanism
CN114596233A (en) Attention-guiding and multi-scale feature fusion-based low-illumination image enhancement method
CN117391920A (en) High-capacity steganography method and system based on RGB channel differential plane

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20230303

Address after: 250000 building S02, No. 1036, Gaoxin Inspur Road, Jinan, Shandong

Applicant after: Shandong Inspur Scientific Research Institute Co.,Ltd.

Address before: 250104 1st floor, R & D building, No. 2877, Suncun Town, Licheng District, Jinan City, Shandong Province

Applicant before: JINAN INSPUR HIGH-TECH TECHNOLOGY DEVELOPMENT Co.,Ltd.

GR01 Patent grant
GR01 Patent grant