CN109872288B - Network training method, device, terminal and storage medium for image denoising - Google Patents

Network training method, device, terminal and storage medium for image denoising Download PDF

Info

Publication number
CN109872288B
CN109872288B (application CN201910100276.XA)
Authority
CN
China
Prior art keywords
image
denoising
neural network
convolutional neural
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910100276.XA
Other languages
Chinese (zh)
Other versions
CN109872288A (en)
Inventor
陈松逵
石大明
朱美芦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN201910100276.XA priority Critical patent/CN109872288B/en
Publication of CN109872288A publication Critical patent/CN109872288A/en
Application granted granted Critical
Publication of CN109872288B publication Critical patent/CN109872288B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention is applicable to the technical field of image processing and provides a network training method, device, terminal and storage medium for image denoising, wherein the method comprises the following steps: inputting a preset number of noise images into a preset convolutional neural network and initializing the parameters of the convolutional neural network; acquiring the denoised image features of the noise images through a denoising layer of the convolutional neural network; combining the denoised image features through a combination layer of the preset convolutional neural network to obtain the denoised images of the noise images; inputting the denoised images and the noiseless images corresponding to the noise images into a preset detail loss model to acquire the detail loss of the convolutional neural network during denoising; and, when the detail loss model has not converged, back-propagating the detail loss to the convolutional neural network, updating the parameters of the convolutional neural network according to the detail loss, and continuing to train the convolutional neural network. The detail loss of the network is thereby reduced through continuous denoising and parameter adjustment, which in turn improves the denoising effect of the network.

Description

Network training method, device, terminal and storage medium for image denoising
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a network training method, device, terminal and storage medium for image denoising.
Background
With the development of shooting technologies on various electronic terminals, images have gradually become one of the most commonly used information carriers in society, but they are often degraded by the interference and influence of various kinds of noise during acquisition, transmission or storage. Image denoising is an important research direction in the field of image processing; its aim is to remove the noise in noisy images so as to obtain clean images.
At present, most image denoising relies on convolutional neural networks trained by deep learning, and during deep learning the mean square error (MSE) is often used as the loss function for optimizing the network. Although this loss function helps the network find solutions that perform well on the objective evaluation metric, namely the peak signal-to-noise ratio (PSNR), image details are lost: MSE drives the network toward a homogenized solution, so the feature differences between images cannot be captured well and the captured differences are overly smoothed, which makes the restored image blurrier.
Disclosure of Invention
The invention aims to provide a network training method, device, terminal and storage medium for image denoising, so as to solve the problem that the prior art cannot provide an effective training method for image denoising networks, which leaves the denoising effect of existing image denoising networks unsatisfactory.
In one aspect, the present invention provides a network training method for image denoising, the method comprising the steps of:
inputting a preset number of noise images into a preset convolutional neural network, initializing parameters of the convolutional neural network, and acquiring denoising image features of the noise images through a denoising layer of the convolutional neural network;
combining the denoising image features through a preset combination layer of the convolutional neural network to obtain a denoised image after denoising the noise image;
inputting the denoised image and the noiseless image corresponding to the noise image into a preset detail loss model, and acquiring, through the detail loss model, the detail loss of the convolutional neural network when denoising the noise image;
and when the detail loss model has not converged, back-propagating the detail loss to the convolutional neural network via gradient back-propagation, and updating the parameters of the convolutional neural network according to the detail loss so as to continue training the convolutional neural network.
In another aspect, the present invention provides a network training apparatus for image denoising, the apparatus comprising:
the characteristic denoising unit is used for inputting a preset number of noise images into a preset convolutional neural network, initializing parameters of the convolutional neural network and acquiring denoising image characteristics of the noise images through a denoising layer of the convolutional neural network;
the characteristic combination unit is used for combining the characteristics of the denoising image through a preset combination layer of the convolutional neural network so as to obtain a denoised image after denoising the noise image;
the loss acquisition unit is used for inputting the denoised image and the noiseless image corresponding to the noise image into a preset detail loss model, and acquiring, through the detail loss model, the detail loss of the convolutional neural network when denoising the noise image; and
the back-propagation updating unit is used for back-propagating the detail loss to the convolutional neural network via gradient back-propagation when the detail loss model has not converged, and updating the parameters of the convolutional neural network according to the detail loss so as to continue training the convolutional neural network.
In another aspect, the present invention further provides a computing terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the network training method for image denoising as described above when executing the computer program.
In another aspect, the present invention also provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of a network training method for image denoising as described above.
According to the method, a preset number of noise images are input into a preset convolutional neural network and the parameters of the convolutional neural network are initialized; the denoised image features of the noise images are obtained through a denoising layer of the convolutional neural network; the denoised image features are combined through a combination layer of the preset convolutional neural network to obtain the denoised images of the noise images; the denoised images and the noiseless images corresponding to the noise images are then input into a preset detail loss model to acquire the detail loss of the convolutional neural network during denoising; and when the detail loss model has not converged, the detail loss is back-propagated to the convolutional neural network and the parameters of the convolutional neural network are updated according to the detail loss so as to continue training the convolutional neural network. In this way, the detail loss of the network is reduced through continuous denoising and parameter adjustment, which further improves the denoising effect of the convolutional neural network.
Drawings
FIG. 1 is a flowchart of a network training method for image denoising according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an image denoising network according to an embodiment of the present invention;
FIG. 3 is a flowchart of an implementation of acquiring the detail loss according to the first embodiment of the present invention;
fig. 4 is a schematic layer structure diagram of the VGG19 network according to an embodiment of the invention;
fig. 5 is a schematic structural diagram of a network training device for image denoising according to the second embodiment of the present invention;
fig. 6 is a schematic structural diagram of a network training device for image denoising according to the third embodiment of the present invention; and
fig. 7 is a schematic structural diagram of a terminal according to the fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The following describes in detail the implementation of the present invention in connection with specific embodiments:
embodiment one:
fig. 1 shows a flow of implementing a network training method for image denoising according to an embodiment of the present invention, and for convenience of explanation, only the portions relevant to the embodiment of the present invention are shown, which are described in detail below:
in step S101, a preset number of noise images are input into a preset convolutional neural network, parameters of the convolutional neural network are initialized, and denoising image features of the noise images are obtained through a denoising layer of the convolutional neural network.
The embodiment of the invention is suitable for a computing terminal, the computing terminal can load and operate a network for image denoising, and for convenience of subsequent description, the network for image denoising is called an image denoising network, and the image denoising network can be used for removing noise from an image. In order to obtain an image denoising network with a good denoising effect, a convolutional neural network is used for deep denoising learning, so in the embodiment of the invention, a preset number of noise images are input into the preset convolutional neural network, then the parameters of the convolutional neural network are initialized, and then the denoising image characteristics of the noise images are obtained through the denoising layer of the convolutional neural network.
Preferably, the preset number of noise images are images with a similar or identical noise source. For example, a preset number of images shot on hazy days, on cloudy days or on sunny days may each be used for training separately according to the environment, so that the resulting image denoising networks achieve a good denoising effect on hazy-day, cloudy-day and sunny-day images respectively. During initialization, the parameters of the convolutional neural network may be set by random parameter initialization. When denoising an input noise image, the noise image features of the noise image are first extracted through a first preset convolutional layer and activation layer of the convolutional neural network, and the noise image features are then denoised through a preset residual block of the convolutional neural network to obtain the denoised image features, so that the input noise image is denoised according to the current parameters of the image denoising network, where the residual block comprises a second preset convolutional layer and activation layer of the convolutional neural network. For convenience of description, a network structure formed by arranging a preset number of convolutional layers and activation layers in a preset order is referred to as the first (or second) preset convolutional layer and activation layer.
Fig. 2 shows the structure of the image denoising network according to an embodiment of the present invention, where (a) is a schematic structural diagram of the image denoising network and (b) is a schematic structural diagram of a residual block of the image denoising network.
In the embodiment of the invention, the denoising layer of the convolutional neural network is composed of a preset number of convolutional layers and activation layers. After a noise image is received, the features of the noise image are extracted by 1 convolutional layer (k9n64s1) and 1 activation layer (PReLU), and the noise image features are then input into the residual blocks. When the residual blocks denoise the noise image features, each residual block denoises them through 2 convolutional layers (k3n64s1) and 1 activation layer (PReLU), superimposes the obtained features on its input features, and passes the result to the next residual block, until the last residual block outputs the denoised image features. After the denoised image features are obtained, the noise image features are superimposed onto the denoised image features and input into the combination layer of the image denoising network. While the problems of gradient vanishing and gradient explosion of the convolutional neural network are thus prevented, the BN (Batch Normalization) layer of the convolutional neural network is deleted in the embodiment of the invention, which effectively prevents the image features from becoming blurred, so that more details can be retained when the convolutional neural network denoises the image.
Here PReLU denotes the name of the activation function, k denotes the convolution kernel size of the convolutional layer, n denotes the number of feature maps, and s denotes the convolution stride; for example, k9n64s1 denotes a convolution kernel size of 9×9, 64 output feature maps, and a convolution kernel sliding stride of 1.
Further, before the denoised image features are combined, the noise image features are superimposed onto the denoised image features and input together into the combination layer of the image denoising network, thereby avoiding the problems of gradient vanishing and gradient explosion.
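By way of illustration, a minimal PyTorch sketch of such a residual block and denoising layer might look as follows, assuming 3-channel input, 64 feature maps and an arbitrary block count; the padding choices and block depth are assumptions chosen to match the k9n64s1 / k3n64s1 / PReLU notation rather than values specified above.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual block: 2 conv layers (k3n64s1) and 1 PReLU, no BN, with a skip connection."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1)
        self.act = nn.PReLU()
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        # The block output is superimposed on (added to) its input features.
        return x + self.conv2(self.act(self.conv1(x)))

class DenoisingLayer(nn.Module):
    """Feature extraction (k9n64s1 + PReLU) followed by a stack of residual blocks."""
    def __init__(self, in_channels: int = 3, channels: int = 64, num_blocks: int = 8):
        super().__init__()
        self.extract = nn.Sequential(
            nn.Conv2d(in_channels, channels, kernel_size=9, stride=1, padding=4),
            nn.PReLU(),
        )
        self.blocks = nn.Sequential(*[ResidualBlock(channels) for _ in range(num_blocks)])

    def forward(self, noisy):
        feats = self.extract(noisy)          # noise image features
        denoised_feats = self.blocks(feats)  # denoised image features
        return denoised_feats + feats        # superimpose noise features before combination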
In step S102, the denoised image features are combined by a preset combination layer of the convolutional neural network, so as to obtain a denoised image after denoising the noise image.
In the embodiment of the invention, the denoised image is the image generated after the preset convolutional neural network performs the denoising operation on the input noise image. After the denoised image features are acquired, they are combined by 2 convolutional layers (k3n256s1), 1 convolutional layer (k9n3s1) and 2 activation layers (PReLU), so as to obtain the denoised image of the noise image.
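Continuing the sketch above, a hedged PyTorch version of the combination layer might look as follows; the padding values are assumptions made so that the spatial size of the denoised image matches the input.

```python
import torch.nn as nn

class CombinationLayer(nn.Module):
    """Combines denoised features into a 3-channel denoised image: 2x k3n256s1 + PReLU, then k9n3s1."""
    def __init__(self, in_channels: int = 64, mid_channels: int = 256, out_channels: int = 3):
        super().__init__()
        self.combine = nn.Sequential(
            nn.Conv2d(in_channels, mid_channels, kernel_size=3, stride=1, padding=1),   # k3n256s1
            nn.PReLU(),
            nn.Conv2d(mid_channels, mid_channels, kernel_size=3, stride=1, padding=1),  # k3n256s1
            nn.PReLU(),
            nn.Conv2d(mid_channels, out_channels, kernel_size=9, stride=1, padding=4),  # k9n3s1
        )

    def forward(self, denoised_feats):
        return self.combine(denoised_feats)  # denoised image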
In step S103, the denoising image and the noiseless image corresponding to the denoising image are input into a preset detail loss model, and the detail loss when the convolutional neural network denoises the noise image is obtained.
In the embodiment of the invention, the detail loss model is a preset model for calculating the feature loss between the denoised image and the noiseless image corresponding to the noise image. After the denoised image is obtained, the denoised image and the noiseless image corresponding to the noise image are input into the preset detail loss model, so as to acquire the detail loss of the convolutional neural network when denoising the noise image; the detail loss characterizes the denoising effect of the current parameters of the convolutional neural network.
Fig. 3 shows a flow of implementation of obtaining detail loss according to the first embodiment of the present invention, where the detail loss is obtained by the following steps, so as to better assist in training an image denoising network with a better denoising effect:
in step S301, a first euclidean distance between a feature map of a denoising image and a feature map of a noise-free image is obtained through a preset VGG neural network.
In the embodiment of the present invention, fig. 4 shows the layer structure of the VGG19 network provided in the first embodiment. The preset VGG neural network is a pre-trained 19-layer VGG network (VGG19). After the denoised image and the noiseless image are obtained, they are each fed into the pre-trained VGG19 network, and their feature maps are extracted from the 4th convolutional layer (conv5-4) preceding the 5th pooling layer (pool5); a detail loss computed from the feature maps extracted at this layer better helps the convolutional neural network recover the details of the noise image. The Euclidean distance between the feature map of the noiseless image and the feature map of the denoised image extracted at conv5-4 is then obtained through the Euclidean distance formula

$$l_{per} = \frac{1}{W_{5,4} H_{5,4}} \sum_{i=1}^{W_{5,4}} \sum_{j=1}^{H_{5,4}} \bigl( \phi_{5,4}(x)_{i,j} - \phi_{5,4}(G(\tilde{x}))_{i,j} \bigr)^{2}$$

For convenience of description, the Euclidean distance obtained here is referred to as the first Euclidean distance. The first Euclidean distance represents the perceptual loss incurred when the convolutional neural network denoises the noise image, and this perceptual loss is used as one component of the detail loss generated in the denoising process, so that the image denoising network retains more of the detail of the noiseless image when denoising. Here $x$ denotes the noiseless image, $\tilde{x}$ denotes the noise image, $G(\tilde{x})$ denotes the denoised image output by the network for $\tilde{x}$, the function $\phi_{5,4}$ denotes the feature map extracted from the 4th convolutional layer preceding the 5th pooling layer of the VGG19 network, $W_{5,4}$ and $H_{5,4}$ denote the dimensions of the feature map in the width and height directions, and $(i,j)$ denotes the pixel coordinates of the feature map.
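A minimal sketch of how this first Euclidean distance (perceptual loss) could be computed with a pre-trained VGG19 is shown below, assuming torchvision's pretrained model; the slice index 35 (ending right after conv5-4 in torchvision's layer ordering), the helper name perceptual_loss, and the omission of ImageNet input normalization are simplifying assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

# Fixed feature extractor: VGG19 layers up to and including conv5_4 (assumed index 34).
_vgg_features = vgg19(pretrained=True).features[:35].eval()
for p in _vgg_features.parameters():
    p.requires_grad = False  # the VGG network is not trained; only the denoising network is

def perceptual_loss(denoised: torch.Tensor, clean: torch.Tensor) -> torch.Tensor:
    """Mean squared Euclidean distance between conv5_4 feature maps (first Euclidean distance)."""
    f_denoised = _vgg_features(denoised)
    f_clean = _vgg_features(clean)
    return F.mse_loss(f_denoised, f_clean)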
In step S302, a second euclidean distance between the high frequency image information of the noise removed image and the high frequency image information of the noise free image is obtained by high frequency filtering.
In the embodiment of the invention, the high-frequency information of the denoised image and of the noiseless image is obtained through a high-pass filter, where the high-pass filter may be a first-order high-pass filter using the Sobel operator; the high-frequency information includes detail information such as edges and textures. The Euclidean distance between the high-frequency information of the denoised image and that of the noiseless image is obtained through the Euclidean distance formula

$$l_{hf} = \frac{1}{W H} \sum_{i=1}^{W} \sum_{j=1}^{H} \bigl( \Phi(x)_{i,j} - \Phi(G(\tilde{x}))_{i,j} \bigr)^{2}$$

For convenience of description, the Euclidean distance obtained here is referred to as the second Euclidean distance. The second Euclidean distance represents the high-frequency loss incurred when the convolutional neural network denoises the noise image, and this high-frequency loss is used as the other component of the detail loss generated in the denoising process, which prevents the trained image denoising network from producing high-frequency artifacts when denoising. Here the function $\Phi$ denotes the high-frequency image obtained through the high-pass filter, $W$ and $H$ denote the dimensions of the high-frequency image in the width and height directions, and $(i,j)$ denotes the pixel coordinates.
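A corresponding sketch of the second Euclidean distance (high-frequency loss) follows, assuming a fixed Sobel high-pass filter applied per channel; computing the gradient magnitude from the horizontal and vertical Sobel responses is an illustrative choice for the first-order high-pass filtering.

```python
import torch
import torch.nn.functional as F

_sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
_sobel_y = _sobel_x.t()

def _high_frequency(img: torch.Tensor) -> torch.Tensor:
    """Extract edge/texture (high-frequency) information with the Sobel operator, per channel."""
    c = img.shape[1]
    kx = _sobel_x.view(1, 1, 3, 3).repeat(c, 1, 1, 1).to(img)
    ky = _sobel_y.view(1, 1, 3, 3).repeat(c, 1, 1, 1).to(img)
    gx = F.conv2d(img, kx, padding=1, groups=c)
    gy = F.conv2d(img, ky, padding=1, groups=c)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def high_frequency_loss(denoised: torch.Tensor, clean: torch.Tensor) -> torch.Tensor:
    """Mean squared Euclidean distance between high-frequency images (second Euclidean distance)."""
    return F.mse_loss(_high_frequency(denoised), _high_frequency(clean))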
In step S303, the detail loss is obtained according to the formula

$$l_{detail} = l_{per} + \lambda \, l_{hf}$$

In the embodiment of the invention, after the perceptual loss and the high-frequency loss of the convolutional neural network when denoising the noise image are obtained, the detail loss of the convolutional neural network is acquired through this formula, which is referred to as the detail loss formula, where $l_{detail}$ represents the detail loss generated when denoising the noise image, $l_{per}$ represents the first Euclidean distance, $l_{hf}$ represents the second Euclidean distance, and $\lambda$ is the weight of the second Euclidean distance.
In step S104, when the detail loss model has not converged, the detail loss is back-propagated to the convolutional neural network via gradient back-propagation, and the parameters of the convolutional neural network are updated according to the detail loss so as to continue training the convolutional neural network.
In the embodiment of the invention, when the detail loss model has not converged, the current parameters of the convolutional neural network cannot achieve the expected denoising effect. At this point the detail loss is back-propagated to the convolutional neural network, the parameters of the convolutional neural network are updated according to the detail loss, and training continues until the detail loss model converges, at which point the updated parameters of the convolutional neural network are output to obtain the trained image denoising network. By continuously training the image denoising network to denoise a particular type of image, convolutional neural network parameters with a better denoising effect for that type of image are obtained.
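Tying the sketches above together, a hedged training-step example might look as follows; the module names, the Adam optimizer, the learning rate and the weight lam are illustrative assumptions, and checking convergence of the detail loss model is left to the surrounding training loop.

```python
import torch

# Assumes DenoisingLayer, CombinationLayer, perceptual_loss and high_frequency_loss
# from the earlier sketches.
denoise_net = torch.nn.Sequential(DenoisingLayer(), CombinationLayer())
optimizer = torch.optim.Adam(denoise_net.parameters(), lr=1e-4)  # parameters start from random initialization
lam = 0.1  # weight of the second Euclidean distance (hyperparameter, value assumed)

def train_step(noisy_batch: torch.Tensor, clean_batch: torch.Tensor) -> float:
    denoised = denoise_net(noisy_batch)                # steps S101-S102: denoise and combine features
    l_per = perceptual_loss(denoised, clean_batch)     # first Euclidean distance (S301)
    l_hf = high_frequency_loss(denoised, clean_batch)  # second Euclidean distance (S302)
    l_detail = l_per + lam * l_hf                      # detail loss (S303)

    optimizer.zero_grad()
    l_detail.backward()   # back-propagate the detail loss through the network (S104)
    optimizer.step()      # update the convolutional neural network parameters
    return l_detail.item()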
In the embodiment of the invention, a preset number of noise images are input into a preset convolutional neural network and the parameters of the convolutional neural network are initialized; the denoised image features of the noise images are obtained through a denoising layer of the convolutional neural network; the denoised image features are combined through a combination layer of the preset convolutional neural network to obtain the denoised images of the noise images; the denoised images and the noiseless images corresponding to the noise images are input into a preset detail loss model to acquire the detail loss of the convolutional neural network during denoising; and when the detail loss model has not converged, the detail loss is back-propagated to the convolutional neural network and the parameters of the convolutional neural network are updated according to the detail loss to continue training the convolutional neural network, so that the detail loss of the network is reduced through continuous denoising and parameter adjustment, and the denoising effect of the network is improved.
Embodiment two:
fig. 5 shows a structure of a network training device for image denoising according to a second embodiment of the present invention, and for convenience of explanation, only a portion related to the embodiment of the present invention is shown, where the network training device includes:
the feature denoising unit 51 is configured to input a preset number of noise images into a preset convolutional neural network, initialize parameters of the convolutional neural network, and obtain denoising image features of the noise images through a denoising layer of the convolutional neural network;
the feature combination unit 52 is configured to combine the features of the denoised image through a preset combination layer of the convolutional neural network, so as to obtain a denoised image after denoising the noise image;
a loss obtaining unit 53, configured to input a denoising image and a noiseless image corresponding to the denoising image into a preset detail loss model, and obtain detail loss when the convolutional neural network denoises the noise image; and
and the back-propagation updating unit 54 is configured to, when the detail loss model has not converged, back-propagate the detail loss to the convolutional neural network via gradient back-propagation, and update the parameters of the convolutional neural network according to the detail loss so as to continue training the convolutional neural network.
In the embodiment of the invention, a preset number of noise images are input into a preset convolutional neural network and the parameters of the convolutional neural network are initialized; the denoised image features of the noise images are obtained through a denoising layer of the convolutional neural network; the denoised image features are combined through a combination layer of the preset convolutional neural network to obtain the denoised images of the noise images; the denoised images and the noiseless images corresponding to the noise images are input into a preset detail loss model to acquire the detail loss of the convolutional neural network during denoising; and when the detail loss model has not converged, the detail loss is back-propagated to the convolutional neural network and the parameters of the convolutional neural network are updated according to the detail loss to continue training the convolutional neural network, so that the detail loss of the network is reduced through continuous denoising and parameter adjustment, and the denoising effect of the network is improved.
In the embodiment of the present invention, each unit of the network training device for image denoising may be implemented by a corresponding hardware or software unit, and each unit may be an independent software or hardware unit, or may be integrated into one software or hardware unit, which is not used to limit the present invention. The specific implementation of each unit may refer to the description of the first embodiment, and will not be repeated here.
Embodiment III:
fig. 6 shows a structure of a network training device for image denoising according to a third embodiment of the present invention, and for convenience of explanation, only a portion related to the embodiment of the present invention is shown, including:
the feature denoising unit 61 is configured to input a preset number of noise images into a preset convolutional neural network, initialize parameters of the convolutional neural network, and obtain denoising image features of the noise images through a denoising layer of the convolutional neural network;
the feature combination unit 62 is configured to combine the features of the denoised image through a preset combination layer of the convolutional neural network, so as to obtain a denoised image after denoising the noise image;
a loss obtaining unit 63, configured to input a denoising image and a noiseless image corresponding to the denoising image into a preset detail loss model, and obtain detail loss when the convolutional neural network denoises the noise image; and
and the back-propagation updating unit 64 is configured to, when the detail loss model has not converged, back-propagate the detail loss to the convolutional neural network via gradient back-propagation, and update the parameters of the convolutional neural network according to the detail loss so as to continue training the convolutional neural network.
Wherein the feature denoising unit 61 includes:
a feature extraction unit 611, configured to extract noise image features of a noise image through a first preset convolutional layer and an active layer of the convolutional neural network; and
the feature denoising subunit 612 is configured to denoise the noise image feature through a residual block of a preset convolutional neural network, so as to obtain a denoised image feature, where the residual block includes a second preset convolutional layer and an active layer of the convolutional neural network.
The loss acquisition unit 63 includes:
a first distance acquiring unit 631 configured to acquire a first euclidean distance between a feature map of a noise-removed image and a feature map of a noise-free image through a preset VGG neural network;
a second distance acquisition unit 632 for acquiring a second euclidean distance between the high frequency image information of the noise-removed image and the high frequency image information of the noise-free image by high frequency filtering; and
a loss acquisition subunit 633, configured to acquire the detail loss according to the formula $l_{detail} = l_{per} + \lambda \, l_{hf}$, where $l_{detail}$ represents the detail loss generated when denoising the noise image, $l_{per}$ represents the first Euclidean distance, $l_{hf}$ represents the second Euclidean distance, and $\lambda$ is the weight of the second Euclidean distance.
In the embodiment of the invention, a preset number of noise images are input into a preset convolutional neural network and the parameters of the convolutional neural network are initialized; the denoised image features of the noise images are obtained through a denoising layer of the convolutional neural network; the denoised image features are combined through a combination layer of the preset convolutional neural network to obtain the denoised images of the noise images; the denoised images and the noiseless images corresponding to the noise images are input into a preset detail loss model to acquire the detail loss of the convolutional neural network during denoising; and when the detail loss model has not converged, the detail loss is back-propagated to the convolutional neural network and the parameters of the convolutional neural network are updated according to the detail loss to continue training the convolutional neural network, until the detail loss model converges and the updated parameters of the convolutional neural network are output to obtain a trained image denoising network. In this way, the detail loss of the network is reduced through continuous denoising and parameter adjustment, and the denoising effect of the network is improved.
In the embodiment of the present invention, each unit of the network training device for image denoising may be implemented by a corresponding hardware or software unit, and each unit may be an independent software or hardware unit, or may be integrated into one software or hardware unit, which is not used to limit the present invention. The specific implementation of each unit may refer to the description of the first embodiment, and will not be repeated here.
Embodiment four:
fig. 7 shows a structure of a computing terminal according to a fourth embodiment of the present invention, and for convenience of explanation, only a portion related to the embodiment of the present invention is shown, including:
the computing terminal 7 of an embodiment of the invention comprises a processor 71, a memory 72 and a computer program 73 stored in the memory 72 and executable on the processor 71. The processor 71, when executing the computer program 73, implements the steps in the above-described network training method embodiment for image denoising, for example, steps S101 to S104 shown in fig. 1 and steps S301 to S303 shown in fig. 3. Alternatively, processor 71, when executing computer program 73, performs the functions of the various elements of the network training device embodiments described above for image denoising, e.g., the functions of elements 51-54 shown in fig. 5 and elements 61-64 shown in fig. 6.
In the embodiment of the invention, when the processor executes the computer program, a preset number of noise images are input into a preset convolutional neural network and the parameters of the convolutional neural network are initialized; the denoised image features of the noise images are obtained through a denoising layer of the convolutional neural network; the denoised image features are combined through a combination layer of the preset convolutional neural network to obtain the denoised images of the noise images; the denoised images and the noiseless images corresponding to the noise images are input into a preset detail loss model to acquire the detail loss of the convolutional neural network during denoising; and when the detail loss model has not converged, the detail loss is back-propagated to the convolutional neural network and the parameters of the convolutional neural network are updated according to the detail loss so as to continue training the convolutional neural network, so that the detail loss of the network is reduced through continuous denoising and parameter adjustment, and the denoising effect of the network is further improved.
The steps in the above-mentioned embodiments of the network training method for image denoising when the processor executes the computer program may refer to the description of the first embodiment, and will not be repeated here.
Fifth embodiment:
in an embodiment of the present invention, there is provided a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps in the above-described respective network training method embodiments for image denoising, for example, steps S101 to S104 shown in fig. 1 and steps S301 to S303 shown in fig. 3. Alternatively, the computer program, when executed by the processor, performs the functions of the units in the embodiments of the network training apparatus for image denoising described above, for example, the functions of the units 51 to 54 shown in fig. 5 and the units 61 to 64 shown in fig. 6.
In the embodiment of the invention, when the computer program is executed by a processor, a preset number of noise images are input into a preset convolutional neural network and the parameters of the convolutional neural network are initialized; the denoised image features of the noise images are obtained through a denoising layer of the convolutional neural network; the denoised image features are combined through a combination layer of the preset convolutional neural network to obtain the denoised images of the noise images; the denoised images and the noiseless images corresponding to the noise images are input into a preset detail loss model to acquire the detail loss of the convolutional neural network during denoising; and when the detail loss model has not converged, the detail loss is back-propagated to the convolutional neural network and the parameters of the convolutional neural network are updated according to the detail loss so as to continue training the convolutional neural network, so that the detail loss of the network is reduced through continuous denoising and parameter adjustment, and the denoising effect of the network is further improved.
The steps in the above-mentioned embodiment of the network training method for image denoising when the computer program is executed by the processor may refer to the description of the first embodiment, and will not be repeated here.
The computer-readable storage medium of the embodiments of the present invention may include any entity or device capable of carrying computer program code, such as a storage medium like ROM/RAM, a magnetic disk, an optical disk, or flash memory.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.

Claims (8)

1. A network training method for image denoising, the method comprising the steps of:
inputting a preset number of noise images into a preset convolutional neural network, initializing parameters of the convolutional neural network, and acquiring denoising image features of the noise images through a denoising layer of the convolutional neural network, wherein the convolutional neural network is a convolutional neural network with a BN layer deleted;
combining the denoising image features through a preset combination layer of the convolutional neural network to obtain a denoised image after denoising the noise image;
inputting the denoising image and the noiseless image corresponding to the denoising image into a preset detail loss model, and acquiring, through the detail loss model, the detail loss of the convolutional neural network when denoising the noise image;
when the detail loss model is not converged, the detail loss is reversely transmitted to the convolutional neural network in a gradient reverse transmission mode, and parameters of the convolutional neural network are updated according to the detail loss so as to continue training the convolutional neural network;
the step of obtaining the detail loss of the convolutional neural network when denoising the noise image through the detail loss model comprises the following steps:
acquiring a first Euclidean distance between a feature map of the denoising image and a feature map of the noiseless image through a preset VGG neural network, wherein the preset VGG neural network is a 19-layer VGG network trained in advance;
obtaining a second Euclidean distance between the high-frequency image information of the denoising image and the high-frequency image information of the noiseless image through high-frequency filtering;
according to the formula

$$l_{detail} = l_{per} + \lambda \, l_{hf}$$

acquiring the detail loss, wherein $l_{detail}$ represents the detail loss, $l_{per}$ represents the first Euclidean distance, $l_{hf}$ represents the second Euclidean distance, and $\lambda$ represents the weight of the second Euclidean distance;
the step of obtaining a first euclidean distance between the feature map of the denoising image and the feature map of the noiseless image through a preset VGG neural network comprises the following steps:
importing the denoising image and the noiseless image into the pre-trained VGG neural network VGG19, extracting the feature map of the denoising image and the feature map of the noiseless image through the 4th convolutional layer before the 5th pooling layer of the VGG19, and obtaining the first Euclidean distance between the feature map of the denoising image and the feature map of the noiseless image;
the step of obtaining the second euclidean distance between the high frequency image information of the denoised image and the high frequency image information of the noiseless image by high frequency filtering includes:
and acquiring the high-frequency information of the denoising image and the noiseless image through a high-pass filter, and acquiring a second Euclidean distance between the high-frequency information of the denoising image and the high-frequency information of the noiseless image, wherein the high-pass filter is a first-order high-pass filter adopting a Sobel operator.
2. The method of claim 1, wherein the step of obtaining the denoised image features of the noisy image by a denoise layer of the convolutional neural network comprises:
extracting noise image features of the noise image through a first preset convolution layer and an activation layer of the convolution neural network;
denoising the noise image features through a preset residual block of the convolutional neural network to obtain the denoised image features, wherein the residual block comprises a second preset convolutional layer and an activation layer of the convolutional neural network.
3. The method of claim 2, wherein the step of obtaining the denoised image features of the noisy image by a denoise layer of the convolutional neural network further comprises:
and superposing the noise image features into the denoising image features.
4. The method of claim 1, wherein the method further comprises:
and when the detail loss model is converged, outputting updated parameters of the convolutional neural network to obtain a trained image denoising network.
5. A network training device for image denoising, the device comprising:
the characteristic denoising unit is used for inputting a preset number of noise images into a preset convolutional neural network, initializing parameters of the convolutional neural network, acquiring denoising image characteristics of the noise images through a denoising layer of the convolutional neural network so as to train the convolutional neural network continuously, wherein the convolutional neural network is a convolutional neural network with a BN layer deleted;
the characteristic combination unit is used for combining the characteristics of the denoising image through a preset combination layer of the convolutional neural network so as to obtain a denoised image after denoising the noise image;
the loss acquisition unit is used for inputting the denoising image and the noiseless image corresponding to the denoising image into a preset detail loss model, and acquiring, through the detail loss model, the detail loss of the convolutional neural network when denoising the noise image; and
the back transmission updating unit is used for back transmitting the detail loss to the convolutional neural network in a gradient back transmission mode when the detail loss model is not converged, and updating parameters of the convolutional neural network according to the detail loss;
the loss acquisition unit includes:
the first distance acquisition unit is used for acquiring a first Euclidean distance between the feature map of the denoising image and the feature map of the noiseless image through a preset VGG neural network, wherein the preset VGG neural network is a pre-trained 19-layer VGG network;
a second distance acquisition unit configured to acquire a second euclidean distance between high-frequency image information of the denoised image and high-frequency image information of the noiseless image by high-frequency filtering; and
a loss acquisition subunit for acquiring the detail loss according to the formula

$$l_{detail} = l_{per} + \lambda \, l_{hf}$$

wherein $l_{detail}$ represents the detail loss, $l_{per}$ represents the first Euclidean distance, $l_{hf}$ represents the second Euclidean distance, and $\lambda$ represents the weight of the second Euclidean distance;
the first distance acquisition unit includes:
the feature map acquisition unit is used for importing the denoising image and the noiseless image into the pre-trained VGG neural network VGG19, extracting the feature map of the denoising image and the feature map of the noiseless image through the 4th convolutional layer before the 5th pooling layer of the VGG19, and obtaining the first Euclidean distance between the feature map of the denoising image and the feature map of the noiseless image;
the second distance acquisition unit includes:
the high-frequency information acquisition unit is used for acquiring the high-frequency information of the denoising image and the noiseless image through a high-pass filter, and acquiring a second Euclidean distance between the high-frequency information of the denoising image and the high-frequency information of the noiseless image, wherein the high-pass filter is a first-order high-pass filter adopting a Sobel operator.
6. The apparatus of claim 5, wherein the apparatus further comprises:
the feature extraction unit is used for extracting noise image features of the noise image through a first preset convolution layer and an activation layer of the convolution neural network; and
and the characteristic denoising subunit is used for denoising the noise image characteristic through a preset residual block of the convolutional neural network to obtain the denoising image characteristic, wherein the residual block comprises a second preset convolutional layer and an activation layer of the convolutional neural network.
7. A computing terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 4 when the computer program is executed.
8. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 4.
CN201910100276.XA 2019-01-31 2019-01-31 Network training method, device, terminal and storage medium for image denoising Active CN109872288B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910100276.XA CN109872288B (en) 2019-01-31 2019-01-31 Network training method, device, terminal and storage medium for image denoising

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910100276.XA CN109872288B (en) 2019-01-31 2019-01-31 Network training method, device, terminal and storage medium for image denoising

Publications (2)

Publication Number Publication Date
CN109872288A CN109872288A (en) 2019-06-11
CN109872288B true CN109872288B (en) 2023-05-23

Family

ID=66918487

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910100276.XA Active CN109872288B (en) 2019-01-31 2019-01-31 Network training method, device, terminal and storage medium for image denoising

Country Status (1)

Country Link
CN (1) CN109872288B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110349103A (en) * 2019-07-01 2019-10-18 昆明理工大学 It is a kind of based on deep neural network and jump connection without clean label image denoising method
CN110443758B (en) * 2019-07-05 2023-08-25 广东省人民医院(广东省医学科学院) Medical image denoising method and device
CN112308785A (en) * 2019-08-01 2021-02-02 武汉Tcl集团工业研究院有限公司 Image denoising method, storage medium and terminal device
CN110569961A (en) * 2019-08-08 2019-12-13 合肥图鸭信息科技有限公司 neural network training method and device and terminal equipment
CN110728636A (en) * 2019-09-17 2020-01-24 杭州群核信息技术有限公司 Monte Carlo rendering image denoising model, method and device based on generative confrontation network
CN110738616B (en) * 2019-10-12 2020-06-26 成都考拉悠然科技有限公司 Image denoising method with detail information learning capability
CN111047537A (en) * 2019-12-18 2020-04-21 清华大学深圳国际研究生院 System for recovering details in image denoising
CN111161180B (en) * 2019-12-26 2023-09-26 华南理工大学 Deep learning ultrasonic image de-noising method based on migration and structure priori
CN113096023B (en) * 2020-01-08 2023-10-27 字节跳动有限公司 Training method, image processing method and device for neural network and storage medium
CN111369456B (en) * 2020-02-28 2021-08-31 深圳市商汤科技有限公司 Image denoising method and device, electronic device and storage medium
CN111626950A (en) * 2020-05-19 2020-09-04 上海集成电路研发中心有限公司 Online training device and method for image denoising model
EP3996035A1 (en) * 2020-11-05 2022-05-11 Leica Microsystems CMS GmbH Methods and systems for training convolutional neural networks
CN112330575B (en) * 2020-12-03 2022-10-14 华北理工大学 Convolution neural network medical CT image denoising method
CN112598599B (en) * 2020-12-29 2024-04-09 南京大学 Denoising model training method and denoising method for hyperspectral image
CN112801888A (en) * 2021-01-06 2021-05-14 杭州海康威视数字技术股份有限公司 Image processing method, image processing device, computer equipment and storage medium
CN112967195B (en) * 2021-03-04 2024-04-23 浙江大华技术股份有限公司 Image denoising method, device and computer readable storage medium
CN113538281B (en) * 2021-07-21 2023-07-11 深圳大学 Image denoising method, image denoising device, computer equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108171217A (en) * 2018-01-29 2018-06-15 深圳市唯特视科技有限公司 A kind of three-dimension object detection method based on converged network
CN108304921B (en) * 2018-02-09 2021-02-02 北京市商汤科技开发有限公司 Convolutional neural network training method and image processing method and device
CN108335306B (en) * 2018-02-28 2021-05-18 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN108230280A (en) * 2018-04-11 2018-06-29 哈尔滨工业大学 Image speckle noise minimizing technology based on tensor model and compressive sensing theory
CN108765320B (en) * 2018-05-16 2021-06-22 哈尔滨工业大学 Image restoration system based on multi-level wavelet convolution neural network
CN109118435A (en) * 2018-06-15 2019-01-01 广东工业大学 A kind of depth residual error convolutional neural networks image de-noising method based on PReLU
CN109063584B (en) * 2018-07-11 2022-02-22 深圳大学 Facial feature point positioning method, device, equipment and medium based on cascade regression

Also Published As

Publication number Publication date
CN109872288A (en) 2019-06-11

Similar Documents

Publication Publication Date Title
CN109872288B (en) Network training method, device, terminal and storage medium for image denoising
Almeida et al. Blind and semi-blind deblurring of natural images
CN111028163B (en) Combined image denoising and dim light enhancement method based on convolutional neural network
Dabov et al. Image restoration by sparse 3D transform-domain collaborative filtering
CN108416740B (en) Iterative adaptive median filtering method for eliminating salt and pepper noise
Salmon et al. From patches to pixels in non-local methods: Weighted-average reprojection
Kaur et al. Comparative analysis of image denoising techniques
WO2020093914A1 (en) Content-weighted deep residual learning for video in-loop filtering
CN108648162B (en) Gradient-related TV factor image denoising and deblurring method based on noise level
CN116029946B (en) Heterogeneous residual error attention neural network model-based image denoising method and system
US20090074318A1 (en) Noise-reduction method and apparatus
CN110782406A (en) Image denoising method and device based on information distillation network
CN107451961B (en) Method for recovering sharp image under multiple fuzzy noise images
CN114155161B (en) Image denoising method, device, electronic equipment and storage medium
Wang et al. An improved image blind deblurring based on dark channel prior
US8106971B2 (en) Apparatus and method for estimating signal-dependent noise in a camera module
KR101707337B1 (en) Multiresolution non-local means filtering method for image denoising
CN116385312A (en) Low-illumination image denoising method based on phase correlation
CN115761242A (en) Denoising method and terminal based on convolutional neural network and fuzzy image characteristics
CN115829870A (en) Image denoising method based on variable scale filtering
CN106897975B (en) Image denoising method for hypercube particle calculation
Onuki et al. Trilateral filter on graph spectral domain
CN113888405A (en) Denoising and demosaicing method based on clustering self-adaptive expansion convolutional neural network
Li et al. Joint motion deblurring with blurred/noisy image pair
CN109191391B (en) Attenuation parameter self-adaptive non-local mean image noise reduction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant