CN110503616A - Generative adversarial network applied to image denoising - Google Patents
Generative adversarial network applied to image denoising Download PDF Info
- Publication number
- CN110503616A CN110503616A CN201910801567.1A CN201910801567A CN110503616A CN 110503616 A CN110503616 A CN 110503616A CN 201910801567 A CN201910801567 A CN 201910801567A CN 110503616 A CN110503616 A CN 110503616A
- Authority
- CN
- China
- Prior art keywords
- convolution
- network
- picture
- output
- loss function
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
- 238000004519 manufacturing process Methods 0.000 title claims abstract description 17
- 238000003475 lamination Methods 0.000 claims description 40
- 230000004069 differentiation Effects 0.000 claims description 9
- 230000009467 reduction Effects 0.000 abstract description 5
- 238000005457 optimization Methods 0.000 abstract description 4
- 238000012545 processing Methods 0.000 abstract description 4
- 230000000644 propagated effect Effects 0.000 abstract description 3
- 238000012549 training Methods 0.000 description 28
- 230000006870 function Effects 0.000 description 22
- 238000000034 method Methods 0.000 description 16
- 238000013527 convolutional neural network Methods 0.000 description 7
- 230000008569 process Effects 0.000 description 7
- 230000008859 change Effects 0.000 description 5
- 238000010586 diagram Methods 0.000 description 5
- 230000000694 effects Effects 0.000 description 5
- 230000007423 decrease Effects 0.000 description 3
- 238000012986 modification Methods 0.000 description 3
- 230000004048 modification Effects 0.000 description 3
- 238000013528 artificial neural network Methods 0.000 description 2
- 239000011159 matrix material Substances 0.000 description 2
- 230000004075 alteration Effects 0.000 description 1
- 238000013473 artificial intelligence Methods 0.000 description 1
- 230000006399 behavior Effects 0.000 description 1
- 239000003086 colorant Substances 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000018109 developmental process Effects 0.000 description 1
- 238000006073 displacement reaction Methods 0.000 description 1
- 238000007781 pre-processing Methods 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a generative adversarial network applied to image denoising, in the technical field of image processing, comprising: a generator, which receives a noisy image and outputs a clean image, the whole network consisting of 12 blocks; and a discriminator network, which receives the clean picture sample produced by the generator and outputs the probability that the input picture is a clean image. The loss function of the discriminator network is the sum of a first loss function, a second loss function and a gradient penalty, where the first loss function is the loss function corresponding to the discriminator network's output for the result of the generator, and the second loss function is the loss function corresponding to the discriminator network's output for the noise-free picture. With this embodiment of the invention, the generative network learns how to cancel picture noise by optimizing in several directions at once, such as image difference, noise variance and penalty strategies; a picture that needs denoising only has to be fed into the trained generative adversarial network, and the denoised result is obtained with a single forward-propagation operation.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a generative adversarial network applied to image denoising.
Background technique
Image denoising is a basic image processing operation whose purpose is to remove noise while preserving the structure of the original image. The most common approach is to denoise with a stacked sparse denoising autoencoder architecture, but this may reduce picture quality to some extent. Researchers in the field have therefore kept looking for better image denoising methods. In recent years, supervised learning with convolutional neural networks (CNNs) has received wide attention in computer vision. Since 2012, CNNs have made huge progress and been widely applied in image classification and image detection, and much related work has used CNNs for image processing. A CNN convolves the output of the previous layer with convolution kernels and automatically extracts features of that output, including colors, textures and abstract shape structure; it is notably robust to displacement, scaling and other distortions. Compared with other deep feedforward neural networks, a CNN has fewer parameters to consider, which makes it a very attractive deep learning structure.
A generative adversarial network (GAN) is an unsupervised learning method that learns by letting two neural networks play a game against each other, and is often used in image processing. The output of the generative network must imitate the real samples in the training set as closely as possible, while the input to the discriminating network is either a real sample or the output of the generative network; the discriminating network's purpose is to distinguish the generator's output from real samples as well as it can, and the generative network, in turn, tries to fool the discriminating network. The two networks confront each other and continually adjust their parameters; the final goal is that the discriminating network cannot judge whether the generator's output is real. GANs brought generative models back into the artificial intelligence field, which sparked the interest of scholars and even the public in studying generative models. After the fermentation of 2014 and 2015, many training tricks for improved models and survey articles began to appear, which greatly stimulated the follow-up development of GANs, especially in problems related to image generation and its applications.
At present, most image denoising inventions are optimized only for one specific noise type, and their ability to remove mixed noise composed of several uncertain kinds of noise is poor.
Summary of the invention
The purpose of the present invention is to provide a generative adversarial network applied to image denoising that uses several optimization directions at once, such as image difference, noise variance and penalty strategies, to teach the generative network how to cancel picture noise. A picture that needs denoising only has to be fed into the trained generative adversarial network, and the denoised result is obtained with a single forward-propagation operation. Compared with traditional picture denoising methods, this embodiment of the invention achieves a finer denoising effect, and the denoising strength can be adjusted through the penalty-weight parameters of the algorithm, so the denoising process is flexible and highly adaptable.
To achieve the above goals, the present invention provides a generative adversarial network applied to image denoising, comprising:
a generator, which receives a noisy image and outputs a clean image; the whole network consists of 12 blocks, comprising, in order, a first convolutional layer, a second convolutional layer, a third convolutional layer, a fourth convolutional layer, a fifth convolutional layer, a sixth convolutional layer, a first deconvolutional layer, a second deconvolutional layer, a third deconvolutional layer, a fourth deconvolutional layer, a fifth deconvolutional layer and a sixth deconvolutional layer. The first to fourth convolutional layers and the second to fifth deconvolutional layers each have 64 convolution kernels producing 64 output maps; the fifth convolutional layer and the first deconvolutional layer each have 32 convolution kernels producing 32 output maps; the sixth convolutional layer has 1 convolution kernel producing 1 output map, and the sixth deconvolutional layer has 3 convolution kernels producing 3 output maps. The output of the second convolutional layer is concatenated with that of the fourth deconvolutional layer, and the output of the fourth convolutional layer with that of the second deconvolutional layer. The noisy image received by the generator and the output of the sixth deconvolutional layer are subtracted from each other to obtain a clean picture sample as the output result;
a discriminator network, which receives the clean picture sample produced by the generator and outputs the probability that the input picture is a clean image. It comprises: a first convolutional layer with 5x5 kernels, stride 2, and 64 kernels producing 64 output maps; a second convolutional layer with 5x5 kernels, stride 2, and 128 kernels producing 128 output maps; a third convolutional layer with 3x3 kernels, stride 2, and 512 kernels producing 512 output maps; a fourth convolutional layer with 3x3 kernels, stride 1, and 128 kernels producing 128 output maps; and fifth and sixth convolutional layers, both with 3x3 kernels and stride 1, the fifth having 64 kernels producing 64 output maps and the sixth having 3 kernels producing 3 output maps;
wherein the loss function of the discriminator network is the sum of a first loss function, a second loss function and a gradient penalty, the first loss function being the loss function corresponding to the discriminator network's output for the result of the generator, and the second loss function being the loss function corresponding to the discriminator network's output for the noise-free picture.
In one implementation, the first loss function is embodied as follows:
where b_ij is the pixel value of the pixel with coordinates ij in the discriminator network's output result, b'_ij is the truncated pixel value, and the discriminator network's output result here is the discrimination result obtained by feeding the generator's output image into the discriminator network.
In one implementation, the second loss function is embodied as follows:
where a_ij is the pixel value of the pixel with coordinates ij in the discriminator network's output result, a'_ij is the truncated pixel value, m and n are the maximum values of the coordinates, loss_real denotes this second loss function, and the discriminator network's output result here is the discrimination result obtained by feeding the noise-free picture into the discriminator network.
In one implementation, the gradient penalty term has the following specific formula:
where, M = μ × input + (1 − μ) × G
gp is the gradient penalty term, G is the image generated by the generator, input is the corresponding clean image, μ is a parameter of the constructed model, and A is the result obtained by passing M through the discriminator network.
With the generative adversarial network applied to image denoising provided by this embodiment of the present invention, several optimization directions at once, such as image difference, noise variance and penalty strategies, teach the generative network how to cancel picture noise. A picture that needs denoising only has to be fed into the trained generative adversarial network, and the denoised result is obtained with a single forward-propagation operation. Compared with traditional picture denoising methods, this embodiment achieves a finer denoising effect, and the denoising strength can be adjusted through the penalty-weight parameters of the algorithm, so the denoising process is flexible and highly adaptable.
Detailed description of the invention
Fig. 1 is a schematic structural diagram of an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of a generator network of an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a discriminator network of an embodiment of the present invention.
Fig. 4 is a schematic flow diagram of image processing of an embodiment of the present invention.
Specific embodiment
The embodiments of the present invention are illustrated below through specific examples, and those skilled in the art can easily understand other advantages and effects of the present invention from the content disclosed in this specification. The present invention can also be embodied or practiced through other different specific embodiments, and the details in this specification can be modified or altered in various ways based on different viewpoints and applications without departing from the spirit of the present invention.
Please refer to Figs. 1-2. It should be noted that the drawings provided in this embodiment only illustrate the basic conception of the invention schematically; the drawings show only the components related to the invention rather than the component count, shapes and sizes of an actual implementation. In an actual implementation, the form, quantity and proportion of each component may change arbitrarily, and the component layout may also be more complex.
First, 1000 outdoor scene pictures can be obtained from the SUN database. Using Matlab, the pictures in the database are corrupted in batches with white Gaussian noise, salt-and-pepper noise and several other random noises, the noise strength being set to 30 dB for each picture, forming 1000 clean-noisy pairs in total; 800 pairs are randomly selected as the training set, and the remaining 200 pictures serve as the prediction set to evaluate the network.
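The data-preparation step above can be sketched as follows. The exact Matlab noise recipe is not given in the patent; the SNR interpretation of "30 dB", the salt-and-pepper fraction and the function names here are assumptions:

```python
import numpy as np

def add_mixed_noise(img, snr_db=30.0, sp_fraction=0.01, rng=None):
    """Corrupt a [0, 1] float image with white Gaussian noise at the given
    SNR plus salt-and-pepper noise (one plausible reading of the patent's
    mixed-noise setup; exact parameters are not specified there)."""
    rng = np.random.default_rng() if rng is None else rng
    signal_power = np.mean(img ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noisy = img + rng.normal(0.0, np.sqrt(noise_power), img.shape)
    # Salt-and-pepper: flip a small fraction of pixels to 0 or 1.
    mask = rng.random(img.shape) < sp_fraction
    noisy[mask] = rng.integers(0, 2, int(mask.sum())).astype(noisy.dtype)
    return np.clip(noisy, 0.0, 1.0)

clean = np.full((32, 32), 0.5)
noisy = add_mixed_noise(clean, rng=np.random.default_rng(0))
```

A clean-noisy pair is then simply the tuple `(noisy, clean)`.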
As shown in Figs. 1 and 2, the generator network has a symmetric structure; its input is a noisy image, its output is a clean image, and its training goal is to learn how to cancel picture noise. The whole network consists of 12 blocks: the first six blocks each contain a convolutional layer, a batch normalization layer and an LReLU activation layer, and the last six blocks each contain a deconvolutional layer, a batch normalization layer and an LReLU activation layer. Every convolution kernel is 3x3 with stride 1. Layers 1, 2, 3, 4, 8, 9, 10 and 11 each have 64 convolution kernels producing 64 output maps; layers 5 and 7 each have 32 kernels producing 32 output maps; layer 6 has 1 kernel producing 1 output map; and layer 12 has 3 kernels producing 3 output maps.
With reference to the U-net structure, the present invention concatenates the feature maps obtained by the convolutional layers 2 and 4 with the feature maps obtained by the deconvolutional layers 10 and 8 respectively, and finally subtracts the feature map produced by layer 12 from the noisy input to obtain the output result. A characteristic of the generator network constructed here is that it does not compute the denoised image directly; instead, it relies on the stacked convolutional and deconvolutional layers and the skip connections between them to compute the noise distribution in the noisy image, extracting the noise at the pixel level, and the denoised image is obtained by subtracting the computed noise image from the original image.
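The layer bookkeeping of the 12-block generator described above (channel counts, skip pairs, and the final residual subtraction) can be checked compactly; the names and the table form below are illustrative, not patent text:

```python
# Six conv blocks followed by six deconv blocks, 3x3 kernels, stride 1.
# Skip connections concatenate the outputs of conv blocks 2 and 4 with
# deconv blocks 10 and 8, and the final 3-channel output is subtracted
# from the noisy input: the network predicts the noise, residual-style.
generator_plan = (
    [("conv", 64)] * 4 + [("conv", 32), ("conv", 1)]            # blocks 1-6
    + [("deconv", 32)] + [("deconv", 64)] * 4 + [("deconv", 3)]  # blocks 7-12
)

skips = {2: 10, 4: 8}  # conv block -> deconv block whose output it joins

def denoise(noisy, predicted_noise):
    """Final step: output = noisy input minus the estimated noise."""
    return noisy - predicted_noise
```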
As shown in Figs. 1 and 3, a discriminator network is constructed whose input is the clean picture sample produced by the generator and whose output is the judged probability that the input picture is a clean image. The whole network consists of 6 convolution units: the first unit contains a convolutional layer and an LReLU layer; the second to fourth units each contain a convolutional layer, a batch normalization layer and an LReLU layer; the fifth unit contains a convolutional layer, a batch normalization layer and a ReLU layer; and the sixth unit contains a convolutional layer and a ReLU layer.
In the first convolutional layer, the kernel size is 5x5 with stride 2, and 64 kernels produce 64 output maps. The second layer has the same kernel size and stride as the first, with 128 kernels producing 128 output maps. The third layer has 3x3 kernels with stride 2 and 512 kernels producing 512 output maps. The fourth layer has 3x3 kernels with stride 1 and 128 kernels producing 128 output maps. The fifth and sixth layers have the same kernel size and stride as the fourth; the fifth has 64 kernels producing 64 output maps, and the sixth has 3 kernels producing 3 output maps.
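The six discriminator units can likewise be tabulated for a quick consistency check (kernel size, stride, output maps per the description above; the table itself is an illustration, not patent text):

```python
# (kernel, stride, output maps) for each of the six discriminator layers.
discriminator_plan = [
    (5, 2, 64),    # layer 1: 5x5 kernels, stride 2
    (5, 2, 128),   # layer 2: same kernel/stride as layer 1
    (3, 2, 512),   # layer 3
    (3, 1, 128),   # layer 4
    (3, 1, 64),    # layer 5: same kernel/stride as layer 4
    (3, 1, 3),     # layer 6
]

# Three stride-2 layers halve the spatial resolution three times.
downsampling = 1
for _, stride, _ in discriminator_plan:
    downsampling *= stride
```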
Clean images are used as input to train the discriminator network; the learning rate is set to 0.002, and each batch feeds in 20 pictures.
While training the generator, the present invention also feeds the clean picture into the discriminator network as input, and the resulting discrimination result D_logits-input is added to the loss function. A truncation operation is applied to this discrimination result to prevent numerical divergence, and the overall mean of the truncated matrix is computed; the specific formula is as follows:
where a_ij is each pixel value of D_logits-input and a'_ij is the truncated pixel value. loss_real is likewise added to the loss function as a penalty term.
The present invention passes the noisy picture through the generator to obtain the result G, feeds this result into the discriminator network, and obtains the discrimination result D_logits-G. A truncation operation is then applied to this discrimination result and the overall mean of the truncated matrix is computed; the specific formula is as follows:
where b_ij is each pixel value of D_logits-G and b'_ij is the truncated pixel value.
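The truncate-then-average operation applied to both discrimination results can be read as a clip followed by a mean. The patent's formulas are not reproduced in this text, so the [-10, 10] clipping bounds below are an assumed choice:

```python
import numpy as np

def truncated_mean(d_logits, lo=-10.0, hi=10.0):
    """Clip the discriminator's per-pixel outputs to a bounded range to
    prevent numerical divergence, then take the overall mean (the
    patent's truncation operation; the bounds are assumptions)."""
    return float(np.mean(np.clip(d_logits, lo, hi)))

d_on_clean = np.array([[5.0, 50.0], [-3.0, 2.0]])    # D_logits-input
d_on_fake  = np.array([[-1.0, 0.5], [30.0, -40.0]])  # D_logits-G
loss_real = truncated_mean(d_on_clean)
loss_fake = truncated_mean(d_on_fake)
```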
The present invention uses the pixel-space difference image_deff as a penalty term of the generator network; the specific formula is as follows:
where n is the total number of training samples, G is the result image produced by the generator, and input is the clean picture corresponding to that result image. image_deff therefore measures the gap between the image denoised by the model and the true picture, and is added to the loss function as a penalty term to optimize the model.
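The elided image_deff formula plausibly reduces to a mean pixel difference between the generated and clean images; the absolute-value form below is an assumption (a squared difference would be equally consistent with the text):

```python
import numpy as np

def image_deff(generated, clean):
    """Pixel-space difference penalty between the generator's result and
    the corresponding clean picture (formula not shown in the patent;
    mean absolute difference is one natural reading)."""
    return float(np.mean(np.abs(generated - clean)))
```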
The present invention also adds the noise variance of the image produced by the generator network to the optimization process as a penalty term; the noise variance is obtained by the following formula:
where N is the number of pixels and μ is the noise mean.
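The noise-variance penalty follows the stated definition directly, sigma^2 = (1/N) * sum((x_i − mu)^2) with mu the noise mean:

```python
import numpy as np

def noise_variance(noise_img):
    """Variance of the estimated noise image over its N pixels."""
    mu = noise_img.mean()
    return float(np.mean((noise_img - mu) ** 2))
```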
The present invention also adds the total variation (Total Variation) of the generated image G to the loss function as a penalty term. The total variation is the sum of the absolute differences of adjacent pixel values in the input image and evaluates the smoothness of the image; the specific formula is as follows:
where Ω denotes the domain of the image and pixel (x, y) ∈ Ω.
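The total variation as defined above (sum of absolute differences of adjacent pixel values over the image domain) can be sketched as the usual anisotropic form:

```python
import numpy as np

def total_variation(img):
    """Anisotropic total variation: absolute differences of horizontally
    and vertically adjacent pixels, summed over the image domain."""
    dx = np.abs(np.diff(img, axis=1)).sum()  # horizontal neighbours
    dy = np.abs(np.diff(img, axis=0)).sum()  # vertical neighbours
    return float(dx + dy)
```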
In summary, the loss function of the generator network can be expressed by the following formula:
where α and β are two introduced hyperparameters used to adjust the weights during optimization.
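One plausible assembly of the elided generator loss combines the adversarial term (the generator wants D's score on its output to be high, hence the minus sign) with the three penalties above. How the patent distributes the two hyperparameters α and β over the three penalties is not shown, so the weighting below is an assumption:

```python
def g_loss(loss_fake, img_deff, noise_var, tv, alpha=1.0, beta=1.0):
    """Hypothetical assembly of the generator loss: adversarial term plus
    weighted image-difference, noise-variance and total-variation
    penalties (the exact combination is not shown in the patent)."""
    return -loss_fake + alpha * img_deff + beta * (noise_var + tv)
```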
To prevent vanishing and exploding gradients, the present invention adds a gradient penalty gp to the loss function of the discriminator; gp alleviates the training instability of the original generative adversarial network, and the process can be expressed by the following formula:
M = μ × input + (1 − μ) × G
where G is the image generated by the generator network, input is the corresponding clean image, μ is a parameter of the constructed model, and A is the result obtained by passing M through the discriminator network.
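Reading M and A together with standard gradient-penalty practice for adversarial training, the elided gp formula is plausibly of the following form (the weight λ and the unit-norm target are assumptions, not patent text):

```latex
M = \mu \cdot \mathrm{input} + (1-\mu) \cdot G, \qquad
A = D(M), \qquad
gp = \lambda \, \mathbb{E}_{M}\!\left[\bigl(\lVert \nabla_{M} A \rVert_{2} - 1\bigr)^{2}\right]
```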
In summary, the loss function of the discriminator network can be expressed by the following formula:
D_loss = loss_fake − loss_real + gp
The discriminator network is connected with the generator network; the overall model is shown in Fig. 4. The adversarial error between the generator network and the discriminator network is integrated into the error term of Step 6 as a new error term, and the convolutional generator network is trained with the back-propagation algorithm.
In each training round, the generator's parameters are first fixed and the discriminator network is trained, optimizing the discriminator with the gradient descent algorithm according to its loss function. Then the discriminator's parameters are fixed and the generator network is trained: the adversarial error contributed by the discriminator network is added to the generator network's overall error, and the generator network is trained with the gradient descent algorithm.
As shown in Fig. 4, the implementation flow of image processing and training of the generative network in the embodiment of the invention comprises the steps of:
Step 1: obtaining a picture set from a database;
Step 2: image preprocessing: adding noise of varying strength to the clean pictures, and constructing the training set and the test set;
Step 3: building the generator, whose input is the noisy picture;
Step 4: building the discriminator, and training the discriminator with clean pictures;
Step 5: feeding the denoised picture output by the generator into the discriminator, which outputs the discrimination result for the generated picture;
Step 6: computing the image difference and noise variance and, combined with the discriminator's discrimination result, training the generator and discriminator with the gradient descent algorithm;
Step 7: repeatedly training the discriminator and generator according to Steps 5 and 6 until a generative network with stable effect is obtained;
Step 8: feeding the picture to be denoised into the trained generative network to obtain the denoised picture.
The two training steps above are alternately repeated as one cycle for 10000 rounds in total. The specific training process of the network is: for the set number of training rounds (10000 in total), in each round 20 pictures are randomly selected from the training set and fed into the network for training. In each round, the generator and discriminator losses are computed according to the g_loss and D_loss functions above; the change of each network parameter is then computed from the loss values with the gradient descent algorithm and multiplied by the set learning rate to obtain the final parameter change; finally the change is added to the old parameters, completing the parameter update for that round. During training, the adversarial error of the network decreases as the number of training rounds increases, but the rate of decrease gradually slows; after the full 10000 rounds, if the adversarial error remains stable and essentially stops decreasing, the network is considered trained.
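The alternating schedule described above can be sketched as a skeleton; `update_d` and `update_g` stand in for the real gradient-descent steps and are hypothetical names, not patent functions:

```python
import random

def train(train_set, update_d, update_g, rounds=10000, batch=20):
    """Alternate discriminator and generator updates for the given
    number of rounds, sampling a fresh batch each round."""
    history = []
    for _ in range(rounds):
        pictures = random.sample(train_set, batch)
        d_loss = update_d(pictures)  # generator parameters fixed
        g_loss = update_g(pictures)  # discriminator parameters fixed
        history.append((d_loss, g_loss))
    return history
```

In practice one would monitor `history` for the plateau in adversarial error that the text uses as the stopping criterion.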
The adversarial error is computed as follows. For the discriminator network, the input is either a real sample or a generated sample, and the training goal is: when the input is a real sample, give as high a score as possible; when the input is a generated sample, give as low a score as possible. The expression of the adversarial error is therefore as follows:
where G denotes the generator network and D the discriminator network, θ_d is the parameter of the discriminator network D, and θ_g is the parameter of the generator network G. For the generator network G, the smaller the value of the above formula the better, meaning the generator's result is ever closer to the real samples; for the discriminator network D, the larger the value the better, meaning the discriminator can more accurately tell generated pictures from true pictures.
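Given the stated goals for D and G, the elided adversarial-error expression is plausibly the standard minimax objective (a reconstruction, not the patent's own formula):

```latex
\min_{\theta_g}\max_{\theta_d}\;
\mathbb{E}_{x \sim p_{\mathrm{data}}}\!\bigl[\log D(x)\bigr]
+ \mathbb{E}_{\tilde{x} \sim p_{g}}\!\bigl[\log\bigl(1 - D(\tilde{x})\bigr)\bigr]
```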
In the whole training process, one training pass of the discriminator and one of the generator form one cycle. Using stochastic gradient descent, combined with the global error and loss function of Step 6, the generator network G and the discriminator network D are trained simultaneously to obtain the final network model. Step 8 then lets the model produce a result picture with a denoising effect that is closer to the real samples.
The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Anyone familiar with this technology may modify or change the above embodiments without departing from the spirit and scope of the present invention. Therefore, all equivalent modifications or changes completed by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed in the present invention should be covered by the claims of the present invention.
Claims (4)
1. A generative adversarial network applied to image denoising, characterized by comprising:
a generator, which receives a noisy image and outputs a clean image, the whole network consisting of 12 blocks, comprising, in order, a first convolutional layer, a second convolutional layer, a third convolutional layer, a fourth convolutional layer, a fifth convolutional layer, a sixth convolutional layer, a first deconvolutional layer, a second deconvolutional layer, a third deconvolutional layer, a fourth deconvolutional layer, a fifth deconvolutional layer and a sixth deconvolutional layer; wherein the first to fourth convolutional layers and the second to fifth deconvolutional layers each have 64 convolution kernels producing 64 output maps, the fifth convolutional layer and the first deconvolutional layer each have 32 convolution kernels producing 32 output maps, the sixth convolutional layer has 1 convolution kernel producing 1 output map, and the sixth deconvolutional layer has 3 convolution kernels producing 3 output maps; the output of the second convolutional layer is concatenated with that of the fourth deconvolutional layer, and the output of the fourth convolutional layer with that of the second deconvolutional layer; and the noisy image received by the generator and the output of the sixth deconvolutional layer are subtracted from each other to obtain a clean picture sample as the output result;
a discriminator network, which receives the clean picture sample produced by the generator and outputs the probability that the input picture is a clean image, comprising: a first convolutional layer with 5x5 kernels, stride 2, and 64 kernels producing 64 output maps; a second convolutional layer with 5x5 kernels, stride 2, and 128 kernels producing 128 output maps; a third convolutional layer with 3x3 kernels, stride 2, and 512 kernels producing 512 output maps; a fourth convolutional layer with 3x3 kernels, stride 1, and 128 kernels producing 128 output maps; and fifth and sixth convolutional layers, both with 3x3 kernels and stride 1, the fifth having 64 kernels producing 64 output maps and the sixth having 3 kernels producing 3 output maps;
wherein the loss function of the discriminator network is the sum of a first loss function, a second loss function and a gradient penalty, the first loss function being the loss function corresponding to the discriminator network's output for the result of the generator, and the second loss function being the loss function corresponding to the discriminator network's output for the noise-free picture.
2. The generative adversarial network applied to image denoising according to claim 1, characterized in that the first loss function is embodied as follows:
where b_ij is the pixel value of the pixel with coordinates ij in the discriminator network's output result, b'_ij is the truncated pixel value, and the discriminator network's output result here is the discrimination result obtained by feeding the generator's output image into the discriminator network.
3. The generative adversarial network applied to image denoising according to claim 2, characterized in that the second loss function is embodied as follows:
where a_ij is the pixel value of the pixel with coordinates ij in the discriminator network's output result, a'_ij is the truncated pixel value, m and n are the maximum values of the coordinates, loss_real denotes this second loss function, and the discriminator network's output result here is the discrimination result obtained by feeding the noise-free picture into the discriminator network.
4. The generative adversarial network applied to image denoising according to claim 3, characterized in that the specific formula of the gradient penalty term is as follows:
where, M = μ × input + (1 − μ) × G
gp is the gradient penalty term, G is the image generated by the generator, input is the corresponding clean image, μ is a parameter of the constructed model, and A is the result obtained by passing M through the discriminator network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910801567.1A CN110503616A (en) | 2019-08-28 | 2019-08-28 | Generative adversarial network applied to image denoising |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110503616A true CN110503616A (en) | 2019-11-26 |
Family
ID=68590131
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910801567.1A Withdrawn CN110503616A (en) | 2019-08-28 | 2019-08-28 | A kind of production network applied to picture denoising |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110503616A (en) |
2019-08-28 CN CN201910801567.1A patent/CN110503616A/en not_active Withdrawn
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111145123A (en) * | 2019-12-27 | 2020-05-12 | 福州大学 | Image denoising method based on U-Net fusion detail retention |
CN111145123B (en) * | 2019-12-27 | 2022-06-14 | 福州大学 | Image denoising method based on U-Net fusion retention details |
CN111899185A (en) * | 2020-06-18 | 2020-11-06 | 深圳先进技术研究院 | Training method and device of image noise reduction model, electronic equipment and storage medium |
WO2022027216A1 (en) * | 2020-08-04 | 2022-02-10 | 深圳高性能医疗器械国家研究院有限公司 | Image denoising method and application thereof |
CN112230210A (en) * | 2020-09-09 | 2021-01-15 | 南昌航空大学 | HRRP radar target identification method based on improved LSGAN and CNN |
CN112164008A (en) * | 2020-09-29 | 2021-01-01 | 中国科学院深圳先进技术研究院 | Training method of image data enhancement network, and training device, medium, and apparatus thereof |
CN112164008B (en) * | 2020-09-29 | 2024-02-23 | 中国科学院深圳先进技术研究院 | Training method of image data enhancement network, training device, medium and equipment thereof |
CN113011532A (en) * | 2021-04-30 | 2021-06-22 | 平安科技(深圳)有限公司 | Classification model training method and device, computing equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110503616A (en) | A kind of production network applied to picture denoising | |
CN110599409B (en) | Convolutional neural network image denoising method based on multi-scale convolutional groups and parallel | |
CN108765319A (en) | A kind of image de-noising method based on generation confrontation network | |
CN114140353B (en) | Swin-Transformer image denoising method and system based on channel attention | |
CN110473154A (en) | A kind of image de-noising method based on generation confrontation network | |
CN108171320A (en) | A kind of image area switching network and conversion method based on production confrontation network | |
CN105825484B (en) | A kind of depth image denoising and Enhancement Method based on deep learning | |
CN107016406A (en) | The pest and disease damage image generating method of network is resisted based on production | |
CN108334816A (en) | The Pose-varied face recognition method of network is fought based on profile symmetry constraint production | |
CN109741247A (en) | A kind of portrait-cartoon generation method neural network based | |
CN108876737A (en) | A kind of image de-noising method of joint residual error study and structural similarity | |
CN110458750A (en) | A kind of unsupervised image Style Transfer method based on paired-associate learning | |
CN107729819A (en) | A kind of face mask method based on sparse full convolutional neural networks | |
CN107423678A (en) | A kind of training method and face identification method of the convolutional neural networks for extracting feature | |
CN107749052A (en) | Image defogging method and system based on deep learning neutral net | |
CN106204482B (en) | Based on the mixed noise minimizing technology that weighting is sparse | |
CN111145116A (en) | Sea surface rainy day image sample augmentation method based on generation of countermeasure network | |
CN106991408A (en) | The generation method and method for detecting human face of a kind of candidate frame generation network | |
CN111986075B (en) | Style migration method for target edge clarification | |
CN109685716A (en) | A kind of image super-resolution rebuilding method of the generation confrontation network based on Gauss encoder feedback | |
CN110427799A (en) | Based on the manpower depth image data Enhancement Method for generating confrontation network | |
CN112561791B (en) | Image style migration based on optimized AnimeGAN | |
CN109753864A (en) | A kind of face identification method based on caffe deep learning frame | |
CN109146061A (en) | The treating method and apparatus of neural network model | |
CN106204461A (en) | Compound regularized image denoising method in conjunction with non local priori |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | | Application publication date: 20191126 |