CN115511730A - Image strobe removal method based on cycle generative adversarial network - Google Patents

Image strobe removal method based on cycle generative adversarial network

Info

Publication number
CN115511730A
Authority
CN
China
Prior art keywords
image
stroboscopic
network
loss
fake
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211073159.7A
Other languages
Chinese (zh)
Inventor
林晓丹
李杨福
邱应强
朱建清
曾焕强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaqiao University
Original Assignee
Huaqiao University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaqiao University filed Critical Huaqiao University
Priority to CN202211073159.7A priority Critical patent/CN115511730A/en
Publication of CN115511730A publication Critical patent/CN115511730A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an image strobe removal method based on a cycle generative adversarial network (CycleGAN). A CycleGAN model is trained on a synthesized strobe image data set. The model comprises two generator networks and two discriminator networks: one generator produces a strobe-free image from an input strobe image, the other produces a strobe image from an input strobe-free image, and the discriminators, following the adversarial training idea, compare the generated images against real input images to train the generators. The trained model takes a strobe image as input and outputs the corresponding strobe-free image, which effectively improves the practicality of image strobe removal.

Description

Image strobe removal method based on cycle generative adversarial network
Technical Field
The invention belongs to the field of computer image processing and mainly relates to image strobe removal in artificial-illumination imaging environments. Specifically, it provides an image strobe removal method based on a cycle generative adversarial network.
Background
In an artificial lighting environment, when lighting devices such as fluorescent lamps and LED lamps are powered by AC mains, their light intensity fluctuates with the periodic variation of the grid current. Because a rolling-shutter camera exposes the sensor line by line, an image shot by such a camera captures the brightness of the light source at a different instant for each line. This appears as bands of alternating light and dark, i.e., the strobe effect in the image.
A generative adversarial network (GAN) is a generative model that generally comprises two sub-models with opposing optimization objectives: a generator, responsible for fitting the latent distribution of the real data, and a discriminator, a binary classifier that judges whether its input is real data or data forged by the generator. The cycle generative adversarial network (CycleGAN) is a GAN trained without supervision, designed to solve the problem that paired data are difficult to acquire in supervised learning. The cycle consistency loss proposed by CycleGAN allows the model to learn a one-to-one mapping between the source and target domains in the absence of paired data. Both the generator and the discriminator adopt convolutional neural network structures.
The generative adversarial network takes a real picture as its input, and the generator network outputs another picture with specified attributes.
Existing image strobe removal methods include: fusing several images taken with different exposure durations to suppress the strobe; or designing a digital filter that removes the strobe component from the image using additional prior information such as the line-scan frequency of the imaging device and the grid frequency.
However, these methods are hard to apply when only a single picture is available or when the camera and grid parameters are unknown; their usage scenarios are limited and their practicality is low.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides an image strobe removal method based on a cycle generative adversarial network. By introducing the idea of cycle-consistent adversarial generation, the method needs neither additional prior information nor a paired strobe-free reference image, which broadens its usage scenarios and improves its practicality.
To this end, the invention adopts the following technical scheme:
An image strobe removal method based on a cycle generative adversarial network, comprising the following steps:
Step 1, train the image de-strobe model;
Step 1.1, construct the de-strobe cycle generative adversarial network framework;
The de-strobe cycle GAN framework comprises a generator network G, a generator network R, a discriminator network D1 and a discriminator network D2, wherein the generator network R learns the mapping that eliminates image strobe; the generator network G learns to generate strobe images; the discriminator network D1 discriminates the model's de-strobed images from real strobe-free images; and the discriminator network D2 discriminates the model's generated strobe images from real strobe images;
Step 1.2, input a strobe/strobe-free image training set and train the image de-strobe model; the training set comprises strobe images X and strobe-free images Y; the generator network R produces a de-strobed image Yfake from an input strobe image X, and an identity-mapped strobe-free image Yiden from an input strobe-free image Y; the generator network G produces a strobe image Xfake from an input strobe-free image Y, and an identity-mapped strobe image Xiden from an input strobe image X; the flicker loss loss_Flicker, the gradient loss loss_Grad and the identity loss loss_Iden of the model are calculated;
The de-strobed image Yfake and the generated strobe image Xfake are then fed into the generator network G and the generator network R respectively, producing a cycled strobe image Xcycle and a cycled de-strobed image Ycycle, and the cycle consistency loss loss_Cycle is calculated;
The real strobe-free image Y and the de-strobed image Yfake are separately input into the discriminator network D1, which outputs the real-image probabilities [P1_real^1, P1_real^2, P1_real^3] and the generated-image probabilities [P1_fake^1, P1_fake^2, P1_fake^3]; the discriminator loss loss_Disc1 corresponding to D1 is calculated;
The real strobe image X and the generated strobe image Xfake are separately input into the discriminator network D2, which outputs the real-image probabilities [P2_real^1, P2_real^2, P2_real^3] and the generated-image probabilities [P2_fake^1, P2_fake^2, P2_fake^3]; the discriminator loss loss_Disc2 corresponding to D2 is calculated;
The network parameters of D1 and D2 are updated according to loss_Disc1 and loss_Disc2;
Then the de-strobed image Yfake is input into the discriminator network D1, which outputs the generated-image probabilities [P1_fake^1, P1_fake^2, P1_fake^3], and the adversarial loss loss_Gen_R of the generator network R is calculated;
The generated strobe image Xfake is input into the discriminator network D2, which outputs the generated-image probabilities [P2_fake^1, P2_fake^2, P2_fake^3], and the adversarial loss loss_Gen_G of the generator network G is calculated;
The total losses loss_R and loss_G of the generator networks R and G are calculated and the network parameters are updated, yielding the image de-strobe model.
Step 2, perform image strobe removal with the model trained in step 1: a strobe image is input into the trained image de-strobe model, which outputs the corresponding de-strobed image.
The flicker loss loss_Flicker is calculated as follows:
loss_Flicker = |X_p - Yfake_p|_1 + |Y_p - Xfake_p|_1
where the subscript p denotes the mean of the two-dimensional pixel matrix computed in each of the R, G and B channels.
The gradient loss loss_Grad is calculated as follows:
[Equation image in original: loss_Grad, expressed in terms of the horizontal gradient operator ∇_h and the vertical gradient operator ∇_v of the images]
where ∇_h denotes computing the gradient in the horizontal direction and ∇_v denotes computing the gradient in the vertical direction.
The identity loss loss_Iden is calculated as follows:
loss_Iden = |X - Xiden|_1 + |Y - Yiden|_1
the Cycle consistency loss _ Cycle is calculated as follows:
loss_Cycle = |X - Xcycle|_1 + |Y - Ycycle|_1
The discriminator losses loss_Disc1 and loss_Disc2 are calculated as follows:
loss_Disc1 = -Σ_i (log(1 - P1_fake^i) + log(P1_real^i))
loss_Disc2 = -Σ_i (log(1 - P2_fake^i) + log(P2_real^i)).
The adversarial losses loss_Gen_R and loss_Gen_G of the generator networks are calculated as follows:
loss_Gen_R = -Σ_i log(P1_fake^i);
loss_Gen_G = -Σ_i log(P2_fake^i).
the total loss of the generation network R and the generation network G is calculated as follows:
loss_R = loss_Flicker + loss_Grad + loss_Iden + loss_Cycle + loss_Gen_R;
loss_G = loss_Flicker + loss_Iden + loss_Cycle + loss_Gen_G.
the stroboscopic image X in the stroboscopic/non-stroboscopic image training set is obtained by carrying out stroboscopic synthesis on the non-stroboscopic image Y, and the details are as follows;
a sinusoidal signal matching the pattern of the strobe signal is generated according to the variation law of the strobe signal, and tiled along the column direction to match the size of the image;
the tiled signal is superposed on the strobe-free image Y to obtain the synthesized strobe image X.
The generator networks G and R have the same structure, each comprising convolution layers, deconvolution layers, skip connections, ReLU nonlinear activation layers and instance normalization layers;
the discriminator networks D1 and D2 have the same structure, each comprising average pooling layers, convolution layers, LeakyReLU nonlinear activation layers and spectral normalization layers.
With this scheme, the invention trains a cycle generative adversarial network model on a synthesized strobe image data set. The model comprises two generator networks and two discriminator networks: the generators respectively produce a strobe-free image from an input strobe image and a strobe image from an input strobe-free image, while the discriminators, following the adversarial training idea, compare the generated images against the real input images to train the generators. The trained model takes a strobe image as input and outputs the corresponding strobe-free image, effectively improving the practicality of image strobe removal.
Compared with the prior art, the invention has the following advantages:
1. Unsupervised training is used, so no paired images with different shutter durations or exposure levels are required, reducing the difficulty of data acquisition.
2. A flicker loss and a gradient loss derived from the statistical law of the strobe are proposed, improving image quality after strobe removal.
3. No additional prior information, such as the line-scan frequency of the imaging device or the grid frequency, is required.
4. The method imposes no restriction on image background or content, and shows good generalization performance and broad application prospects.
Drawings
The invention is further described below with reference to the accompanying drawings.
FIG. 1 is a diagram of the cycle generative adversarial de-strobe network model;
FIG. 2 shows example images from the data set;
FIG. 3 is a block diagram of the generator network G;
FIG. 4 is a block diagram of the discriminator network D;
FIG. 5 compares the de-strobe result of the proposed model on a strobe image with that of a plain cycle generative adversarial network trained without the flicker loss and gradient loss.
Detailed Description
The invention discloses an image strobe removal method based on a cycle generative adversarial network, which specifically comprises the following steps:
Step 1, train the image de-strobe model;
Step 1.1, construct the de-strobe cycle generative adversarial network framework;
As shown in FIG. 1, the de-strobe cycle GAN framework includes a generator network G, a generator network R, a discriminator network D1 and a discriminator network D2. The generator network R learns the mapping that eliminates image strobe; the generator network G learns to generate strobe images; the discriminator network D1 discriminates the model's de-strobed images from real strobe-free images; and the discriminator network D2 discriminates the model's generated strobe images from real strobe images.
To let the generator networks better capture the data distribution of real images, the structure of the generator networks R and G is defined as shown in FIG. 3. The discriminator networks D1 and D2 adopt multi-scale discrimination and spectral normalization: they discriminate the input image at several scales simultaneously and guide the generator training, so that the de-strobe mapping learned by the generators preserves image texture and detail as far as possible. The structure of the discriminator network D is shown in FIG. 4. The generator networks G and R have the same structure, each comprising convolution layers, deconvolution layers, skip connections, ReLU nonlinear activation layers and instance normalization layers. The discriminator networks D1 and D2 have the same structure, each comprising average pooling layers, convolution layers, LeakyReLU nonlinear activation layers and spectral normalization layers.
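The patent names the component layers but not the exact topology (layer counts, channel widths, kernel sizes). The following PyTorch sketch is therefore only an illustrative assumption of how the generator (R and G) and the multi-scale discriminator (D1 and D2) could be assembled from the listed components:

```python
# Illustrative sketch only: layer counts, channel widths and kernel sizes are
# assumptions; the patent specifies the component types but not the topology.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Shared structure of R and G: convolution, deconvolution, skip
    connection, ReLU and instance normalization layers."""
    def __init__(self, ch=64):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, ch, 4, 2, 1),
                                  nn.InstanceNorm2d(ch), nn.ReLU(True))
        self.enc2 = nn.Sequential(nn.Conv2d(ch, ch * 2, 4, 2, 1),
                                  nn.InstanceNorm2d(ch * 2), nn.ReLU(True))
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1),
                                  nn.InstanceNorm2d(ch), nn.ReLU(True))
        self.dec2 = nn.ConvTranspose2d(ch * 2, 3, 4, 2, 1)  # ch*2: skip concat

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d1 = self.dec1(e2)
        out = self.dec2(torch.cat([d1, e1], dim=1))  # skip connection
        return torch.sigmoid(out)                    # assumed [0, 1] output range

class Discriminator(nn.Module):
    """Shared structure of D1 and D2: spectrally normalized convolutions,
    LeakyReLU, and average pooling for multi-scale discrimination."""
    def __init__(self, ch=64):
        super().__init__()
        sn = nn.utils.spectral_norm
        self.body = nn.Sequential(
            sn(nn.Conv2d(3, ch, 4, 2, 1)), nn.LeakyReLU(0.2, True),
            sn(nn.Conv2d(ch, ch * 2, 4, 2, 1)), nn.LeakyReLU(0.2, True),
            sn(nn.Conv2d(ch * 2, 1, 4, 1, 1)), nn.Sigmoid())
        self.down = nn.AvgPool2d(3, stride=2, padding=1)

    def forward(self, x):
        # One probability map per scale i = 1..3, matching [P^1, P^2, P^3].
        outs = []
        for _ in range(3):
            outs.append(self.body(x))
            x = self.down(x)
        return outs
```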
Step 1.2, input a strobe/strobe-free image training set and train the image de-strobe model; the training set comprises strobe images X and strobe-free images Y.
Existing image strobe removal methods usually need either a paired strobe-free image as reference or extra imaging information as a prior to estimate the strobe component, which is then filtered out with a digital filter; their practicality is poor. Unlike these methods, the invention introduces a cycle generative adversarial network and trains on unpaired strobe/strobe-free images, aiming to learn the mapping from strobe images to strobe-free images. For this purpose, a training data set is obtained by artificially synthesizing strobe images on the basis of the existing Indoor CVPR-09 data set. Example images from the data set are shown in FIG. 2.
The strobe images X in the strobe/strobe-free training set are obtained by synthesizing strobe onto the strobe-free images Y, as follows (a code sketch is given after these steps):
a sinusoidal signal matching the pattern of the strobe signal is generated according to the variation law of the strobe signal, and tiled along the column direction to match the size of the image;
the tiled signal is superposed on the strobe-free image Y to obtain the synthesized strobe image X.
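As a concrete reading of the two synthesis steps above, the sketch below superposes a row-dependent sinusoid, tiled along the column direction, onto a strobe-free image. The frequency, amplitude and phase values are illustrative assumptions, not parameters given in the patent:

```python
# Minimal strobe-synthesis sketch. The sinusoid frequency (cycles per image),
# amplitude and phase are assumed values chosen for illustration only.
import numpy as np

def synthesize_strobe(y, cycles=6.0, amplitude=0.2, phase=0.0):
    """y: strobe-free image, float array in [0, 1], shape (H, W, 3)."""
    h, w, _ = y.shape
    rows = np.arange(h)
    # Rolling-shutter strobe varies with the row index: one sample per row.
    signal = amplitude * np.sin(2 * np.pi * cycles * rows / h + phase)
    # Tile the 1-D row signal along the column direction to the image size.
    band = np.tile(signal[:, None, None], (1, w, 3))
    return np.clip(y + band, 0.0, 1.0)  # superpose, keep a valid pixel range
```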
The generator network R produces a de-strobed image Yfake from an input strobe image X, and an identity-mapped strobe-free image Yiden from an input strobe-free image Y; the generator network G produces a strobe image Xfake from an input strobe-free image Y, and an identity-mapped strobe image Xiden from an input strobe image X; the flicker loss loss_Flicker, the gradient loss loss_Grad and the identity loss loss_Iden of the model are calculated:
loss_Flicker = |X_p - Yfake_p|_1 + |Y_p - Xfake_p|_1
[Equation image in original: loss_Grad, expressed in terms of the horizontal gradient operator ∇_h and the vertical gradient operator ∇_v of the images]
loss_Iden = |X - Xiden|_1 + |Y - Yiden|_1
where the subscript p denotes the mean of the two-dimensional pixel matrix computed in each of the R, G and B channels, ∇_h denotes computing the gradient in the horizontal direction, and ∇_v denotes computing the gradient in the vertical direction.
The de-strobed image Yfake and the generated strobe image Xfake are then fed into the generator network G and the generator network R respectively, producing a cycled strobe image Xcycle and a cycled de-strobed image Ycycle, and the cycle consistency loss loss_Cycle is calculated:
loss_Cycle = |X - Xcycle|_1 + |Y - Ycycle|_1
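Under these definitions the four generator-side losses can be written compactly. Note that the exact form of loss_Grad appears only as an equation image in the original, so the L1 comparison of horizontal and vertical gradients used below is an assumption consistent with the surrounding text:

```python
# Sketch of the generator-side losses. loss_grad is an assumed form (L1 on
# horizontal/vertical gradients); the original formula is an equation image.
import torch

def l1(a, b):
    return torch.mean(torch.abs(a - b))

def loss_flicker(x, yfake, y, xfake):
    # Subscript p: mean of the 2-D pixel matrix in each R, G, B channel.
    return (l1(x.mean(dim=(2, 3)), yfake.mean(dim=(2, 3)))
            + l1(y.mean(dim=(2, 3)), xfake.mean(dim=(2, 3))))

def grad_h(img):  # gradient in the horizontal direction
    return img[:, :, :, 1:] - img[:, :, :, :-1]

def grad_v(img):  # gradient in the vertical direction
    return img[:, :, 1:, :] - img[:, :, :-1, :]

def loss_grad(x, yfake):  # assumed form of the gradient loss
    return l1(grad_h(x), grad_h(yfake)) + l1(grad_v(x), grad_v(yfake))

def loss_iden(x, xiden, y, yiden):
    return l1(x, xiden) + l1(y, yiden)

def loss_cycle(x, xcycle, y, ycycle):
    return l1(x, xcycle) + l1(y, ycycle)
```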
The real strobe-free image Y and the de-strobed image Yfake are separately input into the discriminator network D1, which outputs the real-image probabilities [P1_real^1, P1_real^2, P1_real^3] and the generated-image probabilities [P1_fake^1, P1_fake^2, P1_fake^3]; the discriminator loss loss_Disc1 corresponding to D1 is calculated and the parameters of D1 are updated according to the following formula:
loss_Disc1 = -Σ_i (log(1 - P1_fake^i) + log(P1_real^i))
The real strobe image X and the generated strobe image Xfake are separately input into the discriminator network D2, which outputs the real-image probabilities [P2_real^1, P2_real^2, P2_real^3] and the generated-image probabilities [P2_fake^1, P2_fake^2, P2_fake^3]; the discriminator loss loss_Disc2 corresponding to D2 is calculated and the parameters of D2 are updated according to the following formula:
loss_Disc2 = -Σ_i (log(1 - P2_fake^i) + log(P2_real^i))
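A minimal sketch of the two discriminator updates, assuming each discriminator returns its three per-scale probability maps as a list (as in the architecture sketch above):

```python
# Sketch of the discriminator loss. p_real / p_fake are the per-scale
# probability lists [P^1, P^2, P^3] produced by D1 or D2.
import torch

def disc_loss(p_real, p_fake, eps=1e-8):
    # loss_Disc = -sum_i ( log(1 - P_fake^i) + log(P_real^i) )
    total = 0.0
    for pr, pf in zip(p_real, p_fake):
        total = total - (torch.log(1 - pf + eps) + torch.log(pr + eps)).mean()
    return total

# Illustrative usage: detach the generated images so only D1/D2 are updated.
# loss_Disc1 = disc_loss(D1(Y), D1(Yfake.detach()))
# loss_Disc2 = disc_loss(D2(X), D2(Xfake.detach()))
# opt_D.zero_grad(); (loss_Disc1 + loss_Disc2).backward(); opt_D.step()
```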
Then the de-strobed image Yfake is input into the discriminator network D1, which outputs the generated-image probabilities [P1_fake^1, P1_fake^2, P1_fake^3]; the adversarial loss loss_Gen_R of the generator network R is calculated according to the following formula:
loss_Gen_R = -Σ_i log(P1_fake^i);
The generated strobe image Xfake is input into the discriminator network D2, which outputs the generated-image probabilities [P2_fake^1, P2_fake^2, P2_fake^3]; the adversarial loss loss_Gen_G of the generator network G is calculated according to the following formula:
loss_Gen_G = -Σ_i log(P2_fake^i);
The total losses loss_R and loss_G of the generator networks R and G are calculated according to the following formulas, and the network parameters are updated to obtain the image de-strobe model:
loss_R = loss_Flicker + loss_Grad + loss_Iden + loss_Cycle + loss_Gen_R;
loss_G = loss_Flicker + loss_Iden + loss_Cycle + loss_Gen_G.
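Assembled into one generator update, the step could look like the sketch below; the helper names and the use of a single optimizer over R and G together are assumptions:

```python
# Sketch of the generator update following loss_R and loss_G above.
import torch

def gen_total_losses(D1, D2, yfake, xfake, flicker, grad, iden, cycle):
    eps = 1e-8
    loss_gen_r = -sum(torch.log(p + eps).mean() for p in D1(yfake))
    loss_gen_g = -sum(torch.log(p + eps).mean() for p in D2(xfake))
    loss_r = flicker + grad + iden + cycle + loss_gen_r
    loss_g = flicker + iden + cycle + loss_gen_g
    return loss_r, loss_g

# Illustrative usage with a shared optimizer over R and G parameters:
# loss_R, loss_G = gen_total_losses(D1, D2, Yfake, Xfake, fl, gr, idn, cyc)
# opt_RG.zero_grad(); (loss_R + loss_G).backward(); opt_RG.step()
```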
Step 2, perform image strobe removal with the model trained in step 1: a strobe image is input into the trained image de-strobe model, which outputs the corresponding de-strobed image.
The invention thus provides an image de-strobe model based on a cycle generative adversarial network, with a training phase and a use phase. The training phase proceeds as follows:
1. Input an original strobe image X and an original strobe-free image Y into the corresponding data paths (R→G→R, G→R→G), obtaining the model's de-strobed image Yfake, the model-generated strobe image Xfake, the identity-mapped strobe-free image Yiden, the identity-mapped strobe image Xiden, the cycled de-strobed image Ycycle and the cycled strobe image Xcycle;
2. Input the real strobe-free image Y and the model's de-strobed image Yfake into the discriminator network D1, and the synthesized strobe image X and the model-generated strobe image Xfake into the discriminator network D2; each discriminator outputs real/fake probabilities for its inputs;
3. Update the weights of the generator and discriminator networks according to the corresponding loss formulas.
In actual use the flow is simple: input a strobe image into the generator network R to obtain the de-strobed image Yfake.
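At inference time only the generator R is therefore needed. A minimal usage sketch, reusing the Generator class from the architecture sketch above (the checkpoint and file names are assumptions):

```python
# Minimal inference sketch: only generator R is used to de-strobe an image.
import torch
from torchvision import io
from torchvision.transforms.functional import convert_image_dtype

R = Generator()                                   # structure sketched earlier
R.load_state_dict(torch.load("generator_R.pth"))  # assumed checkpoint name
R.eval()

img = io.read_image("strobe.png")                 # uint8, shape (3, H, W)
x = convert_image_dtype(img, torch.float).unsqueeze(0)
with torch.no_grad():
    yfake = R(x)                                  # de-strobed output in [0, 1]
out = (yfake.squeeze(0).clamp(0, 1) * 255).to(torch.uint8)
io.write_png(out, "destrobed.png")
```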
Fig. 5 is a comparison graph of the stroboscopic removing effect of the proposed model on the stroboscopic image and the stroboscopic removing effect of the general cyclic generation countermeasure network (CycleCAN) without the stroboscopic loss and the gradient loss, and it can be seen from fig. 5 that the stroboscopic removing effect of the present invention is better.
In summary, the invention trains a cycle generative adversarial network model on a synthesized strobe image data set. The model comprises two generator networks and two discriminator networks: the generators respectively produce a strobe-free image from an input strobe image and a strobe image from an input strobe-free image, while the discriminators, following the adversarial training idea, compare the generated images against the real input images to train the generators. The trained model takes a strobe image as input and outputs the corresponding strobe-free image, effectively improving the practicality of image strobe removal.
The above description gives only embodiments of the invention and does not limit its technical scope; any minor modifications, equivalent changes and refinements made to the above embodiments in accordance with the technical spirit of the invention therefore remain within the scope of the technical solution of the invention.

Claims (10)

1. An image strobe removal method based on a cycle generative adversarial network, characterized in that the method comprises the following steps:
Step 1, train the image de-strobe model;
Step 1.1, construct the de-strobe cycle generative adversarial network framework;
The de-strobe cycle GAN framework comprises a generator network G, a generator network R, a discriminator network D1 and a discriminator network D2, wherein the generator network R learns the mapping that eliminates image strobe; the generator network G learns to generate strobe images; the discriminator network D1 discriminates the model's de-strobed images from real strobe-free images; and the discriminator network D2 discriminates the model's generated strobe images from real strobe images;
Step 1.2, input a strobe/strobe-free image training set and train the image de-strobe model; the training set comprises strobe images X and strobe-free images Y; the generator network R produces a de-strobed image Yfake from an input strobe image X, and an identity-mapped strobe-free image Yiden from an input strobe-free image Y; the generator network G produces a strobe image Xfake from an input strobe-free image Y, and an identity-mapped strobe image Xiden from an input strobe image X; the flicker loss loss_Flicker, the gradient loss loss_Grad and the identity loss loss_Iden of the model are calculated; the de-strobed image Yfake and the generated strobe image Xfake are then fed into the generator network G and the generator network R respectively, producing a cycled strobe image Xcycle and a cycled de-strobed image Ycycle, and the cycle consistency loss loss_Cycle is calculated;
The real strobe-free image Y and the de-strobed image Yfake are separately input into the discriminator network D1, which outputs the real-image probabilities [P1_real^1, P1_real^2, P1_real^3] and the generated-image probabilities [P1_fake^1, P1_fake^2, P1_fake^3]; the discriminator loss loss_Disc1 corresponding to the discriminator network D1 is calculated;
The real strobe image X and the generated strobe image Xfake are separately input into the discriminator network D2, which outputs the real-image probabilities [P2_real^1, P2_real^2, P2_real^3] and the generated-image probabilities [P2_fake^1, P2_fake^2, P2_fake^3]; the discriminator loss loss_Disc2 corresponding to the discriminator network D2 is calculated;
The network parameters of the discriminator networks D1 and D2 are updated according to loss_Disc1 and loss_Disc2;
The de-strobed image Yfake is input into the discriminator network D1, which outputs the generated-image probabilities [P1_fake^1, P1_fake^2, P1_fake^3], and the adversarial loss loss_Gen_R of the generator network R is calculated;
The generated strobe image Xfake is input into the discriminator network D2, which outputs the generated-image probabilities [P2_fake^1, P2_fake^2, P2_fake^3], and the adversarial loss loss_Gen_G of the generator network G is calculated;
The total losses loss_R and loss_G of the generator networks R and G are calculated and the network parameters are updated, yielding the image de-strobe model.
Step 2, perform image strobe removal with the model trained in step 1: a strobe image is input into the trained image de-strobe model, which outputs the corresponding de-strobed image.
2. The image strobe removal method based on a cycle generative adversarial network of claim 1, characterized in that the flicker loss loss_Flicker is calculated as follows:
loss_Flicker = |X_p - Yfake_p|_1 + |Y_p - Xfake_p|_1
where the subscript p denotes the mean of the two-dimensional pixel matrix computed in each of the R, G and B channels.
3. The image strobe removal method based on a cycle generative adversarial network of claim 1, characterized in that the gradient loss loss_Grad is calculated as follows:
[Equation image in original: loss_Grad, expressed in terms of the horizontal gradient operator ∇_h and the vertical gradient operator ∇_v of the images]
where ∇_h denotes computing the gradient in the horizontal direction and ∇_v denotes computing the gradient in the vertical direction.
4. The image strobe removal method based on a cycle generative adversarial network of claim 1, characterized in that the identity loss loss_Iden is calculated as follows:
loss_Iden = |X - Xiden|_1 + |Y - Yiden|_1
5. The image strobe removal method based on a cycle generative adversarial network of claim 1, characterized in that the cycle consistency loss loss_Cycle is calculated as follows:
loss_Cycle = |X - Xcycle|_1 + |Y - Ycycle|_1
6. The image strobe removal method based on a cycle generative adversarial network of claim 1, characterized in that the discriminator losses loss_Disc1 and loss_Disc2 are calculated as follows:
loss_Disc1 = -Σ_i (log(1 - P1_fake^i) + log(P1_real^i))
loss_Disc2 = -Σ_i (log(1 - P2_fake^i) + log(P2_real^i)).
7. The image strobe removal method based on a cycle generative adversarial network of claim 1, characterized in that the adversarial losses loss_Gen_R and loss_Gen_G of the generator networks are calculated as follows:
loss_Gen_R = -Σ_i log(P1_fake^i);
loss_Gen_G = -Σ_i log(P2_fake^i).
8. The image strobe removal method based on a cycle generative adversarial network of claim 1, characterized in that the total losses of the generator networks R and G are calculated as follows:
loss_R = loss_Flicker + loss_Grad + loss_Iden + loss_Cycle + loss_Gen_R;
loss_G = loss_Flicker + loss_Iden + loss_Cycle + loss_Gen_G.
9. The image strobe removal method based on a cycle generative adversarial network of claim 1, characterized in that the strobe images X in the strobe/strobe-free training set are obtained by synthesizing strobe onto the strobe-free images Y, as follows:
a sinusoidal signal matching the pattern of the strobe signal is generated according to the variation law of the strobe signal, and tiled along the column direction to match the size of the image;
the tiled signal is superposed on the strobe-free image Y to obtain the synthesized strobe image X.
10. The image strobe removal method based on a cycle generative adversarial network of claim 1, characterized in that the generator networks G and R have the same structure, each comprising convolution layers, deconvolution layers, skip connections, ReLU nonlinear activation layers and instance normalization layers;
the discriminator networks D1 and D2 have the same structure, each comprising average pooling layers, convolution layers, LeakyReLU nonlinear activation layers and spectral normalization layers.
CN202211073159.7A 2022-09-02 2022-09-02 Image strobe removal method based on cycle generative adversarial network Pending CN115511730A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211073159.7A CN115511730A (en) Image strobe removal method based on cycle generative adversarial network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211073159.7A CN115511730A (en) Image strobe removal method based on cycle generative adversarial network

Publications (1)

Publication Number Publication Date
CN115511730A true CN115511730A (en) 2022-12-23

Family

ID=84501851

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211073159.7A Image strobe removal method based on cycle generative adversarial network

Country Status (1)

Country Link
CN (1) CN115511730A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116055894A (en) * 2023-01-28 2023-05-02 荣耀终端有限公司 Image stroboscopic removing method and device based on neural network
CN116055894B (en) * 2023-01-28 2023-08-15 荣耀终端有限公司 Image stroboscopic removing method and device based on neural network

Similar Documents

Publication Publication Date Title
CN109919869B (en) Image enhancement method and device and storage medium
CN109218619A (en) Image acquiring method, device and system
CN107358195B (en) Non-specific abnormal event detection and positioning method based on reconstruction error and computer
CN109902633A (en) Accident detection method and device based on the camera supervised video of fixed bit
CN104751485B (en) GPU adaptive foreground extracting method
CN109977882B (en) A kind of half coupling dictionary is to the pedestrian of study again recognition methods and system
CN115511730A (en) Image strobe removing method based on loop generation countermeasure network
CN108921830A (en) A kind of demographic method based on image retrieval
CN110992366B (en) Image semantic segmentation method, device and storage medium
Kim et al. Binocular fusion net: deep learning visual comfort assessment for stereoscopic 3D
CN109447919A (en) In conjunction with the light field super resolution ratio reconstruction method of multi-angle of view and semantic textural characteristics
CN116309781B (en) Cross-modal fusion-based underwater visual target ranging method and device
CN112614070B (en) defogNet-based single image defogging method
CN108846330A (en) A kind of paper calligraphy and painting micro-image intelligent identifying system and recognition methods
CN103313084B (en) Integrated imaging double-shooting method based on different microlens array parameters
CN111242911A (en) Method and system for determining image definition based on deep learning algorithm
CN112686821A (en) Load data repairing method based on improved countermeasure network
Li et al. Multi-modality ensemble distortion for spatial steganography with dynamic cost correction
CN111798384A (en) Reverse rendering human face image illumination information editing method
CN115049559A (en) Model training method, human face image processing method, human face model processing device, electronic equipment and readable storage medium
CN111915566B (en) Infrared sample target detection method based on cyclic consistency countermeasure network
Jiao et al. Generalizable Person Re-Identification via Viewpoint Alignment and Fusion
CN110519590A (en) The detection method and system of camera module
CN111008930B (en) Fabric image super-resolution reconstruction method
CN114155400B (en) Image processing method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination