CN111986108A - Complex sea-air scene image defogging method based on generation countermeasure network

Info

Publication number: CN111986108A (granted as CN111986108B)
Application number: CN202010786125.7A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 刘明雍, 石廷超, 牛云, 黄宇轩, 汪培新, 方一帆, 王宇
Applicant and assignee: Northwestern Polytechnical University
Legal status: Active (granted)

Classifications

    • G06T5/70 Image enhancement or restoration: denoising, smoothing
    • G06N3/045 Neural network architectures: combinations of networks
    • G06N3/08 Neural networks: learning methods
    • G06T5/77 Image enhancement or restoration: retouching, inpainting, scratch removal
    • G06T2207/10004 Image acquisition modality: still image, photographic image
    • Y02A90/10 Information and communication technologies supporting adaptation to climate change

Abstract

The invention provides a defogging method for complex sea-air scene images based on a generative adversarial network. First, images of the sea-air scene are captured with an optical camera, covering both foggy and fog-free conditions. Second, the captured images are cropped to a uniform width and height. A fog-free image dataset and a foggy image dataset of the sea-air scene are then built separately; no scene-by-scene pairing of foggy and fog-free images is required. Next, a generative adversarial network for defogging complex sea-air scene images is constructed and trained on the prepared sea-air scene datasets. Finally, the trained defogging model is applied to foggy images of the complex sea-air scene. The method removes fog from maritime images captured in complex sea-air scenes while avoiding color distortion and unnatural scene restoration in the defogged images.

Description

Complex sea-air scene image defogging method based on generation countermeasure network
Technical Field
The invention relates to a defogging method for complex sea-air scene images based on a generative adversarial network, and belongs to the field of image processing.
Background
In the sea-air environment, the water vapor content near the sea surface is ample and the relative humidity is high; when there is a sufficient difference between the sea-surface water temperature and the air temperature above it, sea fog forms easily. Sea fog is a dangerous weather phenomenon: under fog, a shipborne computer imaging system cannot acquire clear images, normal optical monitoring and tracking become difficult, and celestial bodies and landmarks are harder to locate. In radar blind zones especially, this greatly complicates ship maneuvering and seriously threatens navigation safety. Alongside improved sea-fog forecasting, vigorously developing defogging methods for maritime foggy images is therefore of significant importance and research value.
In recent years, research on image defogging algorithms has made considerable progress. Current approaches fall into three categories. The first is enhancement-based: it improves the appearance of an image by raising contrast and similar operations, but because it does not model the degradation mechanism of foggy images, it introduces defects such as color distortion. The second is model-based: it builds an atmospheric scattering model from the degradation cause of the foggy image, solves the model parameters using image priors, and inverts the model to recover a fog-free image. The third is based on deep learning: a convolutional neural network directly learns the mapping from a foggy image to a transmission map or a clear image, avoiding hand-crafted feature models. However, most deep-learning defogging methods still use a convolutional network to estimate the parameters of a physical model and then recover the clear image through that model, constraining only the mean squared error between the network output and the label during optimization, so the quality of the defogged image is unstable. In addition, they depend on large labeled synthetic datasets, and their defogging performance on real foggy images is therefore unsatisfactory.
Therefore, a maritime image defogging method that avoids distortion in the defogged image is needed.
Disclosure of Invention
Conventional deep-learning defogging algorithms require pairs of foggy and fog-free images of the same scene as training data, and such paired data are difficult to collect. Existing deep-learning methods therefore mostly use artificially synthesized foggy images as the foggy dataset; however, synthesized and real foggy images differ greatly in pixel distribution, so defogging models trained on synthetic data perform poorly in real foggy scenes. The invention provides a defogging method for complex sea-air scene images based on a generative adversarial network that addresses this problem through the design of the network, and also resolves the color distortion and blurring that follow image defogging.
To this end, the technical scheme adopted by the invention comprises the following steps:
Step 1: capture images of the sea-air scene with an optical camera, including both foggy and fog-free images;
Step 2: crop the images collected in step 1 to a uniform width and height;
Step 3: build a fog-free image dataset and a foggy image dataset of the sea-air scene separately, without scene pairing between foggy and fog-free images;
Step 4: construct a generative adversarial network for defogging complex sea-air scene images;
Step 5: train the network built in step 4 with the sea-air scene datasets prepared in step 3;
Step 6: apply the defogging model trained in step 5 to foggy images of the complex sea-air scene.
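The data-preparation portion of the steps above can be sketched as a minimal pipeline. This is an illustrative sketch only: the patent specifies no implementation, and every function name below is a hypothetical placeholder.

```python
# Hypothetical sketch of steps 1-3: collect, crop to a uniform size, and build
# unpaired foggy / fog-free datasets. All names are illustrative placeholders.

def crop_to_size(image, width, height):
    """Step 2: crop an image (a nested list of pixel rows) to width x height."""
    return [row[:width] for row in image[:height]]

def build_datasets(foggy_images, fog_free_images, width, height):
    """Step 3: the two datasets are built separately and stay unpaired --
    no scene matching between a foggy and a fog-free image is needed."""
    foggy = [crop_to_size(im, width, height) for im in foggy_images]
    clear = [crop_to_size(im, width, height) for im in fog_free_images]
    return foggy, clear

# toy "images" of different raw sizes, as nested lists of pixel values
foggy_raw = [[[1] * 6 for _ in range(5)]]
clear_raw = [[[0] * 7 for _ in range(6)]]
foggy, clear = build_datasets(foggy_raw, clear_raw, width=4, height=4)
print(len(foggy[0]), len(foggy[0][0]))  # 4 4 -- every image now has the same size
```

The point of the unpaired construction is that steps 4-6 never require a foggy and a fog-free photograph of the same scene.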
Further, the generative adversarial network constructed in step 4 comprises a generator network and a discriminator network.
A conventional cycle-consistent generative adversarial network (CycleGAN) contains two generators and two discriminators. In the application of converting foggy images into fog-free images, however, we find that using two discriminators introduces an extra adversarial loss, slows network training, and degrades the real-time performance of the defogging model. The invention therefore proposes an improved cycle-consistent adversarial network with two generators and a single discriminator. Adopting this improved cycle requires optimizing and improving both the generator and discriminator network structures, which are described in detail below.
the generator comprises three modules, namely an encoding module, a conversion module and a decoding module.
Encoding module. A convolutional neural network extracts features from the image input to the network. The encoding module comprises three convolution units, each consisting of a convolution layer, a batch normalization layer, and an activation function layer. The first unit uses a 7×7 convolution kernel with stride 1; the second a 5×5 kernel with stride 1; the third a 3×3 kernel with stride 1. This design extracts deeper feature information from maritime foggy images, avoids vanishing gradients during training, reduces the computational complexity of model training, and accelerates training of the defogging model.
Conversion module. A conventional CycleGAN improves the quality of the reconstructed image with residual units; although this raises the quality of generated images, the feature information of each layer cannot be reused, so feature utilization is low and generated image quality suffers. The invention instead builds the generator's conversion module from a densely connected network: feature maps of the same size are connected, and the input of each layer receives the outputs of all preceding layers. This alleviates vanishing gradients, mitigates overfitting during training, and effectively raises feature utilization. The conversion module contains 5 densely connected layers in total. The output of layer l of the densely connected network is:
f_l = H_l([f_0, f_1, f_2, …, f_{l-1}])
where H_l is a nonlinear transformation function, a composite operation comprising batch normalization, linear rectification, and convolution; [f_0, f_1, f_2, …, f_{l-1}] denotes the concatenated feature vectors output by layers 0, 1, 2, …, l-1.
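The dense-connection rule f_l = H_l([f_0, …, f_{l-1}]) can be illustrated with a minimal sketch in which each feature is a single number and H_l is replaced by a toy sum-then-rectify operation standing in for the real batch normalization, linear rectification, and convolution:

```python
# Toy sketch of the 5-layer densely connected conversion module: layer l
# receives the outputs of ALL previous layers, f_l = H_l([f_0, ..., f_{l-1}]).
# H is a stand-in for the real batch-norm + ReLU + convolution combination.

def H(features):
    return max(sum(features), 0.0)  # sum, then linear rectification

def dense_block(f0, num_layers=5):
    features = [f0]                   # f_0 is the input to the block
    for _ in range(num_layers):
        features.append(H(features))  # every earlier feature is reused
    return features

feats = dense_block(1.0)
print(feats)  # [1.0, 1.0, 2.0, 4.0, 8.0, 16.0]
```

Each layer's input grows with depth, which is exactly the feature-reuse property the module relies on.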
Decoding module. The decoding module uses deconvolution (transposed convolution) layers to restore feature information and comprises three convolution units, each consisting of a convolution layer, a batch normalization layer, and an activation function layer. The first unit uses a 3×3 kernel with stride 0.5 (i.e., 2× upsampling); the second a 3×3 kernel with stride 0.5; the third a 7×7 kernel with stride 1. This design improves the restoration of feature information, avoids vanishing gradients, and mitigates overfitting during training.
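The encoder and decoder specifications above can be summarized as data. The `ConvUnit` structure below is an illustrative way to record the stated kernel sizes and strides, not something taken from the patent.

```python
# Layer specification of the generator's encoder and decoder as described
# above. A stride of 0.5 marks a fractionally strided (transposed) convolution,
# i.e. a 2x upsampling deconvolution unit.
from dataclasses import dataclass

@dataclass
class ConvUnit:
    kernel: int    # convolution kernel size (kernel x kernel)
    stride: float  # 0.5 denotes a deconvolution / upsampling unit
    # every unit is convolution + batch normalization + activation

ENCODER = [ConvUnit(7, 1), ConvUnit(5, 1), ConvUnit(3, 1)]
DECODER = [ConvUnit(3, 0.5), ConvUnit(3, 0.5), ConvUnit(7, 1)]

upsampling_units = [u for u in DECODER if u.stride < 1]
print(len(upsampling_units))        # 2 deconvolution units in the decoder
print([u.kernel for u in ENCODER])  # [7, 5, 3]: kernels shrink with depth
```

Note the rough symmetry: the encoder's kernels shrink from 7×7 to 3×3, and the decoder mirrors this, ending with a 7×7 unit.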
The invention builds the discriminator of the generative adversarial network to distinguish whether an image is an original image or one produced by the generator. To improve the capture of features in small local regions, the following discriminator network is designed.
The discriminator network consists entirely of convolution layers. The image input to the discriminator is divided into multiple image blocks, and the discriminator outputs an n×n matrix in which each element is the decision for one image block; the mean of the decisions over all image blocks is taken as the decision for the generated image.
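The averaging over per-block decisions can be sketched as follows. The 4×4 score matrix is invented toy data; in the real network the scores would come from the convolutional layers.

```python
# The discriminator outputs an n x n matrix of per-image-block decisions;
# the mean over all blocks is the final decision for the generated image.

def image_decision(score_matrix):
    n = len(score_matrix)
    return sum(sum(row) for row in score_matrix) / (n * n)

# hypothetical 4x4 block scores in [0, 1], where 1 means "judged real"
scores = [
    [0.9, 0.8, 0.7, 0.9],
    [0.6, 0.9, 0.8, 0.7],
    [0.9, 0.9, 0.8, 0.8],
    [0.7, 0.8, 0.9, 0.9],
]
print(round(image_decision(scores), 4))  # 0.8125
```

Judging many small patches rather than the whole image at once is what gives the discriminator its sensitivity to local detail.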
Further, in step 5 the self-built sea-air scene datasets are used to train the constructed network, which consists mainly of two generators and one discriminator.
The generative adversarial network of the invention realizes inter-conversion between foggy and fog-free images through two generators G, F and one discriminator D. Generator F converts a foggy image into a fog-free image, and discriminator D judges whether the generated defogged image is a real fog-free image. The loss function L_GAN between generator F and discriminator D is defined as:
L_GAN(F, D, x, y) = E_{x~P_data(x)}[log D(x)] + E_{y~P_data(y)}[log(1 - D(F(y)))]
where E denotes mathematical expectation, ~ denotes that a sample follows the given distribution, P_data denotes the data distribution, x is a real fog-free image, and y is a real foggy image.
The invention also introduces a cycle consistency loss to measure the loss between the real foggy image y and the foggy image ŷ = G(F(y)) regenerated by generator G. This ensures that the converted image retains as much of the original image's information as possible. The cycle consistency loss L_cyc is given by:
L_cyc(G, F) = E_{y~P_data(y)}[||G(F(y)) - y||_1]
where E denotes mathematical expectation, ~ denotes that a sample follows the given distribution, P_data denotes the data distribution, and ||·||_1 is the L1 norm.
The overall loss function L_Total of the generative adversarial network is designed as:
L_Total(G, F, D) = L_GAN(F, D, x, y) + ω·L_cyc(G, F)
where ω is the weight of the cycle consistency loss L_cyc in the objective function.
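The combined objective can be traced numerically on toy data. The pixel values, the adversarial loss value, and the weight ω = 10 below are all invented for illustration; the patent does not state a value for ω.

```python
# Toy evaluation of L_Total = L_GAN + w * L_cyc, with the cycle term computed
# as the mean L1 distance between a foggy image y and its reconstruction G(F(y)).

def cycle_loss(y, y_reconstructed):
    return sum(abs(a - b) for a, b in zip(y, y_reconstructed)) / len(y)

y     = [0.2, 0.5, 0.9, 0.4]   # flattened toy foggy image
y_rec = [0.1, 0.5, 0.7, 0.4]   # toy reconstruction G(F(y))
l_gan = 0.35                   # placeholder adversarial loss value
w     = 10.0                   # hypothetical cycle-consistency weight
l_total = l_gan + w * cycle_loss(y, y_rec)
print(round(l_total, 2))  # 1.1
```

A larger ω pushes the reconstruction G(F(y)) closer to y, trading adversarial realism for content preservation.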
Advantageous effects
The invention provides a defogging method for complex sea-air scene images based on a generative adversarial network. It removes fog from maritime foggy images in complex sea-air scenes while avoiding color distortion and unnatural scene restoration in the defogged images.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic diagram of the defogging principle of the generative adversarial network designed by the invention;
FIG. 2 is a schematic diagram of the generator network structure designed by the invention;
FIG. 3 is a schematic diagram of the discriminator network structure designed by the invention;
FIG. 4 is a schematic diagram of the defogging effect of the generative adversarial network designed by the invention.
Detailed Description
The complex sea-air scene image defogging method based on a generative adversarial network comprises the following steps:
Step 1: capture images of the sea-air scene with an optical camera, including both foggy and fog-free images.
Step 2: crop the images collected in step 1 to a uniform width and height.
Step 3: build a fog-free image dataset and a foggy image dataset of the sea-air scene separately, without scene pairing between foggy and fog-free images.
Step 4: construct a generative adversarial network for defogging complex sea-air scene images; its structure is shown in FIG. 1.
The invention proposes an improved cycle-consistent adversarial network with two generators G, F and a single discriminator D. Generator F converts a real foggy image y into a defogged image F(y), and generator G converts the defogged image F(y) back into a foggy image ŷ = G(F(y)). Discriminator D judges whether the generated defogged image is a real fog-free image.
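The forward cycle y → F(y) → G(F(y)) with a single discriminator can be wired up as a toy sketch, where "fog" is modelled as simply adding a constant to every pixel. All of this is illustrative and is not the patent's networks:

```python
# Toy wiring of the improved cycle: generators F (defog) and G (re-fog) plus a
# single discriminator D. Fog is modelled here as adding a constant FOG level.
FOG = 0.3

def F(y):   # defogging generator: foggy image -> defogged image F(y)
    return [p - FOG for p in y]

def G(x):   # fogging generator: fog-free image -> foggy image
    return [p + FOG for p in x]

def D(x):   # discriminator stand-in: judges whether x looks like a valid image
    return all(0.0 <= p <= 1.0 for p in x)

y = [0.8, 0.9, 0.7]                 # toy foggy image
defogged = F(y)
assert D(defogged)                  # only the defogged output is judged by D
reconstructed = G(defogged)         # cycle back: G(F(y)) should approximate y
print([round(p, 1) for p in reconstructed])  # [0.8, 0.9, 0.7]
```

Because only F's output faces a discriminator, the second adversarial loss of a standard CycleGAN (and its training cost) is avoided, while the cycle G(F(y)) ≈ y still anchors content.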
Adopting this improved cycle requires optimizing and improving both the generator and discriminator network structures. The generator and discriminator network structures designed by the invention are described in detail below:
the generator comprises three modules, namely an encoding module, a conversion module and a decoding module, as shown in fig. 2.
Encoding module. A convolutional neural network extracts features from the image input to the network. The encoding module comprises three convolution units, each consisting of a convolution layer, a batch normalization layer, and an activation function layer. The first unit uses a 7×7 convolution kernel with stride 1; the second a 5×5 kernel with stride 1; the third a 3×3 kernel with stride 1. This design extracts deeper feature information from maritime foggy images, avoids vanishing gradients during training, reduces the computational complexity of model training, and accelerates training of the defogging model.
Conversion module. The generator's conversion module is built from a densely connected network: feature maps of the same size are connected, and the input of each layer receives the outputs of all preceding layers. This alleviates vanishing gradients, mitigates overfitting during training, and effectively raises feature utilization. The conversion module contains 5 densely connected layers in total. The output of layer l of the densely connected network is:
f_l = H_l([f_0, f_1, f_2, …, f_{l-1}])
where H_l is a nonlinear transformation function, a composite operation comprising batch normalization, linear rectification, and convolution; [f_0, f_1, f_2, …, f_{l-1}] denotes the concatenated feature vectors output by layers 0, 1, 2, …, l-1.
Decoding module. The decoding module uses deconvolution (transposed convolution) layers to restore feature information and comprises three convolution units, each consisting of a convolution layer, a batch normalization layer, and an activation function layer. The first unit uses a 3×3 kernel with stride 0.5 (i.e., 2× upsampling); the second a 3×3 kernel with stride 0.5; the third a 7×7 kernel with stride 1. This design improves the restoration of feature information, avoids vanishing gradients, and mitigates overfitting during training.
The invention builds the discriminator of the generative adversarial network to distinguish whether an image is an original image or one produced by the generator. To improve the capture of features in small local regions, the following discriminator network is designed, as shown in FIG. 3.
The discriminator network consists entirely of convolution layers. The image input to the discriminator is divided into multiple image blocks, and the discriminator outputs an n×n matrix in which each element is the decision for one image block; the mean of the decisions over all image blocks is taken as the decision for the generated image.
Step 5: train the generative adversarial network built in step 4 with the sea-air scene datasets prepared in step 3. The network designed by the invention consists mainly of two generators and one discriminator.
The generative adversarial network of the invention realizes inter-conversion between foggy and fog-free images through two generators G, F and one discriminator D. Generator F converts a foggy image into a fog-free image, and discriminator D judges whether the generated defogged image is a real fog-free image. The loss function L_GAN between generator F and discriminator D is defined as:
L_GAN(F, D, x, y) = E_{x~P_data(x)}[log D(x)] + E_{y~P_data(y)}[log(1 - D(F(y)))]
where E denotes mathematical expectation, ~ denotes that a sample follows the given distribution, and P_data denotes the data distribution.
The invention also introduces a cycle consistency loss to measure the loss between y and the regenerated foggy image ŷ = G(F(y)). This ensures that the converted image retains as much of the original image's information as possible. The cycle consistency loss L_cyc is given by:
L_cyc(G, F) = E_{y~P_data(y)}[||G(F(y)) - y||_1]
where E denotes mathematical expectation, ~ denotes that a sample follows the given distribution, P_data denotes the data distribution, and ||·||_1 is the L1 norm.
The overall loss function L_Total of the generative adversarial network is designed as:
L_Total(G, F, D) = L_GAN(F, D, x, y) + ω·L_cyc(G, F)
where ω is the weight of the cycle consistency loss L_cyc in the objective function.
Step 6: apply the defogging model trained in step 5 to foggy images of the complex sea-air scene.
The following detailed embodiment of the invention is illustrative and is not to be construed as limiting the invention.
In this embodiment, 5000 maritime fog-free images are collected to build the maritime fog-free dataset, and 5000 maritime foggy images are collected to build the maritime foggy image dataset; these training sets are used to train the generative adversarial network designed by the invention. The model is trained with the TensorFlow deep learning framework on a high-performance computer platform equipped with a GPU. Details of the experimental environment are as follows:
Hardware environment:
CPU: Intel Core i7-7700K
GPU: Nvidia GeForce GTX 1080 Ti
Software environment:
Operating system: Ubuntu 16.04 LTS
Acceleration environment: CUDA 9.0 / cuDNN 7.0
Training framework: TensorFlow
Dependent libraries: NumPy 1.14 / Pillow 5.1 / SciPy 1.0 / Matplotlib 2.2
After training, a defogging test is performed on foggy images of the complex sea-air scene with the trained defogging model. The results are shown in FIG. 4: the left image is the original foggy image and the right image is the defogged image. The comparison shows that the defogged images retain fine detail with good brightness and clarity, and no serious image distortion occurs, which demonstrates that the complex-scene image defogging method based on the generative adversarial network is effective.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made in the above embodiments by those of ordinary skill in the art without departing from the principle and spirit of the present invention.

Claims (5)

1. A complex sea-air scene image defogging method based on a generative adversarial network, characterized by comprising the following steps:
Step 1: capture images of the sea-air scene with an optical camera, including both foggy and fog-free images;
Step 2: crop the images collected in step 1 to a uniform width and height;
Step 3: build a fog-free image dataset and a foggy image dataset of the sea-air scene separately, without scene pairing between foggy and fog-free images;
Step 4: construct a generative adversarial network for defogging complex sea-air scene images;
Step 5: train the network built in step 4 with the sea-air scene datasets prepared in step 3;
Step 6: apply the defogging model trained in step 5 to foggy images of the complex sea-air scene.
2. The complex sea-air scene image defogging method based on a generative adversarial network according to claim 1, characterized in that: the network built in step 4 is an improved cycle-consistent adversarial network with two generators G, F and a single discriminator D; generator F converts a real foggy image y into a defogged image F(y), generator G converts the defogged image F(y) back into a foggy image ŷ = G(F(y)), and discriminator D judges whether the generated defogged image is a real fog-free image.
3. The image defogging method for the complex sea and air scene based on the generation countermeasure network as claimed in claim 2, wherein: the generator comprises three modules, namely a coding module, a conversion module and a decoding module;
the coding module comprises three convolution units, wherein the first convolution unit comprises a convolution layer, a batch normalization layer and an activation function layer, wherein the convolution kernel size of the convolution layer is 7 multiplied by 7, and the convolution step length is 1; the second convolution unit comprises a convolution layer, a batch normalization layer and an activation function layer, wherein the convolution kernel size of the convolution layer is 5 multiplied by 5, and the convolution step length is 1; the third convolution unit comprises a convolution layer, a batch normalization layer and an activation function layer, wherein the convolution kernel size of the convolution layer is 3 multiplied by 3, and the convolution step length is 1;
the conversion module is designed based on a dense connection network and comprises a plurality of layers of dense connection units; the conversion module connects the feature maps having the same size by using a dense connection network, the input of each layer receiving the output of all previous layers; the output of the l layer of the dense connection network is fl=Hl([f0,f1,f2,…,fl-1]) Wherein H islThe method is a nonlinear transformation function, and is a combined operation comprising batch standardization operation, linear rectification operation and convolution operation; [ f ] of0,f1,f2,…,fl-1]Feature vectors output by layers 0,1,2, …, l-1;
the decoding module comprises three convolution units, each consisting of a convolution layer, a batch normalization layer and an activation function layer; the convolution layer of the first convolution unit has a 3×3 convolution kernel and a convolution stride of 0.5 (a fractional stride, i.e. a transposed convolution that doubles the spatial resolution); the convolution layer of the second convolution unit has a 3×3 convolution kernel and a convolution stride of 0.5; the convolution layer of the third convolution unit has a 7×7 convolution kernel and a convolution stride of 1.
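The dense connectivity used in the conversion module, f_l = H_l([f_0, …, f_{l−1}]), can be illustrated with a small NumPy sketch. The channel count and the stand-in for H_l below are assumptions for illustration only; the claim specifies H_l as a composite of batch normalization, linear rectification and convolution, and here the convolution is replaced by a simple channel projection:

```python
import numpy as np

def H(x, out_channels=4):
    """Stand-in for the nonlinear transform H_l: batch-norm-like
    normalization, linear rectification (ReLU), then a channel
    projection standing in for the convolution."""
    x = (x - x.mean()) / (x.std() + 1e-5)  # normalization stand-in
    x = np.maximum(x, 0.0)                 # linear rectification
    return x[:out_channels]                # projection back to a fixed channel count

def dense_block(f0, num_layers):
    """Dense connectivity: layer l receives the channel-wise concatenation
    of the outputs of all previous layers, f_l = H_l([f_0, ..., f_{l-1}])."""
    feats = [f0]
    for _ in range(num_layers):
        x = np.concatenate(feats, axis=0)  # [f_0, ..., f_{l-1}] along channels
        feats.append(H(x))
    return feats[-1]

f0 = np.random.rand(4, 16, 16)  # (channels, height, width) feature map
out = dense_block(f0, num_layers=3)
```

Note how the input channel count of each layer grows linearly with depth, which is the defining property of a densely connected network.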
4. The image defogging method for the complex sea and air scene based on the generation countermeasure network as claimed in claim 2, wherein: the discriminator is composed of convolution layers; the image input to the discriminator is divided into a plurality of image blocks, the output of the discriminator is an n×n matrix, and each element of the output matrix represents the judgment result for one of the image blocks; finally, the average of the judgment results over all image blocks is taken as the judgment result for the generated image.
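The patch-wise judgment in claim 4 (an n×n score matrix averaged into one decision, in the style of a PatchGAN discriminator) can be sketched as follows; the 4×4 score map below is an assumed example, not output from the patented network:

```python
import numpy as np

def patch_decision(score_map):
    """Claim-4-style decision: the discriminator emits an n x n matrix,
    one score per image block; the final judgment for the generated
    image is the mean over all block scores."""
    assert score_map.ndim == 2 and score_map.shape[0] == score_map.shape[1]
    return float(score_map.mean())

# assumed 4x4 per-patch scores: most patches judged close to "real" (1.0)
scores = np.array([[0.9, 0.8, 0.9, 1.0],
                   [0.7, 0.9, 0.8, 0.9],
                   [1.0, 0.9, 0.9, 0.8],
                   [0.8, 0.9, 1.0, 0.9]])
decision = patch_decision(scores)
```

Scoring patches rather than the whole image lets the discriminator penalize local artifacts (e.g. residual haze over part of the sea surface) that a single global score would average away.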
5. The image defogging method for the complex sea and air scene based on the generation countermeasure network as claimed in claim 2, wherein:
the loss function L_GAN between the generator F and the discriminator D is defined as follows:
L_GAN(F, D, x, y) = E_{x∼P_data(x)}[log D(x)] + E_{y∼P_data(y)}[log(1 − D(F(y)))]
where E denotes the mathematical expectation, ∼ denotes that a sample obeys a distribution, and P_data denotes the probability distribution of the data;
a cycle consistency loss function is further introduced to compute the loss between y and ŷ = G(F(y)); the cycle consistency loss L_cyc is as follows:
L_cyc(G, F) = E_{y∼P_data(y)}[‖G(F(y)) − y‖₁]
the total loss function L_Total of the generation countermeasure network is as follows:
L_Total(G, F, D) = L_GAN(F, D, x, y) + ωL_cyc(G, F)
where ω is a weighting coefficient balancing the adversarial and cycle consistency terms.
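The losses of claim 5 can be written out numerically as a NumPy sketch. The L1 norm in the cycle loss and the default weight ω = 10 follow the standard CycleGAN formulation and are assumptions here, as is all of the toy data:

```python
import numpy as np

def l_gan(d_real, d_fake):
    """Adversarial loss between generator F and discriminator D:
    E[log D(x)] + E[log(1 - D(F(y)))]."""
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

def l_cyc(y, y_rec):
    """Cycle consistency loss between y and y_hat = G(F(y)),
    taken here as an L1 distance."""
    return np.mean(np.abs(y_rec - y))

def l_total(d_real, d_fake, y, y_rec, omega=10.0):
    """Total loss L_Total = L_GAN + omega * L_cyc (omega assumed)."""
    return l_gan(d_real, d_fake) + omega * l_cyc(y, y_rec)

# toy discriminator scores and images to evaluate the losses
d_real = np.full(8, 0.9)           # D's scores on real fog-free images x
d_fake = np.full(8, 0.2)           # D's scores on generated F(y)
y      = np.zeros((3, 8, 8))       # foggy input
y_rec  = np.full((3, 8, 8), 0.05)  # reconstruction G(F(y))
total  = l_total(d_real, d_fake, y, y_rec)
```

During training, D is updated to increase L_GAN while F (and G, through the cycle term) is updated to decrease L_Total.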
CN202010786125.7A 2020-08-07 2020-08-07 Complex sea and air scene image defogging method based on generation countermeasure network Active CN111986108B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010786125.7A CN111986108B (en) 2020-08-07 2020-08-07 Complex sea and air scene image defogging method based on generation countermeasure network

Publications (2)

Publication Number Publication Date
CN111986108A true CN111986108A (en) 2020-11-24
CN111986108B CN111986108B (en) 2024-04-19

Family

ID=73446038

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010786125.7A Active CN111986108B (en) 2020-08-07 2020-08-07 Complex sea and air scene image defogging method based on generation countermeasure network

Country Status (1)

Country Link
CN (1) CN111986108B (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109493303A (en) * 2018-05-30 2019-03-19 湘潭大学 A kind of image defogging method based on generation confrontation network
CN109472818A (en) * 2018-10-17 2019-03-15 天津大学 A kind of image defogging method based on deep neural network
CN109903232A (en) * 2018-12-20 2019-06-18 江南大学 A kind of image defogging method based on convolutional neural networks
CN111179189A (en) * 2019-12-15 2020-05-19 深圳先进技术研究院 Image processing method and device based on generation countermeasure network GAN, electronic equipment and storage medium
CN111383192A (en) * 2020-02-18 2020-07-07 清华大学 SAR-fused visible light remote sensing image defogging method
AU2020100274A4 (en) * 2020-02-25 2020-03-26 Huang, Shuying DR A Multi-Scale Feature Fusion Network based on GANs for Haze Removal

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WEI LIU et al.: "A physics based generative adversarial network for single image defogging", IMAGE AND VISION COMPUTING *
肖进胜; 申梦瑶; 雷俊锋; 熊闻心; 焦陈坤: "Haze scene image translation algorithm based on generative adversarial network", 计算机学报 (Chinese Journal of Computers), no. 01 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112614070A (en) * 2020-12-28 2021-04-06 南京信息工程大学 DefogNet-based single image defogging method
CN112614070B (en) * 2020-12-28 2023-05-30 南京信息工程大学 defogNet-based single image defogging method
CN112950521A (en) * 2021-04-27 2021-06-11 上海海事大学 Image defogging method and generator network
CN112950521B (en) * 2021-04-27 2024-03-01 上海海事大学 Image defogging method and generator network
CN113393386A (en) * 2021-05-18 2021-09-14 电子科技大学 Non-paired image contrast defogging method based on feature decoupling
CN113393386B (en) * 2021-05-18 2022-03-01 电子科技大学 Non-paired image contrast defogging method based on feature decoupling
CN113658051A (en) * 2021-06-25 2021-11-16 南京邮电大学 Image defogging method and system based on cyclic generation countermeasure network
CN113658051B (en) * 2021-06-25 2023-10-13 南京邮电大学 Image defogging method and system based on cyclic generation countermeasure network
CN113449850A (en) * 2021-07-05 2021-09-28 电子科技大学 Intelligent inhibition method for clutter of sea surface monitoring radar
CN113554872A (en) * 2021-07-19 2021-10-26 昭通亮风台信息科技有限公司 Detection early warning method and system for traffic intersection and curve
CN114119420A (en) * 2021-12-01 2022-03-01 昆明理工大学 Fog image defogging method in real scene based on fog migration and feature aggregation
CN117952865A (en) * 2024-03-25 2024-04-30 中国海洋大学 Single image defogging method based on cyclic generation countermeasure network

Also Published As

Publication number Publication date
CN111986108B (en) 2024-04-19

Similar Documents

Publication Publication Date Title
CN111986108A (en) Complex sea-air scene image defogging method based on generation countermeasure network
CN113362223B (en) Image super-resolution reconstruction method based on attention mechanism and two-channel network
CN113658051B (en) Image defogging method and system based on cyclic generation countermeasure network
CN116682120A (en) Multilingual mosaic image text recognition method based on deep learning
CN110889370B (en) System and method for synthesizing face by end-to-end side face based on condition generation countermeasure network
CN109035146A (en) A kind of low-quality image oversubscription method based on deep learning
CN110443883A (en) A kind of individual color image plane three-dimensional method for reconstructing based on dropblock
CN110222837A (en) A kind of the network structure ArcGAN and method of the picture training based on CycleGAN
CN113284061B (en) Underwater image enhancement method based on gradient network
Sun et al. Underwater image enhancement with encoding-decoding deep CNN networks
CN111709888A (en) Aerial image defogging method based on improved generation countermeasure network
Liu et al. Infrared image super resolution using gan with infrared image prior
CN112614070A (en) DefogNet-based single image defogging method
CN116205962A (en) Monocular depth estimation method and system based on complete context information
CN106296583B (en) Based on image block group sparse coding and the noisy high spectrum image ultra-resolution ratio reconstructing method that in pairs maps
CN110047038B (en) Single-image super-resolution reconstruction method based on hierarchical progressive network
Shen et al. Mutual information-driven triple interaction network for efficient image dehazing
Liu et al. Boths: Super lightweight network-enabled underwater image enhancement
CN116703750A (en) Image defogging method and system based on edge attention and multi-order differential loss
CN116452450A (en) Polarized image defogging method based on 3D convolution
Wan et al. Progressive convolutional transformer for image restoration
CN110853040B (en) Image collaborative segmentation method based on super-resolution reconstruction
CN114266713A (en) NonshadowGAN-based unmanned aerial vehicle railway fastener image shadow removing method and system
Zhu et al. HDRD-Net: High-resolution detail-recovering image deraining network
CN113724156A (en) Generation countermeasure network defogging method and system combined with atmospheric scattering model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant