CN109410135B - Anti-learning image defogging and fogging method - Google Patents


Info

Publication number
CN109410135B
CN109410135B CN201811163803.3A
Authority
CN
China
Prior art keywords
image
fog
fogless
foggy
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811163803.3A
Other languages
Chinese (zh)
Other versions
CN109410135A (en)
Inventor
顾晓东
成庆荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University
Original Assignee
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University filed Critical Fudan University
Priority to CN201811163803.3A priority Critical patent/CN109410135B/en
Publication of CN109410135A publication Critical patent/CN109410135A/en
Application granted granted Critical
Publication of CN109410135B publication Critical patent/CN109410135B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06T5/73
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of image processing, and specifically relates to an image defogging and fogging method based on a generative adversarial network. The invention uses a neural network to approximate the physical model of fog imaging: a generative adversarial network automatically learns, from a large number of image samples, the mapping from fog-free images to foggy images and the inverse mapping from foggy images to fog-free images, and then uses these mappings to defog or fog images. The invention learns the mapping between foggy and fog-free images without requiring paired foggy and fog-free images of the same scene. It thereby avoids the hard-to-interpret mechanisms of non-physical models, the human factors involved in estimating the parameters of physical models, and the difficulty of building a database of paired foggy and fog-free images, achieving stronger interpretability and more reliable results.

Description

Anti-learning image defogging and fogging method
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an image defogging and fogging method.
Background
Fog is ubiquitous in daily life and affects people's activities. With the development of technology and the spread of consumer electronic imaging products, digital images are widely used in everyday photography, security, dashboard cameras, and similar settings. In foggy conditions, however, scattering by the fog seriously degrades the quality of images acquired by electronic devices and reduces their readability. Owing to atmospheric scattering, the overall color of the image shifts toward white and the contrast drops sharply; at the same time, the fog blurs the image, making objects in it hard to recognize. A foggy image therefore needs to be enhanced or restored by some method to improve its visual quality. Defogging technology has broad application prospects and value, particularly in fields such as driving recorders, autonomous driving, and security.
Image defogging is mathematically an ill-posed problem: it involves many unknown parameters and is difficult to solve. It is therefore a recognized hard problem and an important branch of image processing. Current research on image defogging mainly follows two lines: physical-model methods and non-physical-model methods. Non-physical-model methods mainly work on the brightness, contrast, and similar properties of an image, directly performing visual restoration or image enhancement. Physical-model methods are mainly based on the atmospheric scattering model and study the physical mechanism of image degradation, recovering the fog-free image by inverting that model. In recent years, image defogging methods based on deep neural networks have also been proposed, but because paired foggy and fog-free images of the same scene are difficult to acquire, it is hard to build a large-scale paired-image database for training deep models. Each class of methods has its advantages and disadvantages, and image defogging remains an active research direction.
Meanwhile, fields such as computer animation and game production require adding fog to normal fog-free images. Existing fogging techniques mainly rely on a fog-generation model to give the original image a fog effect, but the parameters of the generated fog must be set manually, which is tedious and rarely produces a natural-looking fogged image.
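The manual fogging just described is typically based on the atmospheric scattering model mentioned above, I = J·t + A·(1 − t). The following minimal sketch shows why it needs hand-set parameters: the transmission map t and the airlight A (the values below are purely illustrative) must both be chosen by the user.

```python
import numpy as np

def add_fog(J, t, A=1.0):
    """Atmospheric scattering model: I = J*t + A*(1 - t).
    J: fog-free image in [0, 1]; t: transmission in [0, 1]; A: airlight."""
    return J * t + A * (1.0 - t)

J = np.array([[0.2, 0.8]])   # toy two-pixel "image"
t = np.array([[0.5, 0.5]])   # uniform transmission, chosen by hand
I = add_fog(J, t)            # fogged result: pixels pulled toward white
```

With t = 0.5 and A = 1.0 every pixel moves halfway toward white, which illustrates both the washed-out look of fog and the burden of tuning t and A manually.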
Disclosure of Invention
In view of the above, the present invention proposes an image defogging and fogging method based on a generative adversarial network.
The image defogging/fogging method provided by the invention uses a neural network to approximate the physical model of fog imaging: a generative adversarial network automatically learns, from a large number of image samples, the mapping from fog-free images to foggy images and the inverse mapping from foggy images to fog-free images, and then uses these mappings to defog or fog images. The method learns the mapping between foggy and fog-free images without requiring paired foggy and fog-free images of the same scene. It thereby avoids the hard-to-interpret mechanisms of non-physical models, the human factors involved in estimating the parameters of physical models, and the difficulty of building a database of paired images, achieving stronger interpretability and more reliable results.
The image defogging/fogging method based on a generative adversarial network provided by the invention comprises the following three steps:
(1) collecting a large number of foggy images and fog-free images as training samples, building a foggy/fog-free image database, and dividing it into a foggy-image set and a fog-free-image set;
(2) using a generative adversarial neural network with the large numbers of foggy and fog-free images as learning samples, learning from the samples, by deep learning, the mapping from foggy images to fog-free images and the mapping from fog-free images to foggy images, and saving both mapping models;
(3) performing defogging or fogging with the generative models: feeding a foggy image into the foggy-to-fog-free mapping model yields a fog-free image; feeding a fog-free image into the fog-free-to-foggy mapping model yields a fogged image.
In step (1) of the present invention, the process of constructing the foggy/fog-free image database is as follows:
(1) foggy image data and fog-free image data are crawled from mainstream search websites such as Baidu and Google by web-crawler technology; both kinds of pictures are abundant on these sites, varied in category, and very large in quantity. The two classes of images captured by the crawler form the training database for the generative adversarial deep neural network;
(2) the images in the database are then pruned manually according to the actual situation: pictures that do not meet the requirements are removed, so that the numbers of foggy and fog-free images in the final database are approximately equal.
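The balancing part of this pruning step can be sketched as follows. The helper and the file names are hypothetical illustrations, not part of the patent; after manual removal of unsuitable pictures, the larger class is randomly trimmed so both classes end up the same size.

```python
import random

def balance_database(foggy_paths, fog_free_paths, seed=0):
    """Randomly trim the larger class so the foggy and fog-free image sets
    end up with equal numbers of pictures."""
    rng = random.Random(seed)
    n = min(len(foggy_paths), len(fog_free_paths))
    return rng.sample(foggy_paths, n), rng.sample(fog_free_paths, n)

# Illustrative file lists standing in for the crawled database.
foggy = [f"foggy_{i:04d}.jpg" for i in range(120)]
clear = [f"fog_free_{i:04d}.jpg" for i in range(100)]
foggy_kept, clear_kept = balance_database(foggy, clear)  # 100 of each
```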
In step (2) of the present invention, the construction of the generative adversarial neural network proceeds as follows:
(1) a generative model and a discriminative model of the generative adversarial network are built with convolutional neural networks. The task of the generative model is to produce a fog-free image from a foggy image, or a foggy image from a fog-free one; it is the key component of the foggy-to-fog-free conversion. The discriminative model judges whether the image produced by the current generator is foggy;
(2) on this basis, a cycle-type generative adversarial network is built, so that the network structure forms a closed loop. The cyclic model uses two generators: one, denoted G_{H-N}, generates fog-free images from foggy images; the other, denoted G_{N-H}, generates foggy images from fog-free images. At the same time, two discriminators are needed: D_H, which judges whether the image generated by the current generator G_{N-H} is foggy, and D_N, which judges whether the image generated by the current generator G_{H-N} is fog-free; as shown in Fig. 1;
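The closed loop formed by the two generators can be sketched with stub functions standing in for G_{H-N} and G_{N-H} (the stubs and sample values are hypothetical; the real generators are convolutional networks, but the data flow is the same):

```python
# Hypothetical stand-ins for the two generators; real ones are neural
# networks, but the closed-loop wiring is identical.
def g_hn(foggy):       # G_{H-N}: foggy -> fog-free (stub)
    return foggy - 0.5

def g_nh(fog_free):    # G_{N-H}: fog-free -> foggy (stub)
    return fog_free + 0.5

x = 0.9    # a foggy sample from domain X
y = 0.2    # a fog-free sample from domain Y

fake_fog_free = g_hn(x)   # would be judged by discriminator D_N
fake_foggy = g_nh(y)      # would be judged by discriminator D_H

# Cycle consistency: mapping forward and then back should reproduce the input.
x_cycled = g_nh(g_hn(x))
y_cycled = g_hn(g_nh(y))
```

The point of the closed loop is exactly that `x_cycled` stays close to `x` and `y_cycled` close to `y`, which is what the cycle-consistency term of the loss enforces during training.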
(3) a loss function is established for the model, and the whole model is optimized with stochastic gradient descent. The loss function comprises three parts. The first part is the adversarial (discriminator) loss for the mapping from foggy images (X) to fog-free images (Y):
L_GAN(G_{H-N}, D_N, X, Y) = E_{y~Y}[log D_N(y)] + E_{x~X}[log(1 - D_N(G_{H-N}(x)))]   (1)
In equation (1), the operator E in each of the two terms denotes the sample average over the corresponding discrete data;
similarly, the adversarial loss for the mapping from fog-free images (Y) to foggy images (X) is:
L_GAN(G_{N-H}, D_H, X, Y) = E_{x~X}[log D_H(x)] + E_{y~Y}[log(1 - D_H(G_{N-H}(y)))]   (2)
The cycle-consistency loss of the two generators (summed over both directions) is
L_cyc(G_{N-H}, G_{H-N}) = E_{x~X}[||G_{N-H}(G_{H-N}(x)) - x||_1] + E_{y~Y}[||G_{H-N}(G_{N-H}(y)) - y||_1]   (3)
Adding the three parts gives the loss function of the whole model:
L_cycGAN(G, D) = L_cyc(G_{N-H}, G_{H-N}) + L_GAN(G_{N-H}, D_H, X, Y) + L_GAN(G_{H-N}, D_N, X, Y)   (4);
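Reading E as a sample average, equations (1)-(4) translate directly into code. This is a minimal numpy sketch: the discriminator outputs and images below are placeholder arrays, not real model outputs.

```python
import numpy as np

def gan_loss(d_real, d_fake):
    # One adversarial term, as in eqs. (1) and (2):
    # E[log D(real)] + E[log(1 - D(generated))]
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

def cycle_loss(x, x_rec, y, y_rec):
    # Eq. (3): L1 distance between inputs and their round-trip reconstructions.
    return np.mean(np.abs(x_rec - x)) + np.mean(np.abs(y_rec - y))

def total_loss(dn_real, dn_fake, dh_real, dh_fake, x, x_rec, y, y_rec):
    # Eq. (4): cycle term plus both adversarial terms.
    return (cycle_loss(x, x_rec, y, y_rec)
            + gan_loss(dh_real, dh_fake)
            + gan_loss(dn_real, dn_fake))
```

With a perfect discriminator (score 1 on real images, 0 on generated ones) and perfect reconstructions, every term vanishes and the total is zero, which matches the structure of eq. (4).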
(4) the model is trained with stochastic gradient descent on the foggy-image and fog-free-image database until the optimization converges. This finally yields the generator G_{H-N}, which realizes the mapping that turns a foggy image into a fog-free image, and at the same time the generator G_{N-H}, which realizes the mapping that turns a fog-free image into a foggy image.
In step (3), the specific procedure for defogging or fogging with the generative models is as follows:
The optimization above yields a model containing two discriminators and two generators. The two generators are the foggy-to-fog-free generator G_{H-N} and the fog-free-to-foggy generator G_{N-H}. Feeding a foggy image into G_{H-N} therefore produces the restored fog-free image; likewise, feeding a fog-free image into G_{N-H} produces the fogged image.
Advantageous effects
Among existing defogging or fogging methods, almost all can provide only a defogging model or only a fogging model; moreover, because the underlying physical mechanism is often unclear, many artificial factors are introduced into such models, which reduces their interpretability.
In the present method, a generative adversarial network learns the foggy-to-fog-free mapping and the fog-free-to-foggy mapping from a large number of real data samples by deep learning. The whole process is free of human factors, so the model approximation is more reasonable and there are no hand-tuned parameters that are difficult to set; furthermore, one training run yields a defogging model and a fogging model simultaneously, which is more principled and more effective than the prior art.
Drawings
Fig. 1 is a model diagram of the image defogging and fogging method based on a generative adversarial network provided by the invention.
FIG. 2 is a diagram of an image database formed in an embodiment of the present invention.
Fig. 3 is a network structure diagram of a generator in an embodiment of the invention.
Fig. 4 is a diagram showing a network structure of the discriminator in the embodiment of the present invention.
Detailed Description
Because none of the currently public fog-related data sets offers a large-scale foggy/fog-free image database for direct use, the method crawls a large number of images from Baidu and Google to build such a database. Images are selected so that the fog in them is relatively uniform, without large abrupt changes, which gives good results. The resulting image database is shown in Fig. 2.
The model of the adversarial-learning image defogging and fogging method is trained to final convergence; the explanation below uses test pictures that are common in defogging experiments.
The network structure of the generator is shown in Fig. 3.
The generator consists of three parts: an encoding part, a transformation part, and a decoding part. The encoding part uses a convolutional network structure: four successive convolutions are applied to the original image to extract its high-level features, yielding the image's feature vector. The transformation part uses a deep residual network of 50 layers to transform the features produced by the previous part. The decoding part uses a deconvolutional structure: the first three layers are deconvolutions and the last layer is a convolution, realizing the decoding function. Together the three parts implement the defogging function; the fogging function is realized by the other generator, which turns a fog-free input picture into a foggy image.
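The three-part generator can be sketched in PyTorch as follows. This is an illustrative reduction, not the patented network: the channel widths, kernel sizes, and normalization layers are assumptions, and the 50-layer residual stage is shortened to three blocks to keep the sketch small.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One block of the transformation stage (identity shortcut around two convs)."""
    def __init__(self, channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        return x + self.block(x)

class Generator(nn.Module):
    """Encoder (four convs) -> residual transformation -> decoder (three deconvs + one conv)."""
    def __init__(self, n_res=3):
        super().__init__()
        # Encoding: four successive convolutions extract high-level features.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 7, stride=1, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Transformation: residual stack (the patent describes ~50 layers).
        self.transform = nn.Sequential(*[ResidualBlock(256) for _ in range(n_res)])
        # Decoding: three transposed convolutions, then a final convolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 7, stride=1, padding=3), nn.Tanh(),
        )

    def forward(self, x):
        return self.decoder(self.transform(self.encoder(x)))
```

With matching strides in encoder and decoder, `Generator()(torch.randn(1, 3, 64, 64))` returns a tensor of the same shape as its input, i.e. a full-resolution translated image.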
The network structure of the discriminator is shown in fig. 4.
The discriminator takes a picture as input, scores the currently input image, and predicts whether it was produced by the generator or comes from the original image library. The discriminator gives an original image a higher score and a generator-produced image a lower score.
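A matching discriminator can be sketched in PyTorch as below. The layer counts and widths are assumptions; the sketch returns one score per image in [0, 1], higher meaning "more likely a real library image".

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Scores an input image: high for real library images, low for generated ones."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(256, 1, 4, stride=1, padding=1),  # patch-level real/fake scores
        )

    def forward(self, x):
        # Average the patch scores into a single score per image.
        return torch.sigmoid(self.net(x)).mean(dim=(1, 2, 3))
```

In training, D_N would score real fog-free images against the outputs of G_{H-N}, and D_H real foggy images against the outputs of G_{N-H}, feeding the adversarial terms of equations (1) and (2).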
In summary, the invention provides an effective image defogging and fogging method based on a generative adversarial network, i.e., an adversarial scheme trained on large numbers of foggy and fog-free images. The method effectively sidesteps the difficulty of establishing a physical model and of building a paired-image database, and compared with traditional defogging and fogging methods it has broader application prospects and higher market value.

Claims (3)

1. An image defogging and fogging method based on a generative adversarial network, characterized by comprising the following three steps:
(1) collecting a large number of foggy images and fog-free images as training samples, building a foggy/fog-free image database, and dividing it into a foggy-image set and a fog-free-image set;
(2) using a generative adversarial neural network with the large numbers of foggy and fog-free images as learning samples, learning from the samples, by deep learning, the mapping from foggy images to fog-free images and the mapping from fog-free images to foggy images, and saving both mapping models;
(3) performing defogging or fogging with the generative models: feeding a foggy image into the foggy-to-fog-free mapping model to obtain a fog-free image; feeding a fog-free image into the fog-free-to-foggy mapping model to obtain a fogged image;
wherein in step (2) the construction of the generative adversarial neural network comprises:
(1) building a generative model and a discriminative model of the generative adversarial network with convolutional neural networks; the task of the generative model is to produce a fog-free image from a foggy image, or a foggy image from a fog-free one, and it is the key component of the foggy-to-fog-free conversion; the discriminative model judges whether the image produced by the current generator is foggy;
(2) building, on this basis, a cycle-type generative adversarial network so that the network structure forms a closed loop; the cyclic model uses two generators: one, denoted G_{H-N}, generates fog-free images from foggy images, and the other, denoted G_{N-H}, generates foggy images from fog-free images; at the same time, a discriminator D_H is built to judge whether the image generated by the current generator G_{N-H} is foggy, and a discriminator D_N is built to judge whether the image generated by the current generator G_{H-N} is fog-free;
(3) establishing a loss function for the model and optimizing the whole model with stochastic gradient descent; the loss function comprises three parts, the first part being the adversarial loss for the mapping from foggy images (X) to fog-free images (Y):
L_GAN(G_{H-N}, D_N, X, Y) = E_{y~Y}[log D_N(y)] + E_{x~X}[log(1 - D_N(G_{H-N}(x)))]   (1)
the second part being the adversarial loss for the mapping from fog-free images (Y) to foggy images (X):
L_GAN(G_{N-H}, D_H, X, Y) = E_{x~X}[log D_H(x)] + E_{y~Y}[log(1 - D_H(G_{N-H}(y)))]   (2)
and the third part being the cycle-consistency loss of the two generators:
L_cyc(G_{N-H}, G_{H-N}) = E_{x~X}[||G_{N-H}(G_{H-N}(x)) - x||_1] + E_{y~Y}[||G_{H-N}(G_{N-H}(y)) - y||_1]   (3)
adding the three parts gives the loss function of the whole model:
L_cycGAN(G, D) = L_cyc(G_{N-H}, G_{H-N}) + L_GAN(G_{N-H}, D_H, X, Y) + L_GAN(G_{H-N}, D_N, X, Y);   (4)
(4) training the model with stochastic gradient descent on the foggy-image and fog-free-image database until the optimization converges; this finally yields the generator G_{H-N}, which realizes the mapping that turns a foggy image into a fog-free image, and at the same time the generator G_{N-H}, which realizes the mapping that turns a fog-free image into a foggy image.
2. The image defogging and fogging method based on a generative adversarial network according to claim 1, characterized in that the process of constructing the foggy/fog-free image database in step (1) is:
(1) crawling foggy image data and fog-free image data from mainstream search websites by web-crawler technology, to form the training database for training the generative adversarial deep neural network;
(2) pruning the images in the database manually according to the actual situation, removing pictures that do not meet the requirements, so that the numbers of foggy and fog-free images in the final database are approximately equal.
3. The image defogging and fogging method based on a generative adversarial network according to claim 1, characterized in that the specific procedure of defogging or fogging with the generative models in step (3) is:
feeding a foggy image into generator G_{H-N} to obtain the restored fog-free image; and feeding a fog-free image into generator G_{N-H} to obtain the fogged image.
CN201811163803.3A 2018-10-02 2018-10-02 Anti-learning image defogging and fogging method Active CN109410135B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811163803.3A CN109410135B (en) 2018-10-02 2018-10-02 Anti-learning image defogging and fogging method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811163803.3A CN109410135B (en) 2018-10-02 2018-10-02 Anti-learning image defogging and fogging method

Publications (2)

Publication Number Publication Date
CN109410135A CN109410135A (en) 2019-03-01
CN109410135B true CN109410135B (en) 2022-03-18

Family

ID=65466820

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811163803.3A Active CN109410135B (en) 2018-10-02 2018-10-02 Anti-learning image defogging and fogging method

Country Status (1)

Country Link
CN (1) CN109410135B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109934791A (en) * 2019-04-02 2019-06-25 山东浪潮云信息技术有限公司 A kind of image defogging method and system based on Style Transfer network
CN110136075B (en) * 2019-04-18 2021-01-05 中国地质大学(武汉) Remote sensing image defogging method for generating countermeasure network based on edge sharpening cycle
CN110807744B (en) * 2019-10-25 2023-09-08 山东工商学院 Image defogging method based on convolutional neural network
US11508048B2 (en) 2020-02-10 2022-11-22 Shenzhen Institutes Of Advanced Technology Method and system for generating composite PET-CT image based on non-attenuation-corrected PET image
CN111738942A (en) * 2020-06-10 2020-10-02 南京邮电大学 Generation countermeasure network image defogging method fusing feature pyramid
CN113393386B (en) * 2021-05-18 2022-03-01 电子科技大学 Non-paired image contrast defogging method based on feature decoupling
CN117408891B (en) * 2023-12-14 2024-03-15 暨南大学 Image fogging method based on Cycle-GAN

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127702A (en) * 2016-06-17 2016-11-16 兰州理工大学 A kind of image mist elimination algorithm based on degree of depth study
CN107123151A (en) * 2017-04-28 2017-09-01 深圳市唯特视科技有限公司 A kind of image method for transformation based on variation autocoder and generation confrontation network
CN108492265A (en) * 2018-03-16 2018-09-04 西安电子科技大学 CFA image demosaicing based on GAN combines denoising method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10319076B2 (en) * 2016-06-16 2019-06-11 Facebook, Inc. Producing higher-quality samples of natural images
WO2018053340A1 (en) * 2016-09-15 2018-03-22 Twitter, Inc. Super resolution using a generative adversarial network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127702A (en) * 2016-06-17 2016-11-16 兰州理工大学 A kind of image mist elimination algorithm based on degree of depth study
CN107123151A (en) * 2017-04-28 2017-09-01 深圳市唯特视科技有限公司 A kind of image method for transformation based on variation autocoder and generation confrontation network
CN108492265A (en) * 2018-03-16 2018-09-04 西安电子科技大学 CFA image demosaicing based on GAN combines denoising method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"Image-to-Image Translation with Conditional Adversarial Networks";Phillip Isola 等;《arXiv》;20161121;第1-16页 *
"一种基于条件生成对抗网络的去雾方法";贾绪仲 等;《信息与电脑》;20180531(第09期);第60-62+65页 *
"生成对抗映射网络下的图像多层感知去雾算法";李策 等;《计算机辅助设计与图形学学报》;20171031;第29卷(第10期);第1835-1843页 *
Fangfang Wu 等."Perceptual Image Dehazing Based on Generative Adversarial Learning".《PCM 2018: Advances in Multimedia Information Processing - PCM 2018》.2018, *

Also Published As

Publication number Publication date
CN109410135A (en) 2019-03-01

Similar Documents

Publication Publication Date Title
CN109410135B (en) Anti-learning image defogging and fogging method
Golts et al. Unsupervised single image dehazing using dark channel prior loss
US10593021B1 (en) Motion deblurring using neural network architectures
CN112766160A (en) Face replacement method based on multi-stage attribute encoder and attention mechanism
Panetta et al. Tmo-net: A parameter-free tone mapping operator using generative adversarial network, and performance benchmarking on large scale hdr dataset
CN110415184B (en) Multi-modal image enhancement method based on orthogonal element space
CN102915527A (en) Face image super-resolution reconstruction method based on morphological component analysis
CN111724400A (en) Automatic video matting method and system
CN113658040A (en) Face super-resolution method based on prior information and attention fusion mechanism
CN109766918A (en) Conspicuousness object detecting method based on the fusion of multi-level contextual information
Yang et al. Underwater image enhancement with latent consistency learning‐based color transfer
CN112767277B (en) Depth feature sequencing deblurring method based on reference image
CN113066025B (en) Image defogging method based on incremental learning and feature and attention transfer
CN106778576A (en) A kind of action identification method based on SEHM feature graphic sequences
CN116452469B (en) Image defogging processing method and device based on deep learning
CN113298744A (en) End-to-end infrared and visible light image fusion method
CN112686830A (en) Super-resolution method of single depth map based on image decomposition
CN112950498A (en) Image defogging method based on countermeasure network and multi-scale dense feature fusion
CN116883303A (en) Infrared and visible light image fusion method based on characteristic difference compensation and fusion
CN115375596A (en) Face photo-sketch portrait synthesis method based on two-way condition normalization
CN112163605A (en) Multi-domain image translation method based on attention network generation
Zhu et al. Application research on improved CGAN in image raindrop removal
Fan et al. Facial expression animation through action units transfer in latent space
CN116958451B (en) Model processing, image generating method, image generating device, computer device and storage medium
Shen et al. Depth assisted portrait video background blurring

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant