CN111709888B - Aerial image defogging method based on an improved generative adversarial network - Google Patents
Aerial image defogging method based on an improved generative adversarial network
- Publication number
- CN111709888B (application CN202010496560.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- defogging
- fog
- network
- foggy
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T5/73; G06T5/77 — image data processing or generation, in general
- G06N3/045 — computing arrangements based on biological models; neural networks; combinations of networks
- G06N3/08 — neural network learning methods
- G06T2207/10004 — image acquisition modality: still image; photographic image
- G06T2207/20081 — special algorithmic details: training; learning
- Y02A90/10 — information and communication technologies [ICT] supporting adaptation to climate change
Abstract
The invention discloses an aerial image defogging method based on an improved generative adversarial network. The method comprises: establishing a data set; inputting the foggy sample images into a generator network for defogging; inputting the defogged sample image and the corresponding fog-free sample image into the adversarial network, which judges real from fake against a threshold and calculates the loss-function model parameters; feeding the loss-function model parameters back to the generator network, which updates the generator model; repeating these steps to obtain a trained model; and inputting a foggy picture into the trained model to obtain a fog-free picture. Addressing the shortcomings of existing image defogging methods, the invention provides an aerial image defogging method based on an improved generative adversarial network.
Description
Technical Field
The invention relates to an aerial image defogging method, and in particular to an aerial image defogging method based on an improved generative adversarial network.
Background
With the rapid development of the internet and information-processing technology, the demand for clear images keeps increasing. However, owing to the limits of physical imaging conditions and acquisition environments, images are often degraded by fog of varying density. In natural fog and haze, the atmosphere carries large numbers of small water droplets and dust particles that increase the scattering of light. As a result, images captured by outdoor image sensors suffer reduced contrast, a narrowed dynamic range, lower sharpness, washed-out color, masked detail, and even color distortion; the added noise degrades the visual quality and makes image detail hard to extract and analyse effectively, so the images cannot meet application requirements.
To make captured images effective and practical, to reduce the extent to which airborne fog prevents image sensors from working normally, and to improve the visual quality of images, removing natural fog and haze is a problem that must be solved. An aerial image defogging method therefore has high application value for improving image contrast, widening the dynamic range, and recovering edge detail under foggy and hazy weather conditions; in particular, it helps aerial imaging systems work correctly and effectively in severe weather such as fog and haze.
Current image defogging methods fall into two groups: traditional model-based defogging and deep-learning-based defogging. Traditional model-based methods can be further subdivided into defogging algorithms with known depth of field, defogging methods that recover scene depth from auxiliary information, and defogging methods based on prior conditions.
Traditional model-based methods start from a foggy-day imaging model and ask how defogging can be achieved from the imaging mechanism. By principle they divide into the following categories, with the following technical characteristics:
(1) Defogging method based on field depth knowledge
Assuming that depth-of-field information of the observed scene is available, this method estimates the transmission of each pixel by building an image degradation model and then applies a foggy imaging model to defog the image. It has little practical use, because processing color images requires costly third-party equipment to measure scene geometry.
(2) Defogging method for solving scene depth information based on auxiliary information
This method first estimates depth-of-field information from a fog-free image of the same scene and then models the image on that basis to defog it. However, it needs a clear image of the same scene as the image to be defogged, and obtaining a foggy image and a clear image of the same scene at the same time is difficult in practice, so its practicality is low.
(3) Defogging method based on priori conditions
Academia has proposed many prior-based methods. Some researchers defog using the property that light propagation is locally uncorrelated in shadowed regions; this generalizes poorly — it suits only thinly misted images and performs badly once the mist thickens. Others observed that a fog-free image has stronger contrast and higher values than a foggy one and, on this assumption, proposed maximizing the local contrast of the picture; this improves the processing of dense-fog images, but the picture shows halos after defogging. He Kaiming et al. proposed a defogging algorithm based on the dark channel prior of pixels: except in the sky and in some very bright regions, every pixel has a color channel whose value is close to 0, and higher values there can be attributed to thin haze over the region. During processing, a window of fixed size is slid over the image, the minimum pixel value in each region is taken, and the excess is attributed to haze. The method assumes the transmission is constant within each local region, which real images do not satisfy, so the estimated transmission is inaccurate: block artifacts appear, the whole image darkens, the sky region shows color distortion and degradation, and large color patches remain. Because prior information for the image defogging problem is scarce, finding an accurate prior model by analysis is difficult, and each prior-based defogging method introduces new problems while solving one class of problem.
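The dark channel prior just described can be sketched in a few lines. The following is a minimal illustration, not the He et al. implementation: the window size, the omega factor, and the plain nested-loop minimum filter are all illustrative choices.

```python
import numpy as np

def dark_channel(image, window=15):
    """Per-pixel minimum over the RGB channels, then a local minimum filter.

    `image` is an HxWx3 float array in [0, 1]; `window` is the side
    length of the square patch slid over the image.
    """
    per_pixel_min = image.min(axis=2)          # min over colour channels
    pad = window // 2
    padded = np.pad(per_pixel_min, pad, mode='edge')
    h, w = per_pixel_min.shape
    out = np.empty_like(per_pixel_min)
    for i in range(h):                          # brute-force minimum filter
        for j in range(w):
            out[i, j] = padded[i:i + window, j:j + window].min()
    return out

def estimate_transmission(hazy, atmospheric_light, omega=0.95, window=15):
    """t(x) = 1 - omega * dark_channel(I / A), the prior's transmission estimate."""
    normalized = hazy / np.maximum(atmospheric_light, 1e-6)
    return 1.0 - omega * dark_channel(normalized, window)
```

A fog-free (dark) region yields a transmission near 1, while a uniformly bright (hazy) region yields a transmission near 1 − omega, matching the intuition that brightness in the dark channel is attributed to haze.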
Deep-learning-based image defogging methods can be classified as follows. One line of work represents the atmospheric scattering model with a generative adversarial network or another deep network and estimates the corresponding transmission T and atmospheric light value A through training. However, this usually requires the scene depth of the image to be defogged during training, and in practice the scene depth of a picture is hard to obtain, so its practicality is limited.
Su Yanzhao and other researchers first defog the picture roughly using prior knowledge and then feed the roughly defogged image into a generative adversarial network or another deep network for further defogging. This uses priors to guide the encoding network, but it still faces the shortage of prior information, which limits its practicality.
Researchers such as field green first preprocess the image to be defogged, normalize it to a gray image, apply a gradient operation, and then feed the HOG features into a generative adversarial network for processing. The pipeline has many steps, is complicated to operate, and is inefficient in practical use.
Tang Huanrong and other researchers feed images directly into a generative adversarial network, but the network structure is complex — two generators and two discriminators — so the amount of computation grows greatly during defogging and processing is slow.
Disclosure of Invention
To remedy these shortcomings, the invention provides an aerial image defogging method based on an improved generative adversarial network.
To solve the technical problem, the invention adopts the following technical scheme: an aerial image defogging method based on an improved generative adversarial network, comprising the following steps:
I. Collecting foggy and fog-free sample images, establishing the data set needed to train the model, and classifying the images as foggy or fog-free;
II. Inputting the foggy sample image into the generator network, which performs the defogging; the generator network consists of a generator in which mutually corresponding encoders and decoders complete the defogging, each feature map in the decoder is fused along the channel dimension with the corresponding encoder feature map so that the decoder gains effective feature-expression ability in the decoding stage, and a PReLU activation is applied to the fused features;
III. Inputting the sample image defogged by the generator network and the corresponding fog-free sample image into the adversarial network, judging the two images as real or fake against a threshold, and calculating the loss-function model parameters;
Because the foggy image and the generated fog-free image share information, overfitting occurs easily during training, and the information of the foggy parts of the image is concentrated in the low-frequency band; to ensure similarity between the generated fog-free image and the original image, the loss-function model parameters must be calculated. The total loss function Loss is given by formula (1), where L_1 is the adversarial loss function, L_2 the smoothing loss function, W_1 the adversarial-loss weight, and W_2 the smoothing-loss weight:

Loss = W_1·L_1 + W_2·L_2   formula (1)

The adversarial loss function L_1 is defined by formula (2):

L_1 = E_(x,y)[log D(x,y)] + E_(x,z)[log(1 − D(x, G(x,z)))]   formula (2)

The smoothing loss function L_2 is defined by formula (3):

L_2 = E_(x,y,z)[‖y − G(x,z)‖_1]   formula (3)

where G is the generator, D the discriminator, x the input foggy image, y the fog-free image corresponding to x, and z random noise; E_(x,y) denotes averaging the loss over all foggy images and corresponding fog-free image samples input to the discriminator, and E_(x,y,z) denotes averaging the loss between the real fog-free image and the fog-free image generated by the generator network;
IV. Feeding the loss-function model parameters back to the generator network, which adjusts its parameters and updates the generator model;
V. Repeating steps I-IV until training is complete, yielding the trained model;
VI. Inputting the foggy picture to be defogged into the trained model to obtain the fog-free picture.
Further, the encoder extracts features from the foggy sample image and downsamples it, with each convolution layer followed by a BN layer and a PReLU activation layer;
the decoder upsamples the features passed from the encoder in sequence; upsampling enlarges the previously reduced features back to the original size to guarantee end-to-end output; after each upsampling step, the decoder enriches the feature information with a convolution, so that information lost in the encoder can be recovered by learning in the decoder part; the features after each convolution layer are normalized and passed through Dropout to prevent overfitting.
Further, the generator network adopts a U-Net structure; U-Net is fully convolutional and uses skip connections that combine low-level and high-level feature maps, preserving pixel-level detail at different resolutions; on top of the U-Net structure, the convolution layers of the foggy sample image are combined with the corresponding convolution layers of the fog-free image to enrich the image information.
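A minimal sketch of the generator just described, under assumed sizes: the patent does not give channel widths, depth, or kernel sizes, so the two-level `TinyUNetGenerator` below (a hypothetical name) only illustrates the pattern — encoder convolutions with BN and PReLU plus downsampling, then upsampling, channel-wise fusion with the matching encoder feature map, PReLU, and Dropout.

```python
import torch
import torch.nn as nn

class TinyUNetGenerator(nn.Module):
    """Two-level U-Net-style generator sketch (sizes are illustrative)."""
    def __init__(self):
        super().__init__()
        # encoder: conv + BN + PReLU, then a strided conv to downsample
        self.enc1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1),
                                  nn.BatchNorm2d(16), nn.PReLU())
        self.down = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1),
                                  nn.BatchNorm2d(32), nn.PReLU())
        # decoder: upsample back to the original size (end-to-end output)
        self.up = nn.Upsample(scale_factor=2, mode='nearest')
        # after upsampling, the decoder feature map is concatenated with
        # enc1's output along the channel dimension (the skip connection)
        self.dec = nn.Sequential(nn.Conv2d(32 + 16, 16, 3, padding=1),
                                 nn.PReLU(), nn.Dropout(0.5))
        self.out = nn.Conv2d(16, 3, 3, padding=1)

    def forward(self, x):
        e1 = self.enc1(x)
        bottleneck = self.down(e1)
        d = self.up(bottleneck)
        d = self.dec(torch.cat([d, e1], dim=1))  # channel-dimension fusion
        return torch.sigmoid(self.out(d))        # image in [0, 1]
```

A full generator would stack several such encoder/decoder levels, one skip connection per level.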
The beneficial effects of the invention are: addressing the shortcomings of existing image defogging methods, the invention provides an aerial image defogging method based on an improved generative adversarial network.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of a discriminator.
Fig. 3 is a foggy image a to be processed according to the invention.
Fig. 4 is an image a after defogging treatment according to the present invention.
Fig. 5 is a foggy image B to be processed according to the present invention.
Fig. 6 is an image B after defogging treatment according to the present invention.
Fig. 7 is a foggy image C to be processed according to the present invention.
Fig. 8 is an image C after defogging treatment according to the present invention.
Detailed Description
The invention will be described in further detail with reference to the drawings and the detailed description.
An aerial image defogging method based on an improved generative adversarial network, shown in fig. 1, comprises the following steps:
I. Collect foggy and fog-free sample images, build the data set needed to train the model, and classify the images as foggy or fog-free.
II. Input the foggy sample image into the generator network, which defogs it. The generator network consists of a generator in which mutually corresponding encoders and decoders complete the defogging; each feature map in the decoder is fused along the channel dimension with the corresponding encoder feature map, so that the decoder gains effective feature-expression ability in the decoding stage, and a PReLU activation is applied to the fused features.
The encoder extracts features from the foggy sample image and downsamples it, with each convolution layer followed by a BN layer and a PReLU activation layer.
The decoder upsamples the features passed from the encoder in sequence; upsampling enlarges the previously reduced features back to the original size to guarantee end-to-end output. After each upsampling step, the decoder enriches the feature information with a convolution, so that information lost in the encoder can be recovered by learning in the decoder part; the features after each convolution layer are normalized and passed through Dropout to prevent overfitting.
III. Input the sample image defogged by the generator network and the corresponding fog-free sample image into the adversarial network; the discriminator judges the two images as real or fake against a threshold, and the loss-function model parameters are calculated. The adversarial network compares the defogged image produced by the generator with the original fog-free image block by block to judge whether the image is real; after judging, it feeds parameters back to the generator network to help it generate fog-free pictures closer to real ones. If the score is greater than the threshold it outputs 1; if less, it outputs 0. The principle of the discriminator is shown in fig. 2.
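The block-by-block judgment can be illustrated with a small patch-style discriminator. This is a sketch under assumptions — the layer sizes, the LeakyReLU slope, and the 0.5 threshold are illustrative, since the patent specifies only that scores above the threshold output 1 and scores below it output 0.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Patch-wise discriminator sketch: scores local blocks of the image.

    It takes the conditioning foggy image and a candidate fog-free image
    and produces one probability per patch (sizes are illustrative).
    """
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, foggy, candidate):
        # concatenate input and candidate along the channel dimension
        return self.net(torch.cat([foggy, candidate], dim=1))

def decide(scores, threshold=0.5):
    """Output 1 where a patch score exceeds the threshold, else 0."""
    return (scores > threshold).float()
```

Each spatial cell of the output grid corresponds to one image block, so `decide` yields the 0/1 judgment per block described above.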
Because the foggy image and the generated fog-free image share information, overfitting occurs easily during training, and the information of the foggy parts of the image is concentrated in the low-frequency band; to ensure similarity between the generated fog-free image and the original image, the loss-function model parameters must be calculated. The total loss function Loss is given by formula (1), where L_1 is the adversarial loss function, L_2 the smoothing loss function, W_1 the adversarial-loss weight, and W_2 the smoothing-loss weight:

Loss = W_1·L_1 + W_2·L_2   formula (1)

The adversarial loss function L_1 is defined by formula (2):

L_1 = E_(x,y)[log D(x,y)] + E_(x,z)[log(1 − D(x, G(x,z)))]   formula (2)

The smoothing loss function L_2 is defined by formula (3):

L_2 = E_(x,y,z)[‖y − G(x,z)‖_1]   formula (3)

where G is the generator, D the discriminator, x the input foggy image, y the fog-free image corresponding to x, and z random noise; E_(x,y) denotes averaging the loss over all foggy images and corresponding fog-free image samples input to the discriminator, and E_(x,y,z) denotes averaging the loss between the real fog-free image and the fog-free image generated by the generator network.
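Formulas (1)-(3) translate directly into code. The sketch below assumes PyTorch and a discriminator D that outputs probabilities; the weights `w1` and `w2` are placeholders, since the patent does not state their values.

```python
import torch

def total_loss(D, G, x, y, z, w1=1.0, w2=100.0):
    """Total loss of formula (1): Loss = W1*L1 + W2*L2.

    L1 (formula 2) is the adversarial log-likelihood; L2 (formula 3)
    is the L1-norm smoothing term. w1 and w2 are illustrative values.
    """
    eps = 1e-8                       # numerical guard inside the logs
    fake = G(x, z)                   # generated fog-free image G(x, z)
    l1 = (torch.log(D(x, y) + eps).mean()
          + torch.log(1.0 - D(x, fake) + eps).mean())
    l2 = torch.abs(y - fake).mean()  # ||y - G(x, z)||_1, averaged
    return w1 * l1 + w2 * l2
```

During training the discriminator ascends L1 while the generator descends the total loss, which is the adversarial game the description outlines.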
IV. Feed the loss-function model parameters back to the generator network, which adjusts its parameters and updates the generator model.
V. Repeat steps I-IV until training is complete, yielding the trained model.
VI. Input the foggy picture to be defogged into the trained model to obtain the fog-free picture.
The generator network adopts a U-Net structure. Compared with an ordinary encoder-decoder network that first downsamples to a low dimension and then upsamples back to the original resolution, U-Net is fully convolutional and uses skip connections that combine low-level and high-level feature maps, preserving pixel-level detail at different resolutions well. On top of the U-Net structure, the convolution layers of the foggy image are combined with the corresponding convolution layers of the fog-free image to enrich the image information.
After defogging is completed, the defogging effect is measured with two indexes: structural similarity and peak signal-to-noise ratio. The structural similarity index measurement (SSIM) treats a picture as a vector and characterizes the similarity of two pictures by comparing the statistics of those vectors. Given two images x and y, their structural similarity is obtained from formula (4):

SSIM(x, y) = ((2·μ_x·μ_y + c_1)·(2·σ_xy + c_2)) / ((μ_x² + μ_y² + c_1)·(σ_x² + σ_y² + c_2))   formula (4)

where μ_x is the mean of x, μ_y the mean of y, σ_x² the variance of x, σ_y² the variance of y, σ_xy the covariance of x and y, and c_1 and c_2 are constants that keep the expression numerically stable. The structural similarity ranges from 0 to 1, and SSIM equals 1 when the two images are identical.
The peak signal-to-noise ratio (PSNR) before and after image restoration is the ratio of the maximum possible signal power to the power of the corrupting noise; in general, the higher the PSNR, the better the image reconstruction quality. It is computed by formula (5):

PSNR = 10·log_10((2^n − 1)² / MSE)   formula (5)

where MSE is the mean square error between the original image and the processed image and n is the number of bits per sample value.
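Both indexes are straightforward to compute. The sketch below evaluates formula (4) once over the whole image rather than over sliding windows (a simplification) and formula (5) as defined; the constants `c1` and `c2` follow the common 0.01/0.03 convention, which the patent does not specify.

```python
import numpy as np

def ssim(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Single-window SSIM of formula (4) over whole 8-bit images.

    Standard implementations average formula (4) over local sliding
    windows; computing it once globally keeps the sketch short.
    """
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return (((2 * mu_x * mu_y + c1) * (2 * cov + c2)) /
            ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))

def psnr(x, y, n_bits=8):
    """PSNR of formula (5): 10 * log10((2^n - 1)^2 / MSE)."""
    mse = ((x.astype(float) - y.astype(float)) ** 2).mean()
    if mse == 0:
        return float('inf')          # identical images: infinite PSNR
    return 10 * np.log10((2 ** n_bits - 1) ** 2 / mse)
```

For identical images SSIM evaluates to 1 and PSNR diverges, consistent with the definitions above.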
150 foggy images from the data set are selected as the test set, and the classical dark channel prior (DCP) method is compared with the proposed method; the evaluation indexes SSIM and PSNR are listed in table 1, and the defogging results are shown in figs. 3-8. Compared with DCP, the defogged images obtained by the proposed algorithm are brighter and more vivid, with clearer edge and detail information, and their SSIM and PSNR scores are higher, owing to the feature-extraction power of the neural network and the recognition ability of the adversarially trained discriminator.
Table 1 Comparison of defogged image quality for the two methods

Index | DCP | Proposed method
---|---|---
SSIM | 0.660 | 0.759
PSNR | 13.89 | 20.32
The above embodiments do not limit the present invention, and the present invention is not restricted to the examples given; the scope of protection is defined by the following claims.
Claims (3)
1. An aerial image defogging method based on an improved generative adversarial network, characterized by comprising the following steps:
I. collecting foggy and fog-free sample images, establishing the data set needed to train the model, and classifying the images as foggy or fog-free;
II. inputting the foggy sample image into the generator network, which performs the defogging; the generator network consists of a generator in which mutually corresponding encoders and decoders complete the defogging, each feature map in the decoder is fused along the channel dimension with the corresponding encoder feature map so that the decoder gains effective feature-expression ability in the decoding stage, and a PReLU activation is applied to the fused features;
III. inputting the sample image defogged by the generator network and the corresponding fog-free sample image into the adversarial network, judging the two images as real or fake against a threshold, and calculating the loss-function model parameters;
because the foggy image and the generated fog-free image share information, overfitting occurs easily during training, and the information of the foggy parts of the image is concentrated in the low-frequency band; to ensure similarity between the generated fog-free image and the original image, the loss-function model parameters must be calculated; the total loss function Loss is given by formula (1), where L_1 is the adversarial loss function, L_2 the smoothing loss function, W_1 the adversarial-loss weight, and W_2 the smoothing-loss weight:

Loss = W_1·L_1 + W_2·L_2   formula (1)

the adversarial loss function L_1 is defined by formula (2):

L_1 = E_(x,y)[log D(x,y)] + E_(x,z)[log(1 − D(x, G(x,z)))]   formula (2)

the smoothing loss function L_2 is defined by formula (3):

L_2 = E_(x,y,z)[‖y − G(x,z)‖_1]   formula (3)

where G is the generator, D the discriminator, x the input foggy image, y the fog-free image corresponding to x, and z random noise; E_(x,y) denotes averaging the loss over all foggy images and corresponding fog-free image samples input to the discriminator, and E_(x,y,z) denotes averaging the loss between the real fog-free image and the fog-free image generated by the generator network;
IV. feeding the loss-function model parameters back to the generator network, which adjusts its parameters and updates the generator model;
V. repeating steps I-IV until training is complete, yielding the trained model;
VI. inputting the foggy picture to be defogged into the trained model to obtain the fog-free picture.
2. The aerial image defogging method based on an improved generative adversarial network according to claim 1, characterized in that: the encoder extracts features from the foggy sample image and downsamples it, with each convolution layer followed by a BN layer and a PReLU activation layer;
the decoder upsamples the features passed from the encoder in sequence; upsampling enlarges the previously reduced features back to the original size to guarantee end-to-end output; after each upsampling step, the decoder enriches the feature information with a convolution, so that information lost in the encoder can be recovered by learning in the decoder part; the features after each convolution layer are normalized and passed through Dropout to prevent overfitting.
3. The aerial image defogging method based on an improved generative adversarial network according to claim 2, characterized in that: the generator network adopts a U-Net structure; U-Net is fully convolutional and uses skip connections that combine low-level and high-level feature maps, preserving pixel-level detail at different resolutions; on top of the U-Net structure, the convolution layers of the foggy sample image are combined with the corresponding convolution layers of the fog-free image to enrich the image information.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202010496560.6A (CN111709888B) | 2020-06-03 | 2020-06-03 | Aerial image defogging method based on an improved generative adversarial network
Publications (2)

Publication Number | Publication Date
---|---
CN111709888A | 2020-09-25
CN111709888B | 2023-12-08
Family
ID=72538823
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113658051B (en) * | 2021-06-25 | 2023-10-13 | 南京邮电大学 | Image defogging method and system based on cyclic generation countermeasure network |
CN113362251B (en) * | 2021-06-27 | 2024-03-26 | 东南大学 | Anti-network image defogging method based on double discriminators and improved loss function |
CN116721403A (en) * | 2023-06-19 | 2023-09-08 | 山东高速集团有限公司 | Road traffic sign detection method |
CN116645298B (en) * | 2023-07-26 | 2024-01-26 | 广东电网有限责任公司珠海供电局 | Defogging method and device for video monitoring image of overhead transmission line |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108665432A (en) * | 2018-05-18 | 2018-10-16 | 百年金海科技有限公司 | Single-image defogging method based on generative adversarial network |
CN109272455A (en) * | 2018-05-17 | 2019-01-25 | 西安电子科技大学 | Weakly supervised image defogging method based on generative adversarial network |
CN109300090A (en) * | 2018-08-28 | 2019-02-01 | 哈尔滨工业大学(威海) | Single-image defogging method based on sub-pixel and conditional generative adversarial network |
CN109493303A (en) * | 2018-05-30 | 2019-03-19 | 湘潭大学 | Image defogging method based on generative adversarial network |
CN109949242A (en) * | 2019-03-19 | 2019-06-28 | 内蒙古工业大学 | Image defogging model generation method and device, and image defogging method and device |
CN109993804A (en) * | 2019-03-22 | 2019-07-09 | 上海工程技术大学 | Road scene defogging method based on conditional generative adversarial network |
CN110288550A (en) * | 2019-06-28 | 2019-09-27 | 中国人民解放军火箭军工程大学 | Single-image defogging method based on prior-knowledge-guided conditional generative adversarial network |
History
- 2020-06-03: Application CN202010496560.6A filed in China (CN); patent CN111709888B, status Active
Non-Patent Citations (1)
Title |
---|
A dehazing method based on conditional generative adversarial networks; Jia Xuzhong, Wen Zhiqiang; Information & Computer (信息与电脑), No. 9, pp. 60-62 * |
Also Published As
Publication number | Publication date |
---|---|
CN111709888A (en) | 2020-09-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111709888B (en) | Aerial image defogging method based on improved generation countermeasure network | |
CN113052210B (en) | Rapid low-light target detection method based on convolutional neural network | |
CN110992275A (en) | Refined single-image rain removal method based on generative adversarial network |
CN109993804A (en) | Road scene defogging method based on conditional generative adversarial network |
CN109447917B (en) | Remote sensing image haze eliminating method based on content, characteristics and multi-scale model | |
CN110288550B (en) | Single-image defogging method based on prior-knowledge-guided conditional generative adversarial network |
CN108564597B (en) | Video foreground object extraction method fusing Gaussian mixture model and H-S optical flow method | |
CN110675340A (en) | Single image defogging method and medium based on improved non-local prior | |
CN105550999A (en) | Video image enhancement processing method based on background reuse | |
CN111242868B (en) | Image enhancement method based on convolutional neural network in scotopic vision environment | |
CN111861896A (en) | UUV-oriented underwater image color compensation and recovery method | |
CN112070688A (en) | Single-image defogging method based on context-guided generative adversarial network |
CN111598814B (en) | Single image defogging method based on extreme scattering channel | |
Bansal et al. | A review of image restoration based image defogging algorithms | |
CN112200746A (en) | Defogging method and device for foggy-day traffic scene images |
CN110807744A (en) | Image defogging method based on convolutional neural network | |
CN112164010A (en) | Multi-scale fusion convolutional neural network image defogging method |
Babu et al. | An efficient image dahazing using Googlenet based convolution neural networks | |
Zhao et al. | A multi-scale U-shaped attention network-based GAN method for single image dehazing | |
CN112070691B (en) | Image defogging method based on U-Net | |
Wang et al. | Afdn: Attention-based feedback dehazing network for UAV remote sensing image haze removal | |
CN111667498B (en) | Automatic detection method for moving ship targets oriented to optical satellite video | |
CN116596792B (en) | Inland river foggy scene recovery method, system and equipment for intelligent ship | |
Zhang et al. | Dehazing with improved heterogeneous atmosphere light estimation and a nonlinear color attenuation prior model | |
CN112288726A (en) | Method for detecting foreign matters on belt surface of underground belt conveyor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||