CN113450261A - Single image defogging method based on a conditional generative adversarial network - Google Patents

Single image defogging method based on a conditional generative adversarial network

Info

Publication number
CN113450261A
Authority
CN
China
Prior art keywords
image
generator
network
discriminator
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010217718.1A
Other languages
Chinese (zh)
Inventor
岑翼刚
张悦
阚世超
童忆
安高云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Yishi Intelligent Technology Co ltd
Original Assignee
Jiangsu Yishi Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Yishi Intelligent Technology Co ltd filed Critical Jiangsu Yishi Intelligent Technology Co ltd
Priority to CN202010217718.1A
Publication of CN113450261A
Legal status: Pending (current)


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/73: Deblurring; Sharpening
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image


Abstract

The invention discloses a single image defogging method based on a conditional generative adversarial network, comprising the following steps. Step 1: input a hazy image into the generator of the conditional generative adversarial network and train it to obtain a defogging generator model; then input a hazy image into the trained generator, whose output is the defogged image. Step 2: input the hazy image, the corresponding original clear image and the defogged image from step 1 into the discriminator of the adversarial network for discrimination. Step 3: train the whole conditional generative adversarial network, and defog hazy images using the generator network. The advantages of the invention are: both the generator and the discriminator accept images of any size and output defogged images of the same size; images need not be rescaled, so the information loss that rescaling causes is avoided; defogging performance is effectively improved across resolutions, and the defogging effect is good in different indoor and outdoor scenes.

Description

Single image defogging method based on a conditional generative adversarial network
Technical Field
The invention relates to the field of image processing and pattern recognition, and in particular to a single image defogging method based on a conditional generative adversarial network.
Background
Image defogging plays an important role in intelligent transportation: haze disturbs automatic recognition, and defogging effectively improves image recognition performance. Single image defogging means the model receives only one image and removes haze based solely on its content. Early defogging methods relied on hand-crafted designs. Unlike hand-crafted methods, deep learning learns the defogging model automatically and achieves better results, especially methods based on generative adversarial networks (GANs). As the technology has advanced and image restoration results have steadily improved, image defogging has become a popular research problem in recent years.
GAN-based image defogging has proven very effective and is widely used because a GAN needs only a small number of images to train a well-performing model. However, because different cameras produce images of different sizes, a conventional GAN must rescale inputs to a fixed size, and when defogging, this rescaling causes serious information loss, introducing a new problem.
Disclosure of Invention
Purpose of the invention: in view of the above problems, the object of the present invention is to provide a single image defogging method based on a conditional generative adversarial network in which, after the network is trained, a single image of any size can be input and a defogging result of the same size is output, with a better defogging effect.
Technical scheme: a single image defogging method based on a conditional generative adversarial network, characterized by comprising the following steps:
Step 1: the conditional generative adversarial network consists of a generator and a discriminator. Input a hazy image into the generator, which is a cascade of 8 U-shaped residual networks. Train the generator to obtain a defogging generator model; then input a hazy image into the trained generator, whose output is the defogged image.
Step 2: input the hazy image, the corresponding original clear image and the defogged image from step 1 together into the discriminator of the conditional generative adversarial network to obtain a discrimination result. The discriminator consists of 4 convolutional layers, 1 spatial pyramid pooling layer and 1 fully connected layer.
Step 3: train the whole conditional generative adversarial network, and defog hazy images using the generator network of the conditional generative adversarial network.
Further, the hazy images in step 1 are uniformly scaled to 512 × 512 and randomly horizontally flipped, and the processed images are then input into the generator.
The generator in step 1 is a cascade of 8 U-shaped residual networks. Each U-shaped residual network combines a U-shaped network with a residual connection, and the size of the deconvolution output feature map inside the block matches the size of the block's input feature map. The specific steps are:
Step (1.1): convolve the input feature map of the U-shaped residual network with a 5 × 5 kernel and stride 2; deconvolve the result with a 5 × 5 kernel and stride 2; concatenate the block's input feature map with the deconvolved map along the channel dimension to obtain feature map A. Before both the convolution and the deconvolution in this step, activate with the LReLU function.
Step (1.2): convolve feature map A from step (1.1) with a 3 × 3 kernel and stride 1, with the number of output channels half the number of input channels, obtaining feature map B. Before the convolution in this step, activate with the LReLU function.
Step (1.3): subtract feature map B from the input feature map of the U-shaped residual network to obtain the block's output.
Step (1.4): cascade 8 consecutive U-shaped residual network structures to form the generator.
Further, the discriminator in step 2 consists of 4 convolutional layers, 1 spatial pyramid pooling layer and 1 fully connected layer. The specific steps are:
Step (2.1): concatenate the original clear image with the corresponding hazy image as the discriminator's real input, and concatenate the defogged image output by the generator with the corresponding hazy image as the discriminator's fake input.
Step (2.2): pass the input image from step (2.1) through the discriminator's 4 convolutional layers; each uses a 5 × 5 kernel with stride 2, followed by batch normalization and LReLU activation.
Step (2.3): pass the feature maps from step (2.2) through the spatial pyramid pooling layer to obtain fixed-length features. The spatial pyramid consists of 1 × 1, 2 × 2, 3 × 3 and 4 × 4 grids, so the final feature length is 30 times the number of feature maps.
Step (2.4): input the features from step (2.3) into the fully connected layer to obtain the classification output, which determines whether the input is a real image or a generated image.
Further, the generator is trained with a loss function combining cross-entropy loss and L1 loss, with PSNR loss and SSIM loss introduced as well. The PSNR loss is 1 minus the PSNR between the original clear image and the defogged image divided by 40; the SSIM loss is 1 minus the SSIM between the original clear image and the defogged image. The sum of all the losses is computed and back-propagated to update the generator.
The discriminator is trained with a cross-entropy loss function; the cross-entropy loss is computed and back-propagated to update the discriminator.
During training, the discriminator is updated once for every four generator updates.
Further, the defogging in step 3 uses only the generator to defog the input hazy image; the discriminator participates only in training. The steps are:
Step (3.1): initialize the parameters of the generator network with the trained model parameters.
Step (3.2): input the hazy image into the generator network initialized in step (3.1) to obtain the defogged image.
Beneficial effects: compared with the prior art, the invention has the following advantages. First, both the generator and the discriminator accept image input of any size, and an input image of any size is defogged into an output of the same size. Second, defogging performance is effectively improved across resolutions, and the defogging effect is good in different indoor and outdoor scenes.
Drawings
FIG. 1 is a schematic diagram of the generator structure of the present invention;
FIG. 2 is a schematic diagram of the discriminator structure of the present invention;
FIG. 3 shows input hazy images, the corresponding haze feature maps and the defogged images of the present invention;
FIG. 4 shows defogged images output by the generator of the present invention.
Detailed Description
The present invention is further illustrated by the following figures and specific examples, which are to be understood as merely illustrative and not limiting the scope of the invention. After reading this specification, equivalent modifications made by those skilled in the art fall within the scope defined by the appended claims.
As shown in figs. 1 to 4, a single image defogging method based on a conditional generative adversarial network comprises the following steps:
Step 1: the conditional generative adversarial network consists of a generator and a discriminator. Input a hazy image into the generator which, as shown in fig. 1, is a cascade of 8 U-shaped residual networks. Train the generator to obtain a defogging generator model; then input a hazy image into the trained generator, whose output is the defogged image.
Step 2: input the hazy image, the corresponding original clear image and the defogged image from step 1 together into the discriminator of the conditional generative adversarial network to obtain a discrimination result. As shown in fig. 2, the discriminator consists of 4 convolutional layers, 1 spatial pyramid pooling layer and 1 fully connected layer.
Step 3: train the whole conditional generative adversarial network, and defog hazy images using the generator network.
Specifically, as shown in fig. 1, the hazy images input in step 1 are uniformly scaled to 512 × 512 and randomly horizontally flipped, and the processed images are then input into the generator. The generator is a cascade of 8 U-shaped residual networks; each combines a U-shaped network with a residual connection, and the size of the deconvolution output feature map inside the block matches the size of the block's input feature map. The specific steps are:
Step (1.1): convolve the input feature map of the U-shaped residual network with a 5 × 5 kernel and stride 2; deconvolve the result with a 5 × 5 kernel and stride 2; concatenate the block's input feature map with the deconvolved map along the channel dimension to obtain feature map A. Before both the convolution and the deconvolution in this step, activate with the LReLU function.
Step (1.2): convolve feature map A from step (1.1) with a 3 × 3 kernel and stride 1, with the number of output channels half the number of input channels, obtaining feature map B. Before the convolution in this step, activate with the LReLU function.
Step (1.3): subtract feature map B from the input feature map of the U-shaped residual network to obtain the block's output.
Step (1.4): the 8 convolution, deconvolution and residual operations form 8 consecutive U-shaped residual network structures; cascading these 8 structures forms the generator, which outputs the defogged image.
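The size bookkeeping of steps (1.1) to (1.3) can be sketched in plain Python. This is a shape-level sketch only, assuming "same"-style padding so that the stride-2 convolution halves spatial size and the stride-2 deconvolution doubles it; the function name is illustrative, not from the patent:

```python
import math

def ures_block_shapes(h, w, c):
    """Trace tensor shapes through one U-shaped residual block.

    Assumes 'same'-style padding: the stride-2 5x5 conv halves the
    spatial size and the stride-2 5x5 deconv doubles it (exact
    restoration for even sizes, as with 512 x 512 inputs).
    """
    # step (1.1): 5x5 conv, stride 2 -> downsampled feature map
    h1, w1 = math.ceil(h / 2), math.ceil(w / 2)
    # 5x5 deconv, stride 2 -> back to the block's input size
    h2, w2 = h1 * 2, w1 * 2
    assert (h2, w2) == (h, w), "even input sizes are restored exactly"
    # channel concat of block input and deconv output -> feature map A
    c_a = c * 2
    # step (1.2): 3x3 conv, stride 1, halves the channels -> feature map B
    c_b = c_a // 2
    # step (1.3): residual subtraction (input - B) keeps every dimension
    return h, w, c_b

print(ures_block_shapes(512, 512, 16))  # (512, 512, 16)
```

The block therefore maps a feature map to another of identical shape, which is why 8 such blocks can be cascaded freely and why the generator's output matches its input size.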
The generator is trained with a loss function using cross-entropy loss and L1 loss; to make the defogged image look more realistic, PSNR loss and SSIM loss are introduced as well. The PSNR loss is 1 minus the PSNR between the original clear image and the defogged image divided by 40, and the SSIM loss is 1 minus the SSIM between the original clear image and the defogged image. The sum of all these losses is computed and back-propagated to update the generator.
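The PSNR and SSIM loss terms above can be sketched in NumPy as follows. This is a sketch, assuming images normalized to [0, 1] and a single global SSIM window (windowed SSIM is the usual refinement); the cross-entropy and L1 terms are the standard ones and are omitted:

```python
import numpy as np

def psnr_loss(clear, dehazed, max_val=1.0):
    """PSNR loss as described: 1 - PSNR(clear, dehazed) / 40."""
    mse = np.mean((clear - dehazed) ** 2)
    psnr = 10.0 * np.log10(max_val ** 2 / (mse + 1e-10))
    return 1.0 - psnr / 40.0

def ssim_loss(clear, dehazed, max_val=1.0):
    """SSIM loss as described: 1 - SSIM(clear, dehazed).

    Global (single-window) SSIM with the standard stabilizers c1, c2.
    """
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mu_x, mu_y = clear.mean(), dehazed.mean()
    var_x, var_y = clear.var(), dehazed.var()
    cov = ((clear - mu_x) * (dehazed - mu_y)).mean()
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return 1.0 - ssim

x = np.linspace(0.0, 1.0, 16).reshape(4, 4)
print(round(ssim_loss(x, x), 6))  # 0.0 for identical images
print(round(psnr_loss(np.zeros((4, 4)), np.full((4, 4), 0.1)), 3))  # 0.5 (PSNR = 20 dB)
```

Dividing the PSNR by 40 keeps this term roughly in [0, 1] for typical reconstruction quality, so it can be summed with the other losses on a comparable scale.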
Specifically, as shown in fig. 2, the discriminator in step 2 consists of 4 convolutional layers, 1 spatial pyramid pooling layer and 1 fully connected layer. The specific steps are:
Step (2.1): concatenate the original clear image with the corresponding hazy image as the discriminator's real input, and concatenate the defogged image output by the generator with the corresponding hazy image as the discriminator's fake input.
Step (2.2): pass the input image from step (2.1) through the discriminator's 4 convolutional layers; each uses a 5 × 5 kernel with stride 2, followed by batch normalization and LReLU activation.
Step (2.3): pass the feature maps from step (2.2) through the spatial pyramid pooling layer to obtain fixed-length features. The spatial pyramid consists of 1 × 1, 2 × 2, 3 × 3 and 4 × 4 grids, so the final feature length is 30 times the number of feature maps.
Step (2.4): input the features from step (2.3) into the fully connected layer to obtain the classification output, which determines whether the input is a real image or a generated image.
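The fixed-length property of step (2.3) can be illustrated with a NumPy sketch of spatial pyramid max pooling over the 1 × 1, 2 × 2, 3 × 3 and 4 × 4 grids (the function name is illustrative; in the discriminator this is a network layer, and the 1 + 4 + 9 + 16 = 30 bins per channel explain the "30 times the number of feature maps" length):

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 3, 4)):
    """Max-pool a (C, H, W) feature map over n x n grids.

    Returns a fixed-length vector of 30 * C values (for the default
    levels) regardless of H and W, assuming H and W are at least 4.
    """
    c, h, w = fmap.shape
    feats = []
    for n in levels:
        # split H and W into n roughly equal strips
        hs = np.array_split(np.arange(h), n)
        ws = np.array_split(np.arange(w), n)
        for hi in hs:
            for wi in ws:
                cell = fmap[:, hi[0]:hi[-1] + 1, wi[0]:wi[-1] + 1]
                feats.append(cell.max(axis=(1, 2)))  # one value per channel
    return np.concatenate(feats)

v = spatial_pyramid_pool(np.random.rand(8, 37, 53))
print(v.shape)  # (240,) = 30 bins x 8 channels
```

Because the vector length depends only on the channel count, the fully connected layer after it works for any input image size, which is what lets the discriminator accept arbitrary-size inputs.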
The discriminator is trained with a cross-entropy loss function; the cross-entropy loss is computed and back-propagated to update the discriminator.
During training, the discriminator is updated once for every four generator updates.
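The 1:4 update schedule can be sketched as follows (`update_discriminator` and `update_generator` are hypothetical callbacks standing in for the loss computation and back-propagation described above):

```python
def train_epoch(steps, update_discriminator, update_generator):
    """One-to-four alternating schedule: each iteration updates the
    discriminator once, then the generator four times."""
    for _ in range(steps):
        update_discriminator()
        for _ in range(4):
            update_generator()

calls = []
train_epoch(2, lambda: calls.append("D"), lambda: calls.append("G"))
print("".join(calls))  # DGGGGDGGGG
```

Favoring generator updates this way is a common choice when the discriminator would otherwise learn too quickly and starve the generator of useful gradients.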
The defogging in step 3 uses only the generator to defog the input hazy image; the discriminator participates only in training. The steps are:
Step (3.1): initialize the parameters of the generator network with the trained model parameters.
Step (3.2): input the hazy image into the generator network initialized in step (3.1) to obtain the defogged image.
As shown in FIG. 3, which shows input hazy images, the corresponding haze feature maps and the defogged images of the present invention, the first row (Haze Input) is the input hazy images, the second row (Haze Map) is the resulting haze feature maps, and the third row (Dehazing Output) is the defogged images. As shown in FIG. 4, which shows defogged images output by the generator, the first and third rows (Haze Input) are hazy images, and the second and fourth rows (Dehazing Output) are the corresponding defogged images.
In implementation, images of a fixed size are first input into the conditional generative adversarial network, and the generator and discriminator are trained adversarially to obtain a model that can defog images. The trained parameters are then used to initialize the network, and images of non-fixed sizes are input into the adversarial network for fine-tuning, so that a good defogging effect is obtained for input images of any size. The invention effectively improves defogging performance across resolutions and achieves good results in different indoor and outdoor scenes. Because the input image size is flexible, images need not be rescaled at test time, overcoming the information loss caused by rescaling, so the method has good application prospects.

Claims (5)

1. A single image defogging method based on a conditional generative adversarial network, characterized by comprising the following steps:
step 1: the conditional generative adversarial network consists of a generator and a discriminator; input a hazy image into the generator, which is a cascade of 8 U-shaped residual networks; train the generator to obtain a defogging generator model, then input a hazy image into the trained generator, whose output is the defogged image;
step 2: input the hazy image, the corresponding original clear image and the defogged image from step 1 together into the discriminator of the conditional generative adversarial network to obtain a discrimination result; the discriminator consists of 4 convolutional layers, 1 spatial pyramid pooling layer and 1 fully connected layer;
step 3: train the whole conditional generative adversarial network, and defog hazy images using the generator network of the conditional generative adversarial network.
2. The single image defogging method based on a conditional generative adversarial network according to claim 1, wherein:
the hazy images in step 1 are uniformly scaled to 512 × 512 and randomly horizontally flipped, and the processed images are then input into the generator;
the generator in step 1 is a cascade of 8 U-shaped residual networks, each combining a U-shaped network with a residual connection, and the size of the deconvolution output feature map inside the block matches the size of the block's input feature map; the specific steps are:
step (1.1): convolve the input feature map of the U-shaped residual network with a 5 × 5 kernel and stride 2, deconvolve the result with a 5 × 5 kernel and stride 2, and concatenate the block's input feature map with the deconvolved map along the channel dimension to obtain feature map A; before both the convolution and the deconvolution in this step, activate with the LReLU function;
step (1.2): convolve feature map A from step (1.1) with a 3 × 3 kernel and stride 1, the number of output channels being half the number of input channels, to obtain feature map B; before the convolution in this step, activate with the LReLU function;
step (1.3): subtract feature map B from the input feature map of the U-shaped residual network to obtain the block's output;
step (1.4): cascade 8 consecutive U-shaped residual network structures to form the generator.
3. The single image defogging method based on a conditional generative adversarial network according to claim 1, wherein:
the discriminator in step 2 consists of 4 convolutional layers, 1 spatial pyramid pooling layer and 1 fully connected layer; the specific steps are:
step (2.1): concatenate the original clear image with the corresponding hazy image as the discriminator's real input, and concatenate the defogged image output by the generator with the corresponding hazy image as the discriminator's fake input;
step (2.2): pass the input image from step (2.1) through the discriminator's 4 convolutional layers; each uses a 5 × 5 kernel with stride 2, followed by batch normalization and LReLU activation;
step (2.3): pass the feature maps from step (2.2) through the spatial pyramid pooling layer to obtain fixed-length features; the spatial pyramid consists of 1 × 1, 2 × 2, 3 × 3 and 4 × 4 grids, so the final feature length is 30 times the number of feature maps;
step (2.4): input the features from step (2.3) into the fully connected layer to obtain the classification output, which determines whether the input is a real image or a generated image.
4. The single image defogging method based on a conditional generative adversarial network according to any one of claims 1 to 3, wherein:
the generator is trained with a loss function combining cross-entropy loss and L1 loss, with PSNR loss and SSIM loss introduced as well; the PSNR loss is 1 minus the PSNR between the original clear image and the defogged image divided by 40, and the SSIM loss is 1 minus the SSIM between the original clear image and the defogged image; the sum of all the losses is computed and back-propagated to update the generator;
the discriminator is trained with a cross-entropy loss function; the cross-entropy loss is computed and back-propagated to update the discriminator;
during training, the discriminator is updated once for every four generator updates.
5. The single image defogging method based on a conditional generative adversarial network according to claim 1, wherein:
the defogging in step 3 uses only the generator to defog the input hazy image, the discriminator participating only in training, and comprises:
step (3.1): initialize the parameters of the generator network with the trained model parameters;
step (3.2): input the hazy image into the generator network initialized in step (3.1) to obtain the defogged image.
CN202010217718.1A 2020-03-25 2020-03-25 Single image defogging method based on condition generation countermeasure network Pending CN113450261A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010217718.1A CN113450261A (en) 2020-03-25 2020-03-25 Single image defogging method based on condition generation countermeasure network


Publications (1)

Publication Number Publication Date
CN113450261A true CN113450261A (en) 2021-09-28

Family

ID=77806888

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010217718.1A Pending CN113450261A (en) 2020-03-25 2020-03-25 Single image defogging method based on condition generation countermeasure network

Country Status (1)

Country Link
CN (1) CN113450261A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114119428A (en) * 2022-01-29 2022-03-01 深圳比特微电子科技有限公司 Image deblurring method and device
CN114240796A (en) * 2021-12-22 2022-03-25 山东浪潮科学研究院有限公司 Remote sensing image cloud and fog removing method and device based on GAN and storage medium
CN116109496A (en) * 2022-11-15 2023-05-12 济南大学 X-ray film enhancement method and system based on double-flow structure protection network

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108038832A (en) * 2017-12-25 2018-05-15 中国科学院深圳先进技术研究院 A kind of underwater picture Enhancement Method and system
CN109300090A (en) * 2018-08-28 2019-02-01 哈尔滨工业大学(威海) A kind of single image to the fog method generating network based on sub-pix and condition confrontation
CN109493303A (en) * 2018-05-30 2019-03-19 湘潭大学 A kind of image defogging method based on generation confrontation network
CN109993804A (en) * 2019-03-22 2019-07-09 上海工程技术大学 A kind of road scene defogging method generating confrontation network based on condition
CN110322419A (en) * 2019-07-11 2019-10-11 广东工业大学 A kind of remote sensing images defogging method and system
US20200074239A1 (en) * 2018-09-04 2020-03-05 Seadronix Corp. Situation awareness method and device using image segmentation


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
柳小波 et al.: "Conveyor belt ore image segmentation method based on U-Net and ResUNet models", Journal of Northeastern University (Natural Science), pages 2 - 3 *
贾绪仲 et al.: "A defogging method based on conditional generative adversarial networks", Information & Computer (Theory Edition) *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination