CN114140334A - Complex coal mine image defogging method based on improved generation countermeasure network - Google Patents

Complex coal mine image defogging method based on improved generation countermeasure network Download PDF

Info

Publication number
CN114140334A
Authority
CN
China
Prior art keywords
image
network
defogging
attention
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111034865.6A
Other languages
Chinese (zh)
Inventor
李云飞
程吉祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Petroleum University
Original Assignee
Southwest Petroleum University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Petroleum University filed Critical Southwest Petroleum University
Priority to CN202111034865.6A priority Critical patent/CN114140334A/en
Publication of CN114140334A publication Critical patent/CN114140334A/en
Pending legal-status Critical Current

Classifications

    • G06T5/73
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06Q50/02 Agriculture; Fishing; Mining
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]


Abstract

The invention discloses a complex coal mine image defogging method based on an improved generation countermeasure network. The method is a two-stage network comprising a joint defogging network based on a haze imaging model and a refinement network based on a generation countermeasure network. In the first stage, the haze imaging model is embedded directly into the network, ensuring that the proposed method strictly follows the physical scattering model when defogging. For the second stage, a conditional generative adversarial network (GAN) is proposed to recover background details that the first stage failed to capture and to correct image artifacts introduced by that stage. Inspired by the attention mechanism, a novel channel attention (CA) network is proposed for recovering a true sharp image from the output of the first stage. For better results, the basic GAN formulation is further modified and perceptual fusion is introduced.

Description

Complex coal mine image defogging method based on improved generation countermeasure network
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an image defogging method using a channel attention mechanism and an Inception module.
Background
In a coal mine environment, the underground water vapor content is high and the relative humidity is large, so fog forms easily when there is a temperature difference between the underground air and the air above. Fog severely degrades imaging: a computer imaging system cannot acquire clear images, causing blurred content, reduced contrast, color distortion, information loss, low visibility and degraded scene images. This seriously affects coal mine construction work and creates safety hazards, placing higher demands on image defogging technology. Therefore, developing defogging methods for underground coal mine fog images has important significance and research value.
With the continuous improvement of image defogging algorithms, researchers have proposed a variety of methods. Defogging algorithms based on physical models are currently the most widely applied, while deep learning methods are increasingly favored owing to the rapid development of computer vision and image processing technology. Even so, there remains great room for development, and advances in information technology place further requirements on the reliability, timeliness, general applicability and anti-interference performance of defogging algorithms. It is therefore necessary to develop a coal mine image defogging method that avoids distortion of the defogged image.
Disclosure of Invention
Traditional defogging algorithms based on deep learning need paired foggy and fog-free images of the same scene as a training data set, which is difficult to acquire; most existing deep-learning defogging algorithms therefore use artificially synthesized foggy images as the training set. However, synthesized foggy images differ greatly from real foggy images in pixel distribution, so defogging models trained on synthetic data sets perform poorly in real foggy scenes. The invention provides a complex coal mine scene image defogging method based on a generation countermeasure network, which addresses these problems through the design of the generation countermeasure network and solves the problems of color distortion and blurring after image defogging.
1. In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
step 1: studying dark channel defogging, starting from the background model of fog image formation and, on the basis of the atmospheric scattering model, deriving step by step an estimate of the transmittance in the model, so as to deeply understand the basic process of fog image formation and the specific meaning of each model parameter, laying theoretical groundwork for later research and taking the problems and defects of the algorithm as key entry points for subsequent work;
step 2: learning a convolutional neural network to carry out image defogging, estimating the atmospheric transmittance by using a DehazeNet model established with a deep CNN structure, and using a bilateral rectified linear unit (BReLU) as the activation function to improve the recovery quality of the image;
and step 3: improving the CycleGAN network, building and training the neural network to complete the conversion from foggy image to clear image and finally optimize the defogging effect, continuously optimizing the generator and discriminator losses to steadily improve the quality of the generated image;
and 4, step 4: the defogging effect of the defogged image is verified through experiments with qualitative and quantitative evaluation, and the comparison results are analyzed to determine whether the improved algorithm based on the generation countermeasure network improves upon the dark channel prior algorithm in terms of subjective and objective evaluation indexes.
2. Further, the generation countermeasure network built in the step 3 is an improved version of loop generation countermeasure network with double generators G1, G2 and a single discriminator D; the generator G1 extracts the parameter T in the real foggy image y, the generator G2 extracts the parameter J in the real foggy image y, and the discriminator D judges whether the generated defogged image is a real fogless image.
3. Furthermore, the image defogging network model is composed of a shallow feature extraction convolution layer and a sub-network formed by a channel attention fusion module and an Inception module structure.
4. Further, a Multi-Attention Group in the Inception module of the image defogging network model is composed of a plurality of multi-attention fusion modules; the output feature of each group serves as the input of the next group, the output of each group is denoted Gi, where i is the group number, and finally the groups are cascaded to obtain the output of the Inception module:
C=[G1,G2,...,Gn],
wherein n is the number of groups.
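The cascade above, C = [G1, G2, ..., Gn], is a channel-wise concatenation of the group outputs. A minimal NumPy sketch (the channel-first layout and the example group shapes are assumptions, not stated in the text):

```python
import numpy as np

def cascade_groups(groups):
    """Cascade the outputs G1..Gn of the multi-attention groups by
    concatenating them along the channel axis: C = [G1, G2, ..., Gn]."""
    return np.concatenate(groups, axis=0)

# Hypothetical example: three groups, each producing 4 channels of 8x8 features
groups = [np.random.rand(4, 8, 8) for _ in range(3)]
C = cascade_groups(groups)
print(C.shape)  # -> (12, 8, 8)
```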
5. Furthermore, the 1 × 1 convolution layers introduced by the module reduce dimensionality, which in turn reduces the number of parameters fed into the 3 × 3 and 5 × 5 convolutions and accelerates network convergence.
6. Further, the channel attention module is composed of two convolutional layers and an attention unit.
7. Further, the working steps in the channel attention module are as follows:
7.1, in the attention unit, global average pooling is first applied to the input features:
Fc=Hp(Xc(i,j))
where Hp denotes the global average pooling function, Xc(i, j) denotes the value of channel c of the input at position (i, j), and Fc denotes the pooled feature;
7.2, in channel attention, the pooled feature is passed through a convolutional layer, a ReLU, a second convolutional layer and a sigmoid activation function to obtain the channel attention weights CA:
CA=σ(Conv(δ(Conv(Fc)))),
where σ denotes the sigmoid function and δ denotes the ReLU function;
the input features are then multiplied channel-wise by CA to obtain the channel attention feature FCA;
pixel attention features are derived from FCA through a convolutional layer, a ReLU, a second convolutional layer and a sigmoid activation function, giving the final fused channel attention output:
PA=σ(Conv(δ(Conv(FCA))));
where σ denotes the sigmoid function and δ denotes the ReLU function;
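A minimal NumPy sketch of steps 7.1 and 7.2 follows. The two convolutions are modeled as plain matrix multiplications over the pooled channel vector; the weight shapes w1, w2 and the channel reduction ratio are illustrative assumptions, not part of the patent:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """Channel attention: x is a (C, H, W) feature map; w1 (C//r, C) and
    w2 (C, C//r) stand in for the two convolutional layers."""
    # 7.1: global average pooling, Fc = Hp(Xc(i, j))
    f = x.mean(axis=(1, 2))                     # (C,)
    # 7.2: conv -> ReLU -> conv -> sigmoid, CA = sigma(Conv(delta(Conv(Fc))))
    ca = sigmoid(w2 @ np.maximum(w1 @ f, 0.0))  # (C,)
    # reweight each input channel by its attention weight
    return x * ca[:, None, None]
```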
the invention has the following technical characteristics:
1. the invention provides a complex coal mine scene image defogging method based on an improved generation countermeasure network, which can realize defogging treatment of an underground foggy image in a complex coal mine scene, and simultaneously avoid the problems of color distortion of the defogged image and unnatural scene restoration.
2. The method adopts a related algorithm in the field of deep learning, constructs a model through data processing and model training, obtains a foggy day image prediction model, and has reliability and advancement.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required for the embodiments or the prior-art descriptions are briefly introduced below. It is obvious that the drawings described below are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of a method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a generator network structure for improving generation of a countermeasure network according to an embodiment of the present invention.
FIG. 3 is a channel attention module according to an embodiment of the present invention.
Fig. 4 is a schematic view of the Inception module structure.
Fig. 5 is a schematic structural diagram of a discriminator according to an embodiment of the invention.
Detailed Description
The invention will be further illustrated with reference to specific examples:
the invention discloses a complex coal mine scene image defogging method based on an improved generation countermeasure network, which comprises the following steps:
step 1: researching the defogging of the dark channel, researching the defogging from a background model of fog image imaging, and gradually deriving an estimated value of the transmissivity in a derivation model on the basis of an atmospheric scattering model, so as to deeply understand the basic process of fog image imaging and the specific meanings of each parameter in the model, make theoretical bedding for the research of the later stage, and take the problems and defects in the algorithm as important breakthrough ports of the subsequent research;
step 2: learning a convolutional neural network to carry out image defogging, estimating atmospheric degradation transmittance by using a DehazeNet model established by a CNN deep structure, and using a bilateral correction linear unit as an activation function to improve the recovery quality of an image;
the invention proposes an improved version of the cyclic generation of a countermeasure network based on a dual generator G, F, single arbiter D. The generator F converts the real fog image y into a defogged image F (y), and the generator G converts the generated defogged image F (y) into a defogged image discriminator D to judge whether the generated defogged image is a real fog-free image.
Since an improved cycle-consistent generation countermeasure network is adopted, the generator and discriminator network structures need to be optimized and improved. The generator and discriminator structures designed by the invention are described in detail as follows:
the generator comprises three modules, namely an encoding module, a conversion module and a decoding module, as shown in fig. 2
An encoding module. The invention uses a convolutional neural network to extract features from the input image. The whole encoding module comprises three convolution units: the first comprises a convolution layer (kernel size 7 × 7, stride 1), a batch normalization layer and an activation function layer; the second comprises a convolution layer (kernel size 5 × 5, stride 1), a batch normalization layer and an activation function layer; the third comprises a convolution layer (kernel size 3 × 3, stride 1), a batch normalization layer and an activation function layer. This design extracts deeper feature information from the underground foggy image, avoids the vanishing-gradient problem during training, further reduces the computational complexity of model training and improves the training speed of the defogging model.
A conversion module. The conversion module of the generator is built from a densely connected network: feature maps of the same size are connected, and the input of each layer receives the outputs of all preceding layers. This alleviates the vanishing-gradient problem, mitigates overfitting during training and effectively improves feature utilization. The conversion module designed by the invention contains 5 densely connected units in total. The output of the l-th layer of the densely connected network is given by:
fl=Hl([f0,f1,f2,…,fl-1])
where Hl is a nonlinear transformation function comprising batch normalization, linear rectification and convolution operations, and [f0, f1, f2, …, fl-1] denotes the concatenation of the feature maps output by layers 0, 1, 2, …, l-1.
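The dense connection fl = Hl([f0, ..., fl-1]) can be sketched as follows. The toy Hl (a fixed 1 × 1 projection standing in for the batch normalization + ReLU + convolution combination) is an illustrative assumption:

```python
import numpy as np

def dense_layer(features, H):
    """One dense unit: concatenate all earlier outputs along the channel
    axis and apply the nonlinear transformation Hl."""
    return H(np.concatenate(features, axis=0))

def make_H(out_ch):
    """Toy stand-in for Hl: a 1x1 'convolution' that averages channels."""
    def H(x):                                    # x: (C_in, H, W)
        w = np.full((out_ch, x.shape[0]), 1.0 / x.shape[0])
        return np.tensordot(w, x, axes=1)        # (out_ch, H, W)
    return H

feats = [np.ones((4, 8, 8))]                     # f0
for _ in range(5):                               # the 5 dense units in the text
    feats.append(dense_layer(feats, make_H(4)))
print(len(feats), feats[-1].shape)               # -> 6 (4, 8, 8)
```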
A decoding module. The decoding module adopts transposed (de-)convolution layers to restore the feature information. The whole module comprises three convolution units: the first comprises a convolution layer (kernel size 3 × 3, stride 1/2, i.e. a transposed convolution with stride 2), a batch normalization layer and an activation function layer; the second likewise comprises a convolution layer (kernel size 3 × 3, stride 1/2), a batch normalization layer and an activation function layer; the third comprises a convolution layer (kernel size 7 × 7, stride 1), a batch normalization layer and an activation function layer. This design improves the ability to restore feature information, avoids the vanishing-gradient problem and mitigates overfitting during training.
The invention builds a discriminator for the generation countermeasure network, used to discriminate whether an image is an original image or one generated by the generator. To improve the capture of local small-area features, the invention designs the following discriminator network, as shown in fig. 5.
The discriminator network designed by the invention is composed entirely of convolution layers. The image input to the discriminator is divided into a number of image blocks, and the output of the discriminator is an n × n matrix, each element of which represents the judgement result for one image block; finally, the mean of all block judgements is taken as the judgement result for the generated image.
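The patch-wise decision can be sketched as follows; the 2 × 2 score map is a hypothetical example (the actual discriminator outputs an n × n map):

```python
import numpy as np

def patch_decision(score_map):
    """The convolutional discriminator emits an n x n score matrix, one
    element per image block; the whole-image judgement is their mean."""
    return float(score_map.mean())

scores = np.array([[0.9, 0.8],
                   [0.7, 1.0]])
print(patch_decision(scores))  # -> 0.85
```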
And step 3: improving the CycleGAN network, building and training the neural network to complete the conversion from foggy image to clear image and finally optimize the defogging effect, continuously optimizing the generator and discriminator losses to steadily improve the quality of the generated image;
step 3.1, reconstruction loss
Regularization of the reconstructed image with reconstruction loss, which we define as LrecIs IrealAnd IrecIs a distance therebetween, i.e.
Figure BDA0003246698000000051
Wherein IrealFor fuzzy input, IrecObtained by Eq. (1), with | | being a distance measure.
Step 3.2, identity loss
An identity loss is used to reduce artifacts that the refinement network R may introduce. This term encourages the output of R to resemble its input when the input is already a clear real-world image. We define Lidt as
Lidt = ||R(Jreal) - Jreal||
where || · || is the same distance metric as above; it may be either the L1 norm or the L2 norm. In our experiments we trained the two-stage network with both L1 and L2 and found that the obtained models showed almost the same performance, indicating that it is not necessary to deliberately choose between L1 and L2 to train the two-stage network.
Step 3.3, GAN loss
GAN losses were initially used to update the generator and discriminator in an adversarial fashion; LGAN is used to supervise R and D. It is defined as
LGAN = E[log D(Jreal)] + E[log(1 - D(R(Jdcp)))]
where Jreal ranges over the set of all possible clear images and Jdcp over the set of all possible first-stage outputs.
Step 3.4, overall loss function.
The overall objective is expressed as
L = Lrec + Lidt + a · LGAN    (4)
where a is the weight of the GAN term LGAN; the default value of a is 0.02.
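A sketch of the overall objective, assuming Eq. (4) is a plain weighted sum of the three terms. The source shows only the weight a = 0.02; the L1 metric and the exact combination are assumptions:

```python
import numpy as np

def l1(a, b):
    """Distance metric used for the reconstruction and identity terms."""
    return float(np.abs(a - b).mean())

def total_loss(i_real, i_rec, j_real, j_idt, gan_term, a=0.02):
    """L = L_rec + L_idt + a * L_GAN."""
    return l1(i_real, i_rec) + l1(j_real, j_idt) + a * gan_term

# Toy tensors: reconstruction off by 1 everywhere, identity exact
print(total_loss(np.zeros(4), np.ones(4), np.zeros(4), np.zeros(4), 1.0))  # -> 1.02
```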
And 4, step 4: the defogging effect of the defogged image is verified through experiments with qualitative and quantitative evaluation, and the comparison results are analyzed to determine whether the improved algorithm based on the generation countermeasure network improves upon the dark channel prior algorithm in terms of subjective and objective evaluation indexes.
Step 4.1, subjective evaluation
The subjective evaluation is an intuitive feeling of an observer in terms of contrast, saturation, sharpness, and the like of an image, and has subjective consciousness. It can be classified into an absolute evaluation method with a reference image and a relative evaluation method without a reference image. The method can simply and intuitively give the evaluation result, but different observers may have different evaluation results due to different subjective feelings of the same image.
Step 4.2, Objective evaluation
Objective evaluation assesses the processed image with numerical data. Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) are used to evaluate the defogging effect.
MSE, the Mean Square Error, is based on the error between corresponding pixels and measures the similarity between the original image and the processed image: the smaller the MSE, the more similar the two images. Assuming the foggy image has size H × W, the formula is:
MSE = (1 / (H · W)) Σ_{i=1..H} Σ_{j=1..W} (X(i, j) - Y(i, j))²
where X(i, j) is the pixel value in row i, column j of the original image and Y(i, j) the corresponding pixel of the processed image. PSNR, the Peak Signal-to-Noise Ratio, is the most widely used objective criterion for evaluating images. It is also based on the error between corresponding pixels; the larger the PSNR, the more similar the two images. The formula is:
PSNR = 10 · log10(MAX² / MSE)
where MAX is the maximum possible pixel value (255 for 8-bit images).
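MSE and PSNR as defined above can be computed directly; a minimal NumPy sketch (the peak value 255 assumes 8-bit images):

```python
import numpy as np

def mse(x, y):
    """Mean squared error between original X and processed Y."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    return float(np.mean((x - y) ** 2))

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB; larger means more similar."""
    m = mse(x, y)
    return float('inf') if m == 0 else float(10.0 * np.log10(peak ** 2 / m))
```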
SSIM, the Structural Similarity index, was proposed to remedy the inability of MSE and PSNR to measure the structural similarity of images, and is based on the human visual system. The human eye is insensitive to absolute illumination but sensitive to local illumination variation, and insensitive to absolute gray level but sensitive to the relative change of local gray levels and the overall local structure. The larger the SSIM, the more similar the original and processed images. The formula is:
SSIM(X, Y) = ((2 μx μy + c1)(2 σxy + c2)) / ((μx² + μy² + c1)(σx² + σy² + c2))
where μx and μy are the means of all pixels of the original and processed images, σx² and σy² their variances, σxy the covariance between the two images, and c1, c2 small constants that stabilize the division.
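The SSIM formula above can be sketched with a single window over the whole image (a simplification: practical implementations slide an 11 × 11 Gaussian window; the constants c1, c2 use the conventional values for 8-bit images):

```python
import numpy as np

def ssim_global(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Single-window SSIM between original X and processed Y."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()          # means
    vx, vy = x.var(), y.var()            # variances
    cov = ((x - mx) * (y - my)).mean()   # covariance
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```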

Claims (7)

1. A complex coal mine image defogging method based on improved generation of a countermeasure network comprises the following steps:
step 1, studying dark channel defogging, starting from the background model of fog image formation and, on the basis of the atmospheric scattering model, deriving step by step an estimate of the transmittance in the model, so as to deeply understand the basic process of fog image formation and the specific meaning of each model parameter, laying theoretical groundwork for later research and taking the problems and defects of the algorithm as key entry points for subsequent work;
step 2, learning a convolutional neural network to carry out image defogging, estimating the atmospheric transmittance by using a DehazeNet model established with a deep CNN structure, and using a bilateral rectified linear unit (BReLU) as the activation function to improve the recovery quality of the image;
step 3, improving the CycleGAN network, building and training the neural network to complete the conversion from foggy image to clear image and finally optimize the defogging effect, continuously optimizing the generator and discriminator losses to steadily improve the quality of the generated image;
and 4, verifying the defogging effect of the defogged image through experiments with qualitative and quantitative evaluation, and analyzing the comparison results to determine whether the improved algorithm based on the generation countermeasure network improves upon the dark channel prior algorithm in terms of subjective and objective evaluation indexes.
2. The complex coal mine scene image defogging method based on the generation of the countermeasure network as claimed in claim 1, wherein: the generation countermeasure network built in the step 3 is an improved version of a cyclic generation countermeasure network with double generators G1 and G2 and a single discriminator D; the generator G1 extracts the parameter T in the real foggy image y, the generator G2 extracts the parameter J in the real foggy image y, and the discriminator D judges whether the generated defogged image is a real fogless image.
3. The method for defogging the complex coal mine scene image based on the improved generation countermeasure network according to claim 2, wherein: the image defogging network model consists of a shallow feature extraction convolution layer and a sub-network formed by a channel attention fusion module and an Inception module structure.
4. The image defogging method according to the attention mechanism of claim 3, wherein a Multi-Attention Group in the Inception module of the image defogging network model is composed of a plurality of multi-attention fusion modules; the output feature of each group serves as the input of the next group, the output of each group is denoted Gi, where i is the group number, and finally the groups are cascaded to obtain the output of the Inception module:
C=[G1,G2,...,Gn],
wherein n is the number of groups.
5. The image defogging method of the Inception module according to claim 3, wherein the 1 × 1 convolution layers introduced by said module reduce dimensionality, which in turn reduces the number of parameters fed into the 3 × 3 and 5 × 5 convolutions and accelerates network convergence.
6. The method of image defogging according to the channel attention mechanism of claim 3, wherein said channel attention module is composed of two convolution layers and an attention unit.
7. The method for image defogging according to the channel attention mechanism of claim 6, wherein the working steps in the channel attention module are as follows:
7.1, in the attention unit, global average pooling is first applied to the input features:
Fc=Hp(Xc(i,j))
where Hp denotes the global average pooling function, Xc(i, j) denotes the value of channel c of the input at position (i, j), and Fc denotes the pooled feature;
7.2, in channel attention, the pooled feature is passed through a convolutional layer, a ReLU, a second convolutional layer and a sigmoid activation function to obtain the channel attention weights CA:
CA=σ(Conv(δ(Conv(Fc)))),
where σ denotes the sigmoid function and δ denotes the ReLU function;
the input features are multiplied channel-wise by CA to obtain the channel attention feature FCA;
pixel attention features are then derived from FCA through a convolutional layer, a ReLU, a second convolutional layer and a sigmoid activation function, giving the final channel attention output:
PA=σ(Conv(δ(Conv(FCA))));
where σ denotes the sigmoid function and δ denotes the ReLU function;
the technical scheme of the invention has the following beneficial effects: the invention designs a module which integrates channel attention and acceptance, and constructs an end-to-end-based image defogging network by overlapping the module and residual connection, thereby obtaining better defogging effect.
CN202111034865.6A 2021-09-04 2021-09-04 Complex coal mine image defogging method based on improved generation countermeasure network Pending CN114140334A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111034865.6A CN114140334A (en) 2021-09-04 2021-09-04 Complex coal mine image defogging method based on improved generation countermeasure network


Publications (1)

Publication Number Publication Date
CN114140334A true CN114140334A (en) 2022-03-04

Family

ID=80394596

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111034865.6A Pending CN114140334A (en) 2021-09-04 2021-09-04 Complex coal mine image defogging method based on improved generation countermeasure network

Country Status (1)

Country Link
CN (1) CN114140334A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115457265A (en) * 2022-08-25 2022-12-09 暨南大学 Image defogging method and system based on generation countermeasure network and multi-scale fusion


Similar Documents

Publication Publication Date Title
Jiang et al. Decomposition makes better rain removal: An improved attention-guided deraining network
CN111784602B (en) Method for generating countermeasure network for image restoration
CN111739082B (en) Stereo vision unsupervised depth estimation method based on convolutional neural network
CN113870124B (en) Weak supervision-based double-network mutual excitation learning shadow removing method
CN114170286B (en) Monocular depth estimation method based on unsupervised deep learning
CN114463218B (en) Video deblurring method based on event data driving
CN114897742B (en) Image restoration method with texture and structural features fused twice
CN114187203A (en) Attention-optimized deep codec defogging generation countermeasure network
CN112070688A (en) Single image defogging method for generating countermeasure network based on context guidance
CN115100301A (en) Image compression sensing method and system based on fast Fourier convolution and convolution filtering flow
CN116309232A (en) Underwater image enhancement method combining physical priori with deep learning
CN115100223A (en) High-resolution video virtual character keying method based on deep space-time learning
CN114140334A (en) Complex coal mine image defogging method based on improved generation countermeasure network
Wang et al. Uneven image dehazing by heterogeneous twin network
CN111553856B (en) Image defogging method based on depth estimation assistance
CN116363036B (en) Infrared and visible light image fusion method based on visual enhancement
CN116596792B (en) Inland river foggy scene recovery method, system and equipment for intelligent ship
CN117036182A (en) Defogging method and system for single image
CN115760640A (en) Coal mine low-illumination image enhancement method based on noise-containing Retinex model
CN111275751A (en) Unsupervised absolute scale calculation method and system
CN114820395B (en) Underwater image enhancement method based on multi-field information fusion
CN116523794A (en) Low-light image enhancement method based on convolutional neural network
CN117151990A (en) Image defogging method based on self-attention coding and decoding
CN115578638A (en) Method for constructing multi-level feature interactive defogging network based on U-Net
Kumar et al. Underwater Image Enhancement using deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination