CN112950498A - Image defogging method based on adversarial network and multi-scale dense feature fusion - Google Patents

Image defogging method based on adversarial network and multi-scale dense feature fusion

Info

Publication number
CN112950498A
CN112950498A (application CN202110209279.4A)
Authority
CN
China
Prior art keywords
feature
network
module
image
feature fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110209279.4A
Other languages
Chinese (zh)
Inventor
万超颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Jiacheng Technology Co ltd
Original Assignee
Suzhou Jiacheng Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Jiacheng Technology Co ltd
Priority to CN202110209279.4A
Publication of CN112950498A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks


Abstract

The invention discloses an image defogging method based on an adversarial network and multi-scale dense feature fusion, belonging to the technical field of image processing. The defogging method comprises the following specific steps: (1) constructing a conditional generative adversarial network for image defogging; (2) training the network to convergence on a public image defogging dataset; (3) taking the trained generator network as the image defogging network, whose input is a hazy image and whose output is the defogged image. The invention introduces residual dense blocks and a dense feature fusion module based on the back-projection technique into the generator network, using them to perform local feature fusion and multi-scale dense feature fusion respectively; by effectively fusing features of different levels, clearer images are obtained step by step, which helps to make the defogged image more natural and to suppress noise.

Description

Image defogging method based on adversarial network and multi-scale dense feature fusion
Technical Field
The invention relates to the technical field of image processing, in particular to an image defogging method based on an adversarial network and multi-scale dense feature fusion.
Background
Through retrieval, Chinese patent No. CN102930514A discloses a rapid image defogging method based on an atmospheric physical scattering model; although that method can defog images through the atmospheric scattering model, the result is unnatural and noisy. In recent years, haze weather in Chinese cities has gradually increased. Light emitted by a scene is attenuated by haze before it reaches the photosensitive element of a camera, and the scattering of light by haze greatly degrades imaging quality. Since computer vision technology based on image analysis is widely applied in fields such as military and national defense, artificial intelligence, and automatic control, an effective image defogging technique is needed as a preprocessing step for computer vision applications to reduce or eliminate the influence of haze. In the widely adopted hazy-image imaging model (written out below for reference), the atmospheric light intensity, the scene transmission and the image to be restored are all unknown; only the hazy image acquired by the camera is known. Single-image defogging is therefore mathematically an ill-posed problem, and its key difficulty is estimating the transmission map of the haze. To address this ill-posedness, existing image defogging methods are mainly divided into methods based on prior knowledge and methods based on deep learning; these methods basically adopt general-purpose network structures, which have certain limitations. It therefore becomes important to devise an image defogging method based on an adversarial network and multi-scale dense feature fusion.
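For reference, the hazy-image imaging model referred to above is commonly written in the following standard atmospheric-scattering form; this is the textbook formulation, not a formula quoted from this patent:

```latex
I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr)
```

where I is the observed hazy image, J is the clear scene radiance to be restored, t is the scene transmission, and A is the global atmospheric light; with J, t and A all unknown, recovering J from I alone is under-determined.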
the existing image defogging methods mostly adopt universal network structures, the structures often only utilize low-level local features of relatively shallow networks, however, high-level semantic information is difficult to be coded by the low-level features under limited receptive fields, and further, the image defogging is easy to be unnatural and noise exists; therefore, an image defogging method based on the countermeasure network and multi-scale dense feature fusion is provided.
Disclosure of Invention
The invention aims to remedy the defects of the prior art, and provides an image defogging method based on an adversarial network and multi-scale dense feature fusion.
In order to achieve the purpose, the invention adopts the following technical scheme:
an image defogging method based on countermeasure network and multi-scale dense feature fusion comprises the following specific steps:
(1) constructing a conditional generative adversarial network for image defogging;
(2) training the network to convergence on a public image defogging dataset;
(3) taking the trained generator network as the image defogging network, whose input is a hazy image and whose output is the defogged image.
Further, the conditional generative adversarial network of step (1) comprises a generator network, a discriminator network and an overall objective function.
Furthermore, the structural design of the generator network is based on the U-Net architecture and comprises an encoding module, a decoding module and a feature recovery module; the encoding module is provided with an encoder and integrates residual dense block (RDB) modules and DFF modules based on the back-projection technique, and the decoding module is provided with a decoder. The generator network operates as follows (a minimal code sketch follows this list):
S1: setting the input of the network to the hazy image I;
S2: sending the hazy image to the encoder part, where a convolution operation downsamples the image into a feature map;
S3: at each downsampling stage, sending the feature map into one residual dense block (RDB) module for local dense feature extraction and local feature fusion, then downsampling the fused feature map through a convolution layer with stride 2, and then sending the downsampled feature map into a DFF module to compensate for the missing spatial information;
S4: doubling the number of feature maps at each stage, and sending the feature maps to the feature recovery module after 4 downsampling operations;
S5: upsampling the feature map enhanced by the feature recovery module back to the original input size through 4 deconvolutions;
S6: after each upsampling, concatenating the output with the latent features from the corresponding encoder layer, and then feeding them together into a residual dense block (RDB) module for feature refinement;
S7: feeding the refined features into a DFF module for adaptive feature fusion.
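To make steps S1-S7 concrete, here is a minimal PyTorch sketch of the described encoder-decoder layout, assuming 4 stages and 3-channel RGB input. The RDB and DFF blocks are stubbed with plain convolutions (their own sketches appear after their respective descriptions below); the class name GeneratorSketch, the channel widths and all layer hyperparameters are illustrative assumptions, not the patent's exact configuration.

```python
# Minimal sketch of the S1-S7 generator layout; RDB/DFF slots are stubbed.
import torch
import torch.nn as nn

class GeneratorSketch(nn.Module):
    def __init__(self, base_channels=16):
        super().__init__()
        c = base_channels
        self.head = nn.Conv2d(3, c, 3, padding=1)   # S1/S2: hazy image -> feature map
        # S3/S4: four stages; each would hold an RDB, a stride-2 conv and a DFF
        # module -- only the stride-2 conv is kept here, channels double per stage
        self.down = nn.ModuleList(
            nn.Sequential(nn.Conv2d(c * 2**i, c * 2**(i + 1), 3, stride=2, padding=1),
                          nn.ReLU(inplace=True))
            for i in range(4))
        # feature recovery module (8 RDBs in the patent), stubbed as plain convs
        self.recover = nn.Sequential(
            *[nn.Sequential(nn.Conv2d(c * 16, c * 16, 3, padding=1),
                            nn.ReLU(inplace=True)) for _ in range(8)])
        # S5: four deconvolutions upsample back to the input size
        self.up = nn.ModuleList(
            nn.ConvTranspose2d(c * 2**(i + 1), c * 2**i, 4, stride=2, padding=1)
            for i in reversed(range(4)))
        # S6: refine concat(upsampled, encoder skip); an RDB would sit here
        self.fuse = nn.ModuleList(
            nn.Conv2d(c * 2**(i + 1), c * 2**i, 3, padding=1)
            for i in reversed(range(4)))
        self.tail = nn.Conv2d(c, 3, 3, padding=1)   # S7 output (DFF slot omitted)

    def forward(self, hazy):                        # input H, W divisible by 16
        x = self.head(hazy)
        skips = []
        for down in self.down:
            skips.append(x)                         # latent feature kept for S6
            x = down(x)
        x = self.recover(x)
        for up, fuse, skip in zip(self.up, self.fuse, reversed(skips)):
            x = fuse(torch.cat([up(x), skip], dim=1))
        return self.tail(x)
```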
Further, the discriminator network adopts the PatchGAN structure, and its output is an N × N matrix of patch scores; a sketch is given below.
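Below is a hedged sketch of a conditional PatchGAN discriminator of the kind described, assuming the hazy image is concatenated with the real or generated image along the channel dimension; the layer widths and depth are illustrative assumptions.

```python
# Hedged sketch of a conditional PatchGAN discriminator; widths are assumed.
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    def __init__(self, in_channels=6, base=64):
        super().__init__()
        layers, prev = [], in_channels
        for i, ch in enumerate([base, base * 2, base * 4, base * 8]):
            stride = 2 if i < 3 else 1
            layers += [nn.Conv2d(prev, ch, 4, stride=stride, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
            prev = ch
        # final 1-channel conv produces the N x N score map; each entry
        # judges whether one local patch of the input looks real
        layers.append(nn.Conv2d(prev, 1, 4, stride=1, padding=1))
        self.model = nn.Sequential(*layers)

    def forward(self, hazy, image):
        # conditional input: hazy image concatenated with real/generated image
        return self.model(torch.cat([hazy, image], dim=1))

# e.g. a 256 x 256 input pair yields roughly a 30 x 30 score matrix
```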
Further, the overall objective function adopts the WGAN-GP loss, adapted to the conditional setting as the adversarial training loss:

L_adv(G, D) = E_{I,J}[D(I, J)] − E_I[D(I, G(I))] − λ_GP · E_Ĵ[(‖∇_Ĵ D(I, Ĵ)‖₂ − 1)²] (1)

Introducing the L₁ and L₂ losses into the objective gives:

L₁(G) = E_{I,J}[‖J − G(I)‖₁] (2)

L₂(G) = E_{I,J}[‖J − G(I)‖₂²] (3)

Combining these losses yields the final overall objective function:

min_G max_D L_adv(G, D) + λ₁L₁(G) + λ₂L₂(G) (4)

in the formulas: G is the generator network, D is the discriminator network, I is the hazy image, J is the real image, Ĵ is a sample taken along the line between J and the image G(I) generated by the generator network, λ_GP is a weighting factor on the gradient penalty, and λ₁ and λ₂ weight the L₁ and L₂ losses. A computational sketch of these losses follows.
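The following sketch shows how the reconstructed objective could be computed in PyTorch, assuming the conditional WGAN-GP form given above; the values of lambda_gp, lambda_1 and lambda_2 are illustrative assumptions, as the patent does not state them.

```python
# Sketch of formulas (1)-(4): conditional WGAN-GP plus L1 and L2 terms.
import torch

def gradient_penalty(D, hazy, real, fake):
    # \hat{J}: a sample along the line between the real image J and G(I)
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    j_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = D(hazy, j_hat)
    grads = torch.autograd.grad(scores.sum(), j_hat, create_graph=True)[0]
    return ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

def discriminator_loss(D, hazy, real, fake, lambda_gp=10.0):
    # the critic maximizes D(I, J) - D(I, G(I)); we minimize the negation,
    # plus the gradient penalty term of formula (1)
    fake = fake.detach()
    wgan = D(hazy, fake).mean() - D(hazy, real).mean()
    return wgan + lambda_gp * gradient_penalty(D, hazy, real, fake)

def generator_loss(D, hazy, real, fake, lambda_1=100.0, lambda_2=10.0):
    adv = -D(hazy, fake).mean()                 # adversarial term of formula (1)
    l1 = (real - fake).abs().mean()             # L1(G), formula (2)
    l2 = ((real - fake) ** 2).mean()            # L2(G), formula (3)
    return adv + lambda_1 * l1 + lambda_2 * l2  # combined objective, formula (4)
```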
Further, feature fusion employs a DFF module based on the back-projection technique, which generates high-resolution content by minimizing the reconstruction error between an estimated high-resolution result and a plurality of observed low-resolution inputs.
Further, the feature recovery module comprises 8 residual dense block (RDB) modules; each RDB module comprises densely connected layers, local feature fusion and local residual learning, and operates as follows (a code sketch follows this list):
SS1: reading the state of the previous RDB module through a contiguous memory mechanism, passing it to each layer of the current RDB module, and at the same time making full use of all layers in the current RDB module through local dense connections;
SS2: performing local feature fusion through a 1 × 1 convolution layer to adaptively retain the accumulated features;
SS3: further improving the information flow through local residual learning, which improves the representational capability of the network.
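A minimal sketch of one RDB module as described in SS1-SS3, assuming a DenseNet-style growth rate; the channel count, growth rate and layer count are illustrative assumptions.

```python
# Sketch of one residual dense block (RDB): dense connections (SS1),
# 1x1 local feature fusion (SS2) and local residual learning (SS3).
import torch
import torch.nn as nn

class RDB(nn.Module):
    def __init__(self, channels=64, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            # SS1: each layer sees the block input plus all previous outputs
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, 3, padding=1),
                nn.ReLU(inplace=True)))
        # SS2: 1x1 convolution performs local feature fusion
        self.local_fusion = nn.Conv2d(channels + num_layers * growth, channels, 1)

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        fused = self.local_fusion(torch.cat(features, dim=1))
        return x + fused      # SS3: local residual learning
```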
Further, the DFF module of the encoder at the n-th stage is defined as:

î_n = DFF_n^E(i_n, {î_1, …, î_{n−1}})

in the formula: i_n is the latent feature of the n-th stage of the encoder, î_n is the enhanced feature obtained by feature fusion, and {î_1, …, î_{n−1}} are all the enhanced features fused by the DFF modules in the first (n−1) stages of the encoder;
the DFF module of the decoder at the n-th stage is defined as:

ĵ_n = DFF_n^D(j_n, {ĵ_1, …, ĵ_{n−1}}), n = 1, …, L

in the formula: j_n is the latent feature of the n-th stage of the decoder, ĵ_n is the enhanced feature obtained by feature fusion, L is the number of feature levels, and {ĵ_1, …, ĵ_{n−1}} are all the enhanced features fused by the DFF modules in the first (n−1) stages of the decoder.
Further, the enhanced features are updated in a progressive manner; the specific updating steps are as follows (a code sketch of the two formulas follows this list):
SSS1: at the t-th feature fusion, compute the feedback error between the latent feature and the enhanced feature of the DFF module through formula one;
SSS2: the feedback error is obtained by subtracting the upsampled enhanced feature from the latent feature;
SSS3: the feedback error is then downsampled to the same size as the enhanced feature through formula two and added to the enhanced feature, giving the fused enhanced feature.
Formula one of step SSS1 is as follows:

e_t = i_t − U_t(î^{t−1})

Formula two of step SSS3 is as follows:

î^t = î^{t−1} + D_t(e_t)

in the formulas: e_t is the feedback error, î^t is the enhanced feature after the t-th fusion, U_t is the upsampling operator, i_t is the latent feature, and D_t is the downsampling operator.
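The progressive back-projection update can be sketched as follows; the strided-convolution up/down-sampling operators, the per-stage scale factors, and the assumption of equal channel widths across stages are all illustrative choices, since the patent does not specify the exact operators.

```python
# Sketch of formulas one and two, and of an n-th stage DFF module built from
# them; operator choices (strided conv / transposed conv) are assumptions.
import torch
import torch.nn as nn

class BackProjectionUpdate(nn.Module):
    def __init__(self, channels=64, scale=2):
        super().__init__()
        # U_t: upsampling operator used in formula one (exact doubling etc.)
        self.up = nn.ConvTranspose2d(channels, channels, 2 * scale,
                                     stride=scale, padding=scale // 2)
        # D_t: downsampling operator used in formula two
        self.down = nn.Conv2d(channels, channels, 3, stride=scale, padding=1)

    def forward(self, enhanced, latent):
        # formula one: feedback error between the latent feature and the
        # upsampled enhanced feature
        error = latent - self.up(enhanced)
        # formula two: downsample the error to the enhanced feature's size
        # and add it, giving the fused enhanced feature
        return enhanced + self.down(error)

class DFFSketch(nn.Module):
    """n-th stage DFF: progressively fuse the stage's latent feature with the
    enhanced features of earlier stages, one update per earlier stage."""
    def __init__(self, channels=64, num_prev=3):
        super().__init__()
        # earlier-stage features are assumed 2x, 4x, ... larger spatially
        self.updates = nn.ModuleList(
            BackProjectionUpdate(channels, scale=2 ** (k + 1))
            for k in range(num_prev))

    def forward(self, latent, prev_enhanced):
        enhanced = latent
        for update, prev in zip(self.updates, prev_enhanced):
            enhanced = update(enhanced, prev)   # t-th progressive fusion
        return enhanced
```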
Compared with the prior art, the invention has the following beneficial effects:
1. The image defogging method based on a conditional generative adversarial network and multi-scale dense feature fusion introduces residual dense blocks and a dense feature fusion module based on the back-projection technique into the generator network, using them to perform local feature fusion and multi-scale dense feature fusion respectively; by effectively fusing features of different levels, clearer images are obtained step by step. An RDB module is applied at every level of the encoding and decoding parts: rich local features are extracted through the densely connected convolution layers, while local feature fusion adaptively learns more effective features from the previous and current local features, making the defogged image more natural and suppressing noise.
2. The structural design of the generator network in this method is based on the U-Net architecture, which has inherent problems when applied to image defogging: downsampling in the encoder loses spatial information, and features of non-adjacent levels lack sufficient connections. Therefore, a DFF module based on the back-projection technique is introduced at each level of the generator network to perform multi-scale dense feature fusion and make full use of features of different scales, which avoids losing the spatial information contained in the fully extracted local dense features.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention.
FIG. 1 is an overall flow chart of the image defogging method based on an adversarial network and multi-scale dense feature fusion provided by the invention;
FIG. 2 is a schematic diagram of a generator network structure and a discriminator network structure according to the invention;
FIG. 3 is a diagram of a residual dense block RDB module according to the present invention;
fig. 4 is a schematic diagram of the network architecture of the DFF module of the present invention at the nth stage of the encoder.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
In the description of the present invention, it is to be understood that the terms "upper", "lower", "front", "rear", "left", "right", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, are merely for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention.
Referring to fig. 1-4, an image defogging method based on an adversarial network and multi-scale dense feature fusion comprises the following specific steps:
(1) constructing a conditional generative adversarial network for image defogging;
(2) training the network to convergence on a public image defogging dataset (a minimal training-loop sketch follows this list);
(3) taking the trained generator network as the image defogging network, whose input is a hazy image and whose output is the defogged image.
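A minimal alternating training loop for step (2) might look as follows, reusing the GeneratorSketch, PatchDiscriminator and loss functions sketched earlier; the optimizer settings and the dataset loader are assumptions (the patent only states that a public defogging dataset is used).

```python
# Hedged sketch of step (2): alternate critic and generator updates until
# convergence; assumes `loader` yields (hazy, clear) image-tensor pairs.
import torch

def train(generator, discriminator, loader, epochs=100, lr=1e-4, device="cpu"):
    generator.to(device)
    discriminator.to(device)
    g_opt = torch.optim.Adam(generator.parameters(), lr=lr, betas=(0.5, 0.9))
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=lr, betas=(0.5, 0.9))
    for _ in range(epochs):
        for hazy, clear in loader:
            hazy, clear = hazy.to(device), clear.to(device)

            # discriminator (critic) step: WGAN-GP loss from the earlier sketch
            d_opt.zero_grad()
            d_loss = discriminator_loss(discriminator, hazy, clear, generator(hazy))
            d_loss.backward()
            d_opt.step()

            # generator step: adversarial + L1 + L2 terms
            g_opt.zero_grad()
            g_loss = generator_loss(discriminator, hazy, clear, generator(hazy))
            g_loss.backward()
            g_opt.step()
```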
The conditional generative adversarial network of step (1) comprises a generator network, a discriminator network and an overall objective function.
The generator network is structurally designed on the basis of the U-Net architecture and comprises an encoding module, a decoding module and a feature recovery module; the encoding module is provided with an encoder and integrates residual dense block (RDB) modules and DFF modules based on the back-projection technique, and the decoding module is provided with a decoder. The generator network operates as follows:
S1: setting the input of the network to the hazy image I;
S2: sending the hazy image to the encoder part, where a convolution operation downsamples the image into a feature map;
S3: at each downsampling stage, sending the feature map into one residual dense block (RDB) module for local dense feature extraction and local feature fusion, then downsampling the fused feature map through a convolution layer with stride 2, and then sending the downsampled feature map into a DFF module to compensate for the missing spatial information;
S4: doubling the number of feature maps at each stage, and sending the feature maps to the feature recovery module after 4 downsampling operations;
S5: upsampling the feature map enhanced by the feature recovery module back to the original input size through 4 deconvolutions;
S6: after each upsampling, concatenating the output with the latent features from the corresponding encoder layer, and then feeding them together into a residual dense block (RDB) module for feature refinement;
S7: feeding the refined features into a DFF module for adaptive feature fusion.
The discriminator network adopts the PatchGAN structure, and its output is an N × N matrix of patch scores.
The overall objective function adopts the WGAN-GP loss, adapted to the conditional setting as the adversarial training loss:

L_adv(G, D) = E_{I,J}[D(I, J)] − E_I[D(I, G(I))] − λ_GP · E_Ĵ[(‖∇_Ĵ D(I, Ĵ)‖₂ − 1)²] (1)

Introducing the L₁ and L₂ losses into the objective gives:

L₁(G) = E_{I,J}[‖J − G(I)‖₁] (2)

L₂(G) = E_{I,J}[‖J − G(I)‖₂²] (3)

Combining these losses yields the final overall objective function:

min_G max_D L_adv(G, D) + λ₁L₁(G) + λ₂L₂(G) (4)

in the formulas: G is the generator network, D is the discriminator network, I is the hazy image, J is the real image, Ĵ is a sample taken along the line between J and the image G(I) generated by the generator network, λ_GP is a weighting factor on the gradient penalty, and λ₁ and λ₂ weight the L₁ and L₂ losses.
Feature fusion employs a DFF module based on the back-projection technique, which generates high-resolution content by minimizing the reconstruction error between an estimated high-resolution result and a plurality of observed low-resolution inputs.
The feature recovery module comprises 8 residual dense block (RDB) modules; each RDB module comprises densely connected layers, local feature fusion and local residual learning, and operates as follows:
SS1: reading the state of the previous RDB module through a contiguous memory mechanism, passing it to each layer of the current RDB module, and at the same time making full use of all layers in the current RDB module through local dense connections;
SS2: performing local feature fusion through a 1 × 1 convolution layer to adaptively retain the accumulated features;
SS3: further improving the information flow through local residual learning, which improves the representational capability of the network.
The DFF module of the encoder at the n-th stage is defined as:

î_n = DFF_n^E(i_n, {î_1, …, î_{n−1}})

in the formula: i_n is the latent feature of the n-th stage of the encoder, î_n is the enhanced feature obtained by feature fusion, and {î_1, …, î_{n−1}} are all the enhanced features fused by the DFF modules in the first (n−1) stages of the encoder;
the DFF module of the decoder at the n-th stage is defined as:

ĵ_n = DFF_n^D(j_n, {ĵ_1, …, ĵ_{n−1}}), n = 1, …, L

in the formula: j_n is the latent feature of the n-th stage of the decoder, ĵ_n is the enhanced feature obtained by feature fusion, L is the number of feature levels, and {ĵ_1, …, ĵ_{n−1}} are all the enhanced features fused by the DFF modules in the first (n−1) stages of the decoder.
The enhanced features are updated in a progressive manner; the specific updating steps are as follows:
SSS1: at the t-th feature fusion, compute the feedback error between the latent feature and the enhanced feature of the DFF module through formula one;
SSS2: the feedback error is obtained by subtracting the upsampled enhanced feature from the latent feature;
SSS3: the feedback error is then downsampled to the same size as the enhanced feature through formula two and added to the enhanced feature, giving the fused enhanced feature.
Formula one of step SSS1 is as follows:

e_t = i_t − U_t(î^{t−1})

Formula two of step SSS3 is as follows:

î^t = î^{t−1} + D_t(e_t)

in the formulas: e_t is the feedback error, î^t is the enhanced feature after the t-th fusion, U_t is the upsampling operator, i_t is the latent feature, and D_t is the downsampling operator.
The working principle and use of the invention are as follows. In this image defogging method based on a conditional generative adversarial network and multi-scale dense feature fusion, a conditional adversarial network for image defogging is first constructed; the network is then trained to convergence on a public image defogging dataset; finally, the trained generator network is used as the image defogging network, whose input is a hazy image and whose output is the defogged image (a usage sketch follows). The generator network operates as follows: first, the input of the network is set to the hazy image; the hazy image is then sent to the encoder part, where a convolution operation downsamples it into a feature map; at each downsampling stage the feature map is sent into one residual dense block (RDB) module for local dense feature extraction and local feature fusion, the fused feature map is downsampled through a convolution layer with stride 2, and the downsampled feature map is then sent into a DFF module to compensate for the missing spatial information; the number of feature maps doubles at each stage, and after 4 downsampling operations the feature maps are sent to the feature recovery module; the feature map enhanced by the feature recovery module is upsampled back to the original input size through 4 deconvolutions; after each upsampling, the output is concatenated with the latent features from the corresponding encoder layer and fed together into an RDB module for feature refinement; finally, the refined features are fed into a DFF module for adaptive feature fusion. By introducing residual dense blocks and a dense feature fusion module based on the back-projection technique into the generator network, and using them for local feature fusion and multi-scale dense feature fusion respectively, features of different levels are effectively fused and clearer images are obtained step by step, which helps to make the defogged image more natural and to suppress noise.
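As an end-to-end usage illustration of step (3), the following sketch loads a trained generator and defogs a single image; the file names, and the reuse of the GeneratorSketch class from the earlier sketch, are assumptions for illustration.

```python
# Hedged inference sketch: keep only the trained generator and apply it.
import torch
from PIL import Image
from torchvision import transforms

def dehaze(model_path: str, image_path: str) -> Image.Image:
    generator = GeneratorSketch()                     # from the earlier sketch
    generator.load_state_dict(torch.load(model_path, map_location="cpu"))
    generator.eval()

    # the sketch's generator expects H and W divisible by 16
    hazy = transforms.ToTensor()(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        restored = generator(hazy).clamp(0, 1)
    return transforms.ToPILImage()(restored.squeeze(0))

# clear = dehaze("generator.pth", "hazy.png")   # hypothetical file names
```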
The above description covers only preferred embodiments of the invention, and the scope of protection of the invention is not limited thereto; any equivalent substitution or modification that a person skilled in the art could readily conceive within the technical scope disclosed by the invention, according to its technical solutions and inventive concept, shall fall within the scope of protection of the invention.

Claims (9)

1. An image defogging method based on an adversarial network and multi-scale dense feature fusion, characterized by comprising the following specific steps:
(1) constructing a conditional generative adversarial network for image defogging;
(2) training the network to convergence on a public image defogging dataset;
(3) taking the trained generator network as the image defogging network, whose input is a hazy image and whose output is the defogged image.
2. The image defogging method based on an adversarial network and multi-scale dense feature fusion according to claim 1, characterized in that the conditional generative adversarial network of step (1) comprises a generator network, a discriminator network and an overall objective function.
3. The image defogging method based on an adversarial network and multi-scale dense feature fusion according to claim 2, characterized in that the structural design of the generator network is based on the U-Net architecture and comprises an encoding module, a decoding module and a feature recovery module; the encoding module is provided with an encoder and integrates residual dense block (RDB) modules and DFF modules based on the back-projection technique, and the decoding module is provided with a decoder; the generator network operates as follows:
S1: setting the input of the network to the hazy image I;
S2: sending the hazy image to the encoder part, where a convolution operation downsamples the image into a feature map;
S3: at each downsampling stage, sending the feature map into one residual dense block (RDB) module for local dense feature extraction and local feature fusion, then downsampling the fused feature map through a convolution layer with stride 2, and then sending the downsampled feature map into a DFF module to compensate for the missing spatial information;
S4: doubling the number of feature maps at each stage, and sending the feature maps to the feature recovery module after 4 downsampling operations;
S5: upsampling the feature map enhanced by the feature recovery module back to the original input size through 4 deconvolutions;
S6: after each upsampling, concatenating the output with the latent features from the corresponding encoder layer, and then feeding them together into a residual dense block (RDB) module for feature refinement;
S7: feeding the refined features into a DFF module for adaptive feature fusion.
4. The image defogging method based on an adversarial network and multi-scale dense feature fusion according to claim 2, characterized in that the discriminator network adopts the PatchGAN structure and its output is an N × N matrix.
5. The image defogging method based on an adversarial network and multi-scale dense feature fusion as claimed in claim 2, characterized in that the overall objective function adopts the WGAN-GP loss, adapted to the conditional setting as the adversarial training loss:

L_adv(G, D) = E_{I,J}[D(I, J)] − E_I[D(I, G(I))] − λ_GP · E_Ĵ[(‖∇_Ĵ D(I, Ĵ)‖₂ − 1)²] (1)

introducing the L₁ and L₂ losses into the objective gives:

L₁(G) = E_{I,J}[‖J − G(I)‖₁] (2)

L₂(G) = E_{I,J}[‖J − G(I)‖₂²] (3)

combining these losses yields the final overall objective function:

min_G max_D L_adv(G, D) + λ₁L₁(G) + λ₂L₂(G) (4)

in the formulas: G is the generator network, D is the discriminator network, I is the hazy image, J is the real image, Ĵ is a sample taken along the line between J and the image G(I) generated by the generator network, λ_GP is a weighting factor on the gradient penalty, and λ₁ and λ₂ weight the L₁ and L₂ losses.
6. The method of claim 3, wherein feature fusion employs a DFF module based on the back-projection technique, which generates high-resolution content by minimizing the reconstruction error between an estimated high-resolution result and a plurality of observed low-resolution inputs.
7. The image defogging method based on an adversarial network and multi-scale dense feature fusion according to claim 3, characterized in that the feature recovery module comprises 8 residual dense block (RDB) modules, each comprising densely connected layers, local feature fusion and local residual learning, operating as follows:
SS1: reading the state of the previous RDB module through a contiguous memory mechanism, passing it to each layer of the current RDB module, and at the same time making full use of all layers in the current RDB module through local dense connections;
SS2: performing local feature fusion through a 1 × 1 convolution layer to adaptively retain the accumulated features;
SS3: further improving the information flow through local residual learning, which improves the representational capability of the network.
8. The image defogging method based on an adversarial network and multi-scale dense feature fusion as claimed in claim 3, wherein the DFF module of the encoder at the n-th stage is defined as:

î_n = DFF_n^E(i_n, {î_1, …, î_{n−1}})

in the formula: i_n is the latent feature of the n-th stage of the encoder, î_n is the enhanced feature obtained by feature fusion, and {î_1, …, î_{n−1}} are all the enhanced features fused by the DFF modules in the first (n−1) stages of the encoder;
the DFF module of the decoder at the n-th stage is defined as:

ĵ_n = DFF_n^D(j_n, {ĵ_1, …, ĵ_{n−1}}), n = 1, …, L

in the formula: j_n is the latent feature of the n-th stage of the decoder, ĵ_n is the enhanced feature obtained by feature fusion, L is the number of feature levels, and {ĵ_1, …, ĵ_{n−1}} are all the enhanced features fused by the DFF modules in the first (n−1) stages of the decoder.
9. The image defogging method based on an adversarial network and multi-scale dense feature fusion as claimed in claim 8, wherein the enhanced features are updated in a progressive manner, the specific updating steps being:
SSS1: at the t-th feature fusion, compute the feedback error between the latent feature and the enhanced feature of the DFF module through formula one;
SSS2: the feedback error is obtained by subtracting the upsampled enhanced feature from the latent feature;
SSS3: the feedback error is then downsampled to the same size as the enhanced feature through formula two and added to the enhanced feature, giving the fused enhanced feature;
formula one of step SSS1 is as follows:

e_t = i_t − U_t(î^{t−1})

formula two of step SSS3 is as follows:

î^t = î^{t−1} + D_t(e_t)

in the formulas: e_t is the feedback error, î^t is the enhanced feature after the t-th fusion, U_t is the upsampling operator, i_t is the latent feature, and D_t is the downsampling operator.
CN202110209279.4A 2021-02-24 2021-02-24 Image defogging method based on adversarial network and multi-scale dense feature fusion Pending CN112950498A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110209279.4A CN112950498A (en) Image defogging method based on adversarial network and multi-scale dense feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110209279.4A CN112950498A (en) Image defogging method based on adversarial network and multi-scale dense feature fusion

Publications (1)

Publication Number Publication Date
CN112950498A 2021-06-11

Family

ID=76246035

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110209279.4A Pending CN112950498A (en) 2021-02-24 2021-02-24 Image defogging method based on countermeasure network and multi-scale dense feature fusion

Country Status (1)

Country Link
CN (1) CN112950498A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113496472A (en) * 2021-06-24 2021-10-12 中汽创智科技有限公司 Image defogging model construction method, road image defogging device and vehicle
CN115331083A (en) * 2022-10-13 2022-11-11 齐鲁工业大学 Image rain removing method and system based on gradual dense feature fusion rain removing network



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination