CN115587946A - Remote sensing image defogging method based on multi-scale network - Google Patents

Remote sensing image defogging method based on multi-scale network

Info

Publication number
CN115587946A
Authority
CN
China
Prior art keywords
image
atmospheric
scale
remote sensing
transmittance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211245651.8A
Other languages
Chinese (zh)
Inventor
宋文韬
于艺
曹坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Electric Rice Information System Co ltd
Original Assignee
China Electric Rice Information System Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Electric Rice Information System Co ltd filed Critical China Electric Rice Information System Co ltd
Priority to CN202211245651.8A priority Critical patent/CN115587946A/en
Publication of CN115587946A publication Critical patent/CN115587946A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10036 Multispectral image; Hyperspectral image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a remote sensing image defogging method based on a multi-scale network, comprising the following steps: extracting multi-level, multi-scale optical features of the satellite remote sensing image through network training; computing a transmittance map that reflects the atmospheric conditions at the imaging moment by integrating the multi-scale features; and recovering the fog-free remote sensing image according to the atmospheric scattering model. The method preprocesses the remote sensing image with radiometric calibration, atmospheric correction, resampling and reprojection, ensuring the reliability of the data source; builds a multi-scale parallel neural network model from a feature extraction layer, multi-scale mapping, local polarization and a loss function; calculates the atmospheric transmittance at each point of the image and integrates them into a transmittance map; derives the atmospheric light constant by analyzing the transmittance map; and restores the fog-free image from the atmospheric transmittance and the atmospheric light constant. The method preserves the spatial and radiometric characteristics of the image well, is self-adaptive, and requires no auxiliary prior information.

Description

Remote sensing image defogging method based on multi-scale network
Technical Field
The invention relates to a remote sensing image defogging method, in particular to a remote sensing image defogging method based on a multi-scale network.
Background
With the continuous development of domestic commercial satellites, satellite imagery is now widely applied in earth science, military, agriculture, forestry, hydrology, environmental protection and other fields. When an optical remote sensing satellite images the ground, haze composed of water vapor, ice, dust and other small particles along the radiative transfer path attenuates the radiant energy reflected from the earth's surface before it reaches the detector, blurring the remote sensing image, markedly reducing its sharpness and contrast, and seriously hampering subsequent processing and application of the data.
Defogging methods for satellite remote sensing images fall mainly into three categories: time-series observation, image enhancement, and physical models. Most of these methods, however, perform defogging from either image information alone or a simplified physical model, and each choice has its drawbacks. The former (see Qi Q, Zhang C, Yuan Q, et al. An Adaptive Haze Removal Method for Single Remotely Sensed Image Considering the Spatial and Spectral Properties [J]. Geomatics and Information Science of Wuhan University, 2019, 44(09): 1369-1376.) concentrates on the spatial information of the image itself, lacks the support of a haze imaging mechanism, preserves the radiometric information carried by the remote sensing image poorly, and tends to produce distortion, supersaturation and similar artifacts. The latter (see Dai S B, Xu W, Piao Y J, et al. Remote Sensing Image Defogging Based on Dark Channel Prior [J]. Acta Optica Sinica, 2017, 37(03): 348-354.) errs in the opposite direction: it depends too heavily on the physics of atmospheric scattering and transmission, requires prior knowledge, and its defogging result is strongly affected by the parameter estimation of the atmospheric scattering model; because it makes little use of the image's own information, the recovered image suffers from problems such as spatially discontinuous information.
Disclosure of Invention
Purpose of the invention: to address the deficiencies of the prior art, the invention provides a remote sensing image defogging method based on a multi-scale network.
In order to solve the technical problem, the invention discloses a remote sensing image defogging method based on a multi-scale network, which comprises the following steps:
step 1: data acquisition and preprocessing; the data acquisition comprises: acquiring an input image to obtain a remote sensing multispectral image; preprocessing the remote sensing multispectral image to obtain a processed image;
step 2: establishing a multi-scale parallel neural network model, extracting multi-scale features of the processed image of step 1 with the model, mining the mapping relation between the multi-scale features and the atmospheric transmittance, and then calculating the atmospheric transmittance of each point on the processed image; integrating the per-point atmospheric transmittances to obtain an atmospheric transmittance image;
step 3: calculating the atmospheric light constant; taking the atmospheric transmittance image calculated and integrated in step 2 as the spatial distribution of haze, and calculating the atmospheric light constant from that spatial distribution;
step 4: restoring the image; restoring the input image according to the atmospheric transmittance obtained in step 2 and the atmospheric light constant obtained in step 3 to obtain a fog-free image.
The preprocessing in step 1 comprises: radiometric calibration, atmospheric correction, resampling, and reprojection.
The method for calculating the atmospheric transmittance in step 2 comprises the following steps:
step 2-1: feature extraction;
step 2-2: multi-scale mapping;
step 2-3: local polarization;
step 2-4: constructing a loss function and training the multi-scale parallel neural network;
step 2-5: calculating the atmospheric transmittance with the trained multi-scale parallel neural network.
The method for extracting the features in the step 2-1 comprises the following steps:
A Maxout convolutional layer (see Goodfellow I J, Warde-Farley D, Mirza M, et al. Maxout Networks [J]. 2013.) is used as the feature extraction layer of the multi-scale parallel neural network; its functional form is:
f_m^j(x) = max_{i∈[1,k]} ( w_m^{i,j} * f_{m-1}^i(x) ),  j = 1, …, n

where j indexes the feature maps of the m-th layer, i indexes the feature maps of the (m-1)-th layer, and * denotes the convolution operation; n is the number of output feature maps of layer m, k is the number of candidate maps required by the Maxout convolutional layer, f_m^j(x) denotes the m-th-layer feature map of the j-th neuron, and f_{m-1}^i(x) denotes the (m-1)-th-layer feature map of the i-th neuron.
The method for multi-scale mapping described in step 2-2 comprises:
the image features of the input image are computed at multiple spatial scales, i.e., multi-scale features.
The method of local polarization described in step 2-3 includes:
The image features computed in step 2-2 are processed with a local extremum method; the expression for local polarization is as follows:
f_m^i(x) = max_{y∈Ω(x)} f_{m-1}^i(y)

where f_m^i(x) denotes the m-th-layer feature map of the i-th neuron, and Ω(x) is an l_m × l_m region centered at pixel x.
The method for constructing the loss function in the step 2-4 comprises the following steps:
The L2 norm of pixel-wise differences is adopted as the loss function, in the following specific form:
L = (1/N) Σ_{i=1}^{N} ( t̂_i - t_i )²

where t denotes the true atmospheric transmittance, t̂ denotes the atmospheric transmittance calculated by the multi-scale parallel neural network, and N is the number of samples input to the multi-scale parallel neural network for each round of training.
The method for calculating the atmospheric light constant in the step 3 comprises the following steps:
A position of an area is selected on the atmospheric transmittance image, and the atmospheric light constant A is then calculated from the pixel values at that position in the processed image of step 1, as follows:
I(x)=J(x)t(x)+A[1-t(x)]
wherein I(x) is the processed image of step 1; J(x) is the fog-free image to be restored; t(x) is the atmospheric transmittance calculated in step 2; x represents the image coordinates; and A is the atmospheric light constant.
The method for restoring the image in the step 4 comprises the following steps:
the fog-free image is restored by:
J(x) = ( I(x) - A ) / max( t(x), t_0 ) + A

where t_0 is the lower limit of the transmittance.
The position of an area on the atmospheric transmittance image in step 3 is selected as follows: the position of the darkest 0.1% of the image, by neighborhood average, is selected on the atmospheric transmittance image.
Beneficial effects:
compared with the traditional method, the method has better effects in the aspects of image similarity, color authenticity, ground feature detail recovery and the like, can better keep the spatial characteristics and radiation characteristics of the image, and has the advantages of self-adaption, no need of auxiliary prior information and the like.
Drawings
The foregoing and/or other advantages of the invention will become more apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.
FIG. 1 is a schematic flow chart of an embodiment of the present invention.
FIG. 2 is a schematic view of the defogging process of the present invention.
FIG. 3 is a graph showing a comparison of defogging results of different methods.
FIG. 4 is a schematic diagram of local object image contrast for different defogging methods.
Detailed Description
The invention provides a remote sensing image defogging method based on a multi-scale network, namely a satellite remote sensing image defogging method based on a Multi-Scale Parallel convolutional Neural Network (MSPNN), with the following technical scheme:
according to the method, a domestic high-resolution array satellite is selected as experimental data, the image characteristics of the remote sensing image are extracted through a multi-scale parallel neural network algorithm, the multi-scale spatial characteristics of the remote sensing image to be recovered are calculated, the transmissivity image and the atmospheric light constant of each channel are further calculated, complete image defogging is realized according to an atmospheric scattering model, and a result map is formed.
(1) Data acquisition and processing
A remote sensing multispectral image is acquired, radiometrically calibrated and atmospherically corrected (see Liu Rong et al. Preprocessing of remote sensing images [J]. Journal of Jilin University (Science Edition), 2007(04): 6-10.), then resampled and reprojected to obtain the processed image; a minimal sketch of the calibration step follows.
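The patent gives no formulas for the preprocessing chain, but radiometric calibration is conventionally a per-band linear transform of the raw digital numbers. A minimal NumPy sketch of that single step is shown below; the function name, the gain and offset values, and the 4-band layout are illustrative assumptions, and atmospheric correction, resampling and reprojection would follow with standard remote sensing tools.

```python
import numpy as np

def radiometric_calibration(dn, gain, offset):
    """Convert raw digital numbers (DN) to top-of-atmosphere radiance.

    dn     : 2-D array of raw sensor counts for one band
    gain   : band-specific calibration gain (from the product metadata)
    offset : band-specific calibration offset (from the product metadata)
    """
    return gain * dn.astype(np.float64) + offset

# Example: calibrate a synthetic 4-band image block band by band.
dn_block = np.random.randint(0, 1024, size=(4, 320, 320))
gains = [0.18, 0.16, 0.17, 0.19]   # placeholder values, not real metadata
offsets = [0.0, 0.0, 0.0, 0.0]
radiance = np.stack([radiometric_calibration(dn_block[b], gains[b], offsets[b])
                     for b in range(4)])
```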
(2) Multi-scale parallel neural network
Step 2-1: feature extraction layer. The feature extraction layer is the first layer of the MSPNN and preliminarily extracts haze features from the image; specifically, a Maxout convolutional layer is used as the feature extraction layer. The functional form of the Maxout layer is:
f_m^j(x) = max_{i∈[1,k]} ( w_m^{i,j} * f_{m-1}^i(x) ),  j = 1, …, n

where j indexes the feature maps of the m-th layer, i indexes the feature maps of the (m-1)-th layer, and * denotes the convolution operation; n is the number of output feature maps of layer m.
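A minimal PyTorch sketch of a Maxout convolutional layer of this form follows; the kernel size, channel counts and input shape are illustrative assumptions, not the patent's actual configuration.

```python
import torch
import torch.nn as nn

class MaxoutConv2d(nn.Module):
    """Maxout convolutional layer: k candidate maps are produced for each
    output feature map and reduced by an element-wise maximum
    (Goodfellow et al., 2013)."""

    def __init__(self, in_channels, out_channels, k, kernel_size=3):
        super().__init__()
        self.k = k
        self.out_channels = out_channels
        # A single convolution producing k candidates per output map.
        self.conv = nn.Conv2d(in_channels, out_channels * k,
                              kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        y = self.conv(x)                                # (B, n*k, H, W)
        b, _, h, w = y.shape
        y = y.view(b, self.out_channels, self.k, h, w)  # split into k pieces
        return y.max(dim=2).values                      # max over the pieces

# Example: a 4-band image block -> n = 16 feature maps with k = 4 pieces each.
feat = MaxoutConv2d(4, 16, k=4)(torch.randn(1, 4, 64, 64))
print(feat.shape)  # torch.Size([1, 16, 64, 64])
```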
Step 2-2: multi-scale mapping. The multi-scale features are computed densely from the input image at several spatial scales; the parallel multi-scale convolutional network structure improves feature extraction and enhances sparsity.
Step 2-3: local polarization. Haze is generally locally continuous on an image, so the features of the previous layer are processed with a neighborhood maximum; taking a local extremum also effectively suppresses noise introduced during transmission. The expression for local polarization is as follows:
f_m^i(x) = max_{y∈Ω(x)} f_{m-1}^i(y)

where f_m^i(x) is the m-th-layer feature map of the i-th neuron, and Ω(x) is an l_m × l_m region centered at pixel x. In contrast to max pooling in convolutional neural networks, which typically reduces the resolution of the feature map, the local extremum operation here is applied densely at every feature-map pixel, so the resolution is preserved for image restoration. It also reduces the number of model parameters and guards against over-fitting.
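The dense neighborhood maximum (stride 1, output resolution equal to input resolution) can be sketched with SciPy; the window size l_m is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def local_extremum(feature_maps, l_m):
    """Dense l_m x l_m neighborhood maximum over each feature map.

    Unlike strided max pooling, the output keeps the input resolution,
    which matters when the transmittance map must cover the full image."""
    # size=(1, l_m, l_m): maximize within each map, never across maps.
    return maximum_filter(feature_maps, size=(1, l_m, l_m), mode="nearest")

feats = np.random.rand(16, 64, 64)   # 16 feature maps from the previous layer
out = local_extremum(feats, l_m=7)
assert out.shape == feats.shape      # resolution preserved
```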
Step 2-4: loss function. The goal of MSPNN training is to compute the atmospheric transmittance from the multi-scale deep features of the image, so the L2 norm of pixel-wise differences is adopted as the loss function for training the network model, in the following form:
L = (1/N) Σ_{i=1}^{N} ( t̂_i - t_i )²

where t denotes the true transmittance, t̂ denotes the transmittance calculated by the network model, and N is the number of samples input to the model in each round of training.
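Over a training batch this loss is just the mean squared error between the predicted and the true transmittance; a minimal PyTorch sketch with illustrative shapes:

```python
import torch
import torch.nn.functional as F

t_true = torch.rand(8, 1)   # ground-truth transmittance for a batch of N = 8
t_pred = torch.rand(8, 1)   # transmittance predicted by the network

# Pixel-wise L2 loss: mean squared difference over the batch.
loss = F.mse_loss(t_pred, t_true)
```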
Step 2-5: and obtaining the multi-scale parallel neural network through the training in the steps, and calculating to obtain the atmospheric transmittance according to the trained multi-scale parallel neural network.
(3) Calculating atmospheric light constant
After the transmittance t(x) is obtained, the atmospheric light constant A is needed to restore the fog-free image. The atmospheric light constant generally corresponds to the region where haze is densest, and the transmittance map produced by the MSPNN can be regarded as the spatial distribution of haze, so the constant can be calculated from that distribution. The position of the darkest 0.1% of the transmittance image, by neighborhood average, is selected, and the calculated value of the atmospheric light constant A is then obtained from the pixel values at that position in the original haze image using the following formula.
I(x)=J(x)t(x)+A[1-t(x)]
Wherein I (x) is an image disturbed by haze; j (x) is a fog-free image needing to be restored; t (x) is the transmission of light through the atmospheric medium; x represents the image coordinates; a is the global atmospheric light constant.
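A NumPy sketch of this selection step follows: the darkest 0.1% of the transmittance map (by neighborhood average) is located, and A is read out from the haze image at those positions. The window size of the neighborhood average and the per-band readout are assumptions; the patent does not state the window size and calls A a constant, though it computes a transmittance image and atmospheric light for each channel.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def estimate_atmospheric_light(hazy, t_map, frac=0.001, win=15):
    """Estimate the atmospheric light A from the transmittance map.

    hazy  : (H, W, C) haze-disturbed image I(x)
    t_map : (H, W) transmittance map produced by the network
    frac  : fraction of pixels to use (0.1 % in the patent)
    win   : neighborhood-average window size (an assumption)
    """
    smoothed = uniform_filter(t_map, size=win)    # neighborhood average
    n = max(1, int(frac * t_map.size))
    idx = np.argsort(smoothed, axis=None)[:n]     # darkest 0.1 % positions
    ys, xs = np.unravel_index(idx, t_map.shape)
    return hazy[ys, xs].mean(axis=0)              # one value per band

hazy = np.random.rand(256, 256, 4)
t_map = np.random.rand(256, 256)
A = estimate_atmospheric_light(hazy, t_map)
```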
(4) Haze image restoration
After the values of the transmittance t(x) and the atmospheric light constant A are obtained, the haze-free image is restored by the following formula, in which t_0 is a lower limit on the transmittance, set so that the recovered defogged image does not shift toward a white field when the estimated transmittance is too small:

J(x) = ( I(x) - A ) / max( t(x), t_0 ) + A
Compared with the traditional method, the method can better maintain the spatial characteristics and the radiation characteristics of the image, and has the advantages of self-adaption, no need of auxiliary prior information and the like.
Example:
The main process is as follows: a Gaofen-1 (GF-1) image is selected; image features of the remote sensing image are extracted with the multi-scale parallel neural network algorithm; the multi-scale spatial features of the remote sensing image to be recovered are computed; the transmittance image and atmospheric light constant of each channel are then calculated; and complete image defogging is performed according to the atmospheric scattering model to form the result image.
(1) Data acquisition and processing
The method selects 5 cloud-free, fog-free Gaofen-1 images to build the data set; the specific information is shown in Table 1, where the first column is the image serial number, the second the acquisition time, the third the longitude and latitude of the image's upper-left corner, and the fourth the longitude and latitude of its lower-right corner.
Table 1. Experimental image information
[Table 1 appears only as an image in the original publication and is not reproduced here.]
Each image is cropped into 5 sub-images of 320 × 320, and these are divided evenly into 10000 image blocks of 16 × 16. For each block, 10 medium transmittances t ∈ (0, 1) are sampled uniformly at random and used to synthesize 10 foggy images via the atmospheric scattering model, giving 100000 foggy synthetic images in total; 80% serve as the training set and 20% as the test set.
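Under the atmospheric scattering model I = J·t + A·(1 - t), this synthesis can be sketched as below. The normalization A = 1.0 and the use of one constant t per 16 × 16 block are assumptions consistent with the description.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize_hazy(clear_block, A=1.0, n_samples=10):
    """Make n_samples foggy versions of one clear block via
    I = J*t + A*(1 - t), with t drawn uniformly from (0, 1)."""
    pairs = []
    for _ in range(n_samples):
        t = rng.uniform(0.0, 1.0)
        pairs.append((clear_block * t + A * (1.0 - t), t))  # (hazy, label)
    return pairs

clear = rng.random((16, 16, 4))   # one clear multispectral block
pairs = synthesize_hazy(clear)    # 10 (hazy block, transmittance) pairs
```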
(2) Multi-scale parallel neural network
Step 2-1: feature extraction layer. The feature extraction layer is the first layer of the MSPNN and preliminarily extracts haze features from the image; specifically, a Maxout convolutional layer is used as the feature extraction layer. The functional form of the Maxout layer is:
f_m^j(x) = max_{i∈[1,k]} ( w_m^{i,j} * f_{m-1}^i(x) ),  j = 1, …, n

where j indexes the feature maps of the m-th layer, i indexes the feature maps of the (m-1)-th layer, and * denotes the convolution operation; n is the number of output feature maps of layer m.
Step 2-2: in designing the parallel network structure of the MSPNN, parallel convolutional branches at three scales (3 × 3, 5 × 5 and 7 × 7) are first determined, and stacks of 3 × 3 convolutions are substituted for the 5 × 5 and 7 × 7 layers, preserving strong feature extraction while reducing training complexity. A further parallel 3 × 3 branch is added, followed by 1 × 3 and 3 × 1 convolutional layers to extract additional information. This four-branch parallel network fully extracts the deep feature information of the foggy image; a sketch of such a block follows.
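A hedged PyTorch sketch of such a four-branch block is shown below. The channel widths, the in-block activation and the concatenation of branch outputs are illustrative assumptions; the patent's exact configuration survives only in Table 2, which is an image.

```python
import torch
import torch.nn as nn

class MultiScaleParallelBlock(nn.Module):
    """Four parallel branches as described above: one 3x3 branch, two
    stacked 3x3 convolutions standing in for 5x5, three stacked 3x3 for
    7x7, and a 3x3 branch refined by 1x3 and 3x1 convolutions."""

    def __init__(self, c_in, c_branch=8):
        super().__init__()
        def conv3(ci, co):
            return nn.Conv2d(ci, co, 3, padding=1)
        self.b1 = conv3(c_in, c_branch)                     # 3x3
        self.b2 = nn.Sequential(conv3(c_in, c_branch),
                                conv3(c_branch, c_branch))  # ~5x5
        self.b3 = nn.Sequential(conv3(c_in, c_branch),
                                conv3(c_branch, c_branch),
                                conv3(c_branch, c_branch))  # ~7x7
        self.b4 = nn.Sequential(conv3(c_in, c_branch),
                                nn.Conv2d(c_branch, c_branch, (1, 3), padding=(0, 1)),
                                nn.Conv2d(c_branch, c_branch, (3, 1), padding=(1, 0)))
        self.act = nn.ReLU(inplace=True)   # in-block activation (assumed)

    def forward(self, x):
        return self.act(torch.cat([self.b1(x), self.b2(x),
                                   self.b3(x), self.b4(x)], dim=1))

y = MultiScaleParallelBlock(16)(torch.randn(1, 16, 64, 64))
print(y.shape)  # torch.Size([1, 32, 64, 64])
```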
Step 2-3: local polarization. Haze is generally locally continuous on an image, so the features of the previous layer are processed with a neighborhood maximum; taking a local extremum also effectively suppresses noise introduced during transmission. The expression for local polarization is as follows:
f_m^i(x) = max_{y∈Ω(x)} f_{m-1}^i(y)

where f_m^i(x) is the m-th-layer feature map of the i-th neuron, and Ω(x) is an l_m × l_m region centered at pixel x. In contrast to max pooling in convolutional neural networks, which typically reduces the resolution of the feature map, the local extremum operation here is applied densely at every feature-map pixel, so the resolution is preserved for image restoration. It also reduces the number of model parameters and guards against over-fitting.
Step 2-4: loss function. The goal of MSPNN training is to compute the atmospheric transmittance from the multi-scale deep features of the image, so the L2 norm of pixel-wise differences is adopted as the loss function for training the network model, in the following form:
L = (1/N) Σ_{i=1}^{N} ( t̂_i - t_i )²

where t denotes the true transmittance, t̂ denotes the transmittance calculated by the network model, and N is the number of samples input to the model in each round of training.
Step 2-5: the detailed configuration and parameter settings of the trained multi-scale parallel neural network are shown in Table 2, where the first column lists the neurons, the second the function type, the third the input size, and the fourth the number of convolution kernels. The network comprises 5 convolutional layers and 1 max-pooling layer, with Maxout and PReLU used as activation functions after the first and last convolutions, respectively; the transmittance value t(x) is finally output through a global average pooling layer. A hedged skeleton of this structure is sketched after Table 2.
Table 2. Detailed structure of the defogging network
[Table 2 appears only as an image in the original publication and is not reproduced here.]
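Since Table 2 survives only as an image, the following PyTorch skeleton merely instantiates its textual description: five convolutions plus one max-pooling layer, Maxout after the first convolution, PReLU after the last, and global average pooling emitting t(x). Every kernel size, channel width and the final sigmoid are assumptions.

```python
import torch
import torch.nn as nn

class MSPNNSketch(nn.Module):
    """Skeleton following the textual description of Table 2; not the
    patent's actual configuration."""

    def __init__(self, bands=4, k=4):
        super().__init__()
        self.k = k
        self.conv1 = nn.Conv2d(bands, 16 * k, 3, padding=1)  # Maxout: 16 maps, k pieces
        self.mid = nn.Sequential(
            nn.Conv2d(16, 16, 3, padding=1),
            nn.Conv2d(16, 16, 5, padding=2),
            nn.Conv2d(16, 16, 7, padding=3),
            nn.MaxPool2d(2),                                 # the single pooling layer
        )
        self.conv5 = nn.Conv2d(16, 1, 3, padding=1)
        self.prelu = nn.PReLU()
        self.gap = nn.AdaptiveAvgPool2d(1)                   # global average pooling

    def forward(self, x):
        y = self.conv1(x)
        b, _, h, w = y.shape
        y = y.view(b, 16, self.k, h, w).max(dim=2).values    # Maxout activation
        y = self.mid(y)
        y = self.prelu(self.conv5(y))
        return torch.sigmoid(self.gap(y)).flatten(1)         # t(x) in (0, 1)

t = MSPNNSketch()(torch.randn(2, 4, 16, 16))
print(t.shape)  # torch.Size([2, 1])
```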
(3) Calculating atmospheric light constant
The atmospheric light constant generally corresponds to the region with the densest haze, and the transmittance map obtained by the MSPNN can be regarded as the spatial distribution of haze, so the constant is calculated from that distribution as follows: the position of the darkest 0.1% of the transmittance image, by neighborhood average, is selected, and the calculated value of the atmospheric light constant A is obtained from the pixel values at that position in the original haze image using the following formula.
I(x)=J(x)t(x)+A[1-t(x)]
Wherein I (x) is an image disturbed by haze; j (x) is a fog-free image needing to be restored; t (x) is the transmission of light through the atmospheric medium; x represents the image coordinates; a is the global atmospheric light constant.
(4) Haze image restoration
After the values of the transmittance t(x) and the atmospheric light constant A are found, the haze-free image is restored by the following equation, in which t_0 is a lower limit on the transmittance, set so that the defogged image does not shift toward a white field when the estimated transmittance is too small; in the invention t_0 = 0.1:

J(x) = ( I(x) - A ) / max( t(x), t_0 ) + A
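A minimal NumPy sketch of this recovery step, assuming a per-band atmospheric light vector A and the t_0 = 0.1 floor stated above:

```python
import numpy as np

def recover_scene(hazy, t_map, A, t0=0.1):
    """Restore J(x) = (I(x) - A) / max(t(x), t0) + A.

    t0 is the lower limit on the transmittance (0.1 in the patent); it
    keeps the result from blowing out toward white where t(x) is tiny."""
    t = np.maximum(t_map, t0)[..., np.newaxis]   # broadcast over bands
    return (hazy - A) / t + A

hazy = np.random.rand(256, 256, 4)
t_map = np.random.rand(256, 256)
A = np.array([0.9, 0.9, 0.9, 0.9])               # illustrative per-band A
J = recover_scene(hazy, t_map, A)
```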
The defogging process is illustrated in Fig. 2, where the first row shows the original haze images, the second row the transmittance maps obtained by the network, and the third row the defogged images.
To verify the application effect of the method, three aspects (texture, color and radiometry) are evaluated, and the method is compared against the traditional FVR (fast visibility restoration) method, the DCP (dark channel prior) method, and the Retinex color constancy method. Fig. 3 shows the defogging results of the different methods: the first column is the original haze image; the second column the result of the Retinex color constancy method; the third column the result of the FVR method; the fourth column the result of the DCP method; and the fifth column the result of the method of the present invention.
Three quantitative indexes, average gradient (G), standard deviation (STD) and information entropy (E), are selected to analyze quantitatively how well each method recovers texture features (a short sketch computing these indexes follows Table 3). The average gradient reflects the contrast of fine detail and the texture variation in the image; the standard deviation measures the dispersion of pixel gray values around their mean; and the information entropy is the average information content of the image, measuring the amount of information from the standpoint of information theory: the larger the entropy, the more information the image contains. The information entropy is defined as follows:
E = - Σ_{l=0}^{L-1} P(l) log2 P(l)
where L is the total number of pixel-value levels, l a pixel value, and P(l) the probability of that pixel value occurring. Table 3 averages the three indexes over the experimental groups of Fig. 3; the first row names each method, and the first column lists the average gradient (G), standard deviation (STD) and information entropy (E). Overall the method of the invention outperforms the other methods on every index and improves greatly on the haze image; images processed by the method show clear detail and rich color.
Table 3. Quantitative indexes of the defogging results
[Table 3 appears only as an image in the original publication and is not reproduced here.]
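A NumPy sketch of the three indexes, assuming an 8-bit gray-level image. The average-gradient formula below is one common remote sensing definition and is an assumption; the patent does not spell it out.

```python
import numpy as np

def average_gradient(img):
    """Mean local gradient magnitude; larger G means crisper detail."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def entropy(img, levels=256):
    """Shannon entropy E = -sum P(l) log2 P(l) of the gray-level histogram."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

gray = np.random.randint(0, 256, size=(256, 256))
print(average_gradient(gray), gray.std(), entropy(gray))   # G, STD, E
```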
Experimental results show that the Retinex color constancy method and FVR both recover the haze image but exhibit a degree of color cast. The dark channel prior method and the method of the invention show no obvious color cast, although the dark channel prior method leaves haze residue on some images. The MSPNN-based defogging outperforms the traditional methods in color fidelity and saturation.
In addition, the method of the invention restores ground-feature detail well. Fig. 4 shows local ground-feature images processed by the several defogging methods: the first column is the original haze image; the second column the result of the Retinex color constancy method; the third column the result of the FVR method; the fourth column the result of the DCP method; and the fifth column the result of the method of the invention. The other methods overexpose bright ground features and lose detail to some degree, whereas the method of the invention preserves the detail of the ground features better.
In specific implementation, the application provides a computer storage medium and a corresponding data processing unit. The computer storage medium can store a computer program which, when executed by the data processing unit, runs the inventive content of the remote sensing image defogging method based on a multi-scale network and some or all of the steps of each embodiment provided by the invention. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
It is obvious to those skilled in the art that the technical solutions in the embodiments of the invention can be implemented by means of a computer program and a corresponding general-purpose hardware platform. Based on this understanding, the technical solutions may be embodied as a computer program, i.e., a software product, which can be stored in a storage medium and includes several instructions that enable a device containing a data processing unit (which may be a personal computer, a server, a single-chip microcomputer (MCU), or a network device) to execute the methods of the embodiments of the invention or parts thereof.
The invention provides a remote sensing image defogging method based on a multi-scale network. The above description is only a preferred embodiment of the invention; it should be noted that those skilled in the art can make various modifications and refinements without departing from the principle of the invention, and these should also be regarded as falling within the protection scope of the invention. All components not specified in this embodiment can be implemented with the prior art.

Claims (10)

1. A remote sensing image defogging method based on a multi-scale network is characterized by comprising the following steps:
step 1: data acquisition and preprocessing; the data acquisition comprises: acquiring an input image to obtain a remote sensing multispectral image; preprocessing the remote sensing multispectral image to obtain a processed image;
step 2: establishing a multi-scale parallel neural network model, extracting multi-scale features of the processed image of step 1 with the model, mining the mapping relation between the multi-scale features and the atmospheric transmittance, and then calculating the atmospheric transmittance of each point on the processed image; integrating the per-point atmospheric transmittances to obtain an atmospheric transmittance image;
step 3: calculating the atmospheric light constant; taking the atmospheric transmittance image calculated and integrated in step 2 as the spatial distribution of haze, and calculating the atmospheric light constant from that spatial distribution;
step 4: restoring the image; restoring the input image according to the atmospheric transmittance obtained in step 2 and the atmospheric light constant obtained in step 3 to obtain a fog-free image.
2. The remote sensing image defogging method based on the multi-scale network according to claim 1, wherein the preprocessing in the step 1 comprises: radiometric calibration, atmospheric correction, resampling, and reprojection.
3. The remote sensing image defogging method based on the multi-scale network according to claim 2, wherein the method for calculating the atmospheric transmittance in the step 2 comprises the following steps:
step 2-1: feature extraction;
step 2-2: multi-scale mapping;
step 2-3: local polarization;
step 2-4: constructing a loss function, and training the multi-scale parallel neural network;
step 2-5: calculating the atmospheric transmittance with the trained multi-scale parallel neural network.
4. The remote sensing image defogging method based on the multi-scale network as claimed in claim 3, wherein the feature extraction method in the step 2-1 comprises:
and adopting a Maxout convolutional layer as a characteristic extraction layer of the multi-scale parallel neural network, wherein the function form of the Maxout layer is as follows:
f_m^j(x) = max_{i∈[1,k]} ( w_m^{i,j} * f_{m-1}^i(x) ),  j = 1, …, n

wherein j represents the feature maps of the m-th layer, i represents the feature maps of the (m-1)-th layer, and * represents the convolution operation; n represents the number of output feature maps of layer m, k represents the number of candidate maps required by the Maxout convolutional layer, f_m^j(x) represents the m-th-layer feature map of the j-th neuron, and f_{m-1}^i(x) represents the (m-1)-th-layer feature map of the i-th neuron.
5. The remote sensing image defogging method based on the multi-scale network as claimed in claim 4, wherein the multi-scale mapping method in step 2-2 comprises:
the image features of the input image are computed over multiple spatial scales, i.e., multi-scale features.
6. The remote sensing image defogging method based on the multi-scale network as claimed in claim 5, wherein the local polarization method in step 2-3 comprises:
processing the image features calculated in step 2-2 with a local extremum method, wherein the expression for local polarization is as follows:
f_m^i(x) = max_{y∈Ω(x)} f_{m-1}^i(y)

wherein f_m^i(x) represents the m-th-layer feature map of the i-th neuron, and Ω(x) is an l_m × l_m region centered at pixel x.
7. The remote-sensing image defogging method based on the multi-scale network as claimed in claim 6, wherein the method for constructing the loss function in the step 2-4 comprises the following steps:
the L2 norm of pixel-wise differences is adopted as the loss function, in the following specific form:
L = (1/N) Σ_{i=1}^{N} ( t̂_i - t_i )²

wherein t represents the true atmospheric transmittance, t̂ represents the atmospheric transmittance calculated by the multi-scale parallel neural network, and N is the number of samples input to the multi-scale parallel neural network for each round of training.
8. The remote sensing image defogging method based on the multi-scale network according to claim 7, wherein the method for calculating the atmospheric light constant in the step 3 comprises the following steps:
selecting a position of an area on the atmospheric transmittance image, and then calculating the atmospheric light constant A from the pixel values at that position in the processed image of step 1, specifically:
I(x)=J(x)t(x)+A[1-t(x)]
wherein I(x) is the processed image of step 1; J(x) is the fog-free image to be restored; t(x) is the atmospheric transmittance; x represents the image coordinates; and A is the atmospheric light constant.
9. The remote sensing image defogging method based on the multi-scale network as claimed in claim 8, wherein the image restoration method in the step 4 comprises:
the fog-free image is restored by:
J(x) = ( I(x) - A ) / max( t(x), t_0 ) + A

wherein t_0 is the lower limit of the transmittance.
10. The remote sensing image defogging method based on the multi-scale network according to claim 9, wherein the position of an area on the atmospheric transmittance image is selected in step 3 by: selecting, on the atmospheric transmittance image, the position of the darkest 0.1% of the image by neighborhood average.
CN202211245651.8A 2022-10-12 2022-10-12 Remote sensing image defogging method based on multi-scale network Pending CN115587946A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211245651.8A CN115587946A (en) 2022-10-12 2022-10-12 Remote sensing image defogging method based on multi-scale network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211245651.8A CN115587946A (en) 2022-10-12 2022-10-12 Remote sensing image defogging method based on multi-scale network

Publications (1)

Publication Number Publication Date
CN115587946A true CN115587946A (en) 2023-01-10

Family

ID=84779533

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211245651.8A Pending CN115587946A (en) 2022-10-12 2022-10-12 Remote sensing image defogging method based on multi-scale network

Country Status (1)

Country Link
CN (1) CN115587946A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116109513A (en) * 2023-02-27 2023-05-12 南京林业大学 Image defogging method based on local ambient light projection constant priori
CN117474801A (en) * 2023-10-30 2024-01-30 安徽大学 Non-uniform remote sensing video image defogging method integrating space-time frequency information
CN117474801B (en) * 2023-10-30 2024-05-07 安徽大学 Non-uniform remote sensing video image defogging method integrating space-time frequency information

Similar Documents

Publication Publication Date Title
Meraner et al. Cloud removal in Sentinel-2 imagery using a deep residual neural network and SAR-optical data fusion
CN108921799B (en) Remote sensing image thin cloud removing method based on multi-scale collaborative learning convolutional neural network
CN115587946A (en) Remote sensing image defogging method based on multi-scale network
CN105976330B (en) A kind of embedded greasy weather real time video image stabilization
CN110349117B (en) Infrared image and visible light image fusion method and device and storage medium
Bouali et al. Adaptive reduction of striping for improved sea surface temperature imagery from Suomi National Polar-Orbiting Partnership (S-NPP) visible infrared imaging radiometer suite (VIIRS)
Anandhi et al. An algorithm for multi-sensor image fusion using maximum a posteriori and nonsubsampled contourlet transform
CN114119444A (en) Multi-source remote sensing image fusion method based on deep neural network
Kang et al. Fog model-based hyperspectral image defogging
JP6943251B2 (en) Image processing equipment, image processing methods and computer-readable recording media
Oehmcke et al. Creating cloud-free satellite imagery from image time series with deep learning
Wang et al. Multiscale single image dehazing based on adaptive wavelet fusion
CN117115669B (en) Object-level ground object sample self-adaptive generation method and system with double-condition quality constraint
Hu et al. GAN-based SAR and optical image translation for wildfire impact assessment using multi-source remote sensing data
CN113869262A (en) Prediction method and device of land information of target area based on Unet model
KR102095444B1 (en) Method and Apparatus for Removing gain Linearity Noise Based on Deep Learning
Roy et al. WLMS-based Transmission Refined self-adjusted no reference weather independent image visibility improvement
Li et al. ConvFormerSR: Fusing Transformers and Convolutional Neural Networks for Cross-sensor Remote Sensing Imagery Super-resolution
CN116385281A (en) Remote sensing image denoising method based on real noise model and generated countermeasure network
Shahtahmassebi et al. Evaluation on the two filling functions for the recovery of forest information in mountainous shadows on Landsat ETM+ Image
Rokni Investigating the impact of Pan Sharpening on the accuracy of land cover mapping in Landsat OLI imagery
Kumar et al. An empirical review on image dehazing techniques for change detection of land cover
CN114724023A (en) Twin network-based water body change detection method
Lu et al. Application of improved CNN in SAR image noise reduction
Yu et al. Haze removal using deep convolutional neural network for Korea Multi-Purpose Satellite-3A (KOMPSAT-3A) multispectral remote sensing imagery

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination