CN118333861B - Remote sensing image reconstruction method, system, device and medium - Google Patents

Remote sensing image reconstruction method, system, device and medium Download PDF

Info

Publication number
CN118333861B
Authority
CN
China
Prior art keywords
remote sensing
image data
image
reconstructed image
reconstructed
Prior art date
Legal status
Active
Application number
CN202410756700.7A
Other languages
Chinese (zh)
Other versions
CN118333861A (en)
Inventor
宋永超
孙丽俊
王璇
刘兆伟
李湘南
毕季平
王莹洁
徐金东
阎维青
Current Assignee
Yantai University
Original Assignee
Yantai University
Priority date
Filing date
Publication date
Application filed by Yantai University filed Critical Yantai University
Priority to CN202410756700.7A
Publication of CN118333861A
Application granted
Publication of CN118333861B

Landscapes

  • Image Analysis (AREA)

Abstract

The invention belongs to the field of image processing technology and specifically relates to a remote sensing image reconstruction method, system, device, and medium. First, the authenticity and complexity of the data are enhanced by simulating the degradation of actual remote sensing images. Then, the remote sensing images are reconstructed twice and the two reconstructions are adaptively fused based on contrast to integrate image information of different resolutions, thereby improving the accuracy, clarity, and overall quality of image reconstruction.

Description

Remote sensing image reconstruction method, system, device and medium
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a remote sensing image reconstruction method, a remote sensing image reconstruction system, a remote sensing image reconstruction device and a remote sensing image reconstruction medium.
Background
Super-resolution reconstruction of remote sensing images is a technique that uses computer vision algorithms to improve the spatial resolution of remote sensing images. In this field, generative adversarial networks (GANs) are often used to improve spatial resolution and thereby achieve super-resolution reconstruction. Super-resolution reconstruction can produce higher-quality, richer image data, which has important application value in geographic information analysis, resource management, environmental monitoring, and other fields.
However, existing GAN-based super-resolution reconstruction methods for remote sensing images still face challenges. First, training a GAN requires a large amount of high-resolution image data, which is often difficult and expensive to acquire. Second, the generated images may exhibit artifacts, blurring, and distortion, mainly due to insufficient training or unreasonable model design. It is therefore necessary to develop a more accurate and efficient super-resolution reconstruction method to improve the resolution and quality of remote sensing images.
Disclosure of Invention
To solve the problems set forth in the background art, the invention provides a remote sensing image reconstruction method, system, device, and medium.
The technical scheme of the invention is as follows:
the invention provides a remote sensing image reconstruction method, which comprises the following steps:
S1: acquiring remote sensing image data, respectively applying different degradation treatments, and randomly arranging the results to obtain low-resolution image data, and constructing a low-resolution image dataset based on the low-resolution image data;
S2: feeding the remote sensing image data and the low-resolution image data into a generator network model to obtain high-resolution image data, comparing the high-resolution image data with the remote sensing image data, calculating a first loss value according to a first loss function, and training based on the first loss value to obtain a first reconstructed image and a trained generator network model;
S3: feeding the remote sensing image data into a discriminator network model, calculating a second loss value according to a second loss function, and training based on the second loss value to obtain a preliminarily trained discriminator network model;
feeding the low-resolution image data into the trained generator network model to obtain preliminary reconstructed image data, feeding the preliminary reconstructed image data and the remote sensing image data into the preliminarily trained discriminator network model, calculating a third loss value according to a third loss function, and iteratively training the trained generator network model based on the third loss value until the third loss value reaches a threshold, ending the iteration to obtain a second reconstructed image and a generative adversarial network model;
S4: based on the first reconstructed image and the second reconstructed image, calculating, for each pixel, the standard deviation of the pixel values within a preset contrast area to obtain the contrast of that area in the first and second reconstructed images, calculating the contrast means of the first and second reconstructed images from these contrasts, obtaining the fusion weight of each pixel from the proportion of the contrast mean of the first (or second) reconstructed image in the sum of the two contrast means, and performing weighted fusion of the first and second reconstructed images to obtain the reconstructed remote sensing image.
In S2, the first loss function is a weighted sum of the L1 loss function and the Charbonnier loss function. Specifically, the first loss function is obtained according to the formula:
L_first = lambda_1 * L_L1 + lambda_2 * L_Ch,
where L_first is the first loss function, lambda_1 is the weight of the L1 loss function, L_L1 is the L1 loss function, lambda_2 is the weight of the Charbonnier loss function, and L_Ch is the Charbonnier loss function.
In step S2, the training is performed as follows:
the high-resolution image data are compared with the remote sensing image data, a first loss value is calculated according to the first loss function, the weight parameters of the generator network model are updated through a back-propagation algorithm based on the first loss value to obtain new high-resolution image data, the new high-resolution image data are compared with the remote sensing image data, and the first loss value is recalculated according to the first loss function, until the first loss value satisfies a preset condition.
In S4, the fusion weight of each pixel is obtained as follows.
According to the formula: w1 = mean(C1) / (mean(C1) + mean(C2)), the fusion weight of each pixel in the first reconstructed image is obtained,
where w1 denotes the fusion weight of each pixel in the first reconstructed image, C1 is the contrast value of the first reconstructed image, C2 is the contrast value of the second reconstructed image, mean(C1) is the contrast mean of the first reconstructed image, and mean(C2) is the contrast mean of the second reconstructed image;
according to the formula: w2 = 1 - w1, the fusion weight of each pixel in the second reconstructed image is obtained,
where w2 denotes the fusion weight of each pixel in the second reconstructed image.
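Read this way (per-pixel contrast as the local standard deviation, and a global weight taken as the proportion of each image's contrast mean), the fusion step can be sketched in numpy. The function names and the 3x3 window size are illustrative assumptions, not values from the patent:

```python
import numpy as np

def local_contrast(img, k=3):
    """Per-pixel contrast: standard deviation of pixel values in a k x k
    preset contrast area centered on each pixel (reflect-padded borders)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].std()
    return out

def fuse(img1, img2, k=3):
    """Weighted fusion with w1 = mean(C1) / (mean(C1) + mean(C2)), w2 = 1 - w1."""
    c1 = local_contrast(img1, k)
    c2 = local_contrast(img2, k)
    w1 = c1.mean() / (c1.mean() + c2.mean())
    return w1 * img1 + (1.0 - w1) * img2
```

Note that a perfectly flat second image has zero contrast everywhere, so the fused result then falls back entirely on the first image.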
In S3, the second loss function is a weighted sum of the L1 loss function, the VGG perceptual loss function, and the PatchGAN loss function. Specifically, the second loss function is obtained according to the formula:
L = lambda_1 * L_L1 + lambda_2 * L_VGG + lambda_3 * L_PatchGAN,
where L is the second loss function, lambda_1 is the weight of the L1 loss function, L_L1 is the L1 loss function, lambda_2 is the weight of the VGG perceptual loss function, L_VGG is the VGG perceptual loss function, lambda_3 is the weight of the PatchGAN loss function, and L_PatchGAN is the PatchGAN loss function.
In step S1, the different degradation treatments and the random arrangement that yield the low-resolution image data are performed as follows:
blurring, downsampling, and random noise addition are respectively applied to the remote sensing image data to obtain degraded remote sensing image data, and the degraded remote sensing image data are randomly arranged to obtain the low-resolution image data.
In S4, preprocessing of the first and second reconstructed images is also included:
grayscale conversion is applied to the first and second reconstructed images, followed by Gaussian blurring.
The invention also provides a remote sensing image reconstruction system, comprising:
a low-resolution image dataset construction module, configured to acquire remote sensing image data, respectively apply different degradation treatments, randomly arrange the results to obtain low-resolution image data, and construct a low-resolution image dataset based on the low-resolution image data;
a generator network model training module, configured to feed the remote sensing image data and the low-resolution image data into a generator network model to obtain high-resolution image data, compare the high-resolution image data with the remote sensing image data, calculate a first loss value according to a first loss function, and train based on the first loss value to obtain a first reconstructed image and a trained generator network model;
a generative adversarial network model training module, configured to feed the remote sensing image data into a discriminator network model, calculate a second loss value according to a second loss function, and train based on the second loss value to obtain a preliminarily trained discriminator network model;
and further configured to feed the low-resolution image data into the trained generator network model to obtain preliminary reconstructed image data, feed the preliminary reconstructed image data and the remote sensing image data into the preliminarily trained discriminator network model, calculate a third loss value according to a third loss function, and iteratively train the trained generator network model based on the third loss value until the third loss value reaches a threshold, ending the iteration to obtain a second reconstructed image and a generative adversarial network model;
and a remote sensing image reconstruction module, configured to calculate, for each pixel of the first and second reconstructed images, the standard deviation of the pixel values within a preset contrast area to obtain the contrast of that area in the first and second reconstructed images, calculate the contrast means of the first and second reconstructed images from these contrasts, obtain the fusion weight of each pixel from the proportion of the contrast mean of the first (or second) reconstructed image in the sum of the two contrast means, and perform weighted fusion of the first and second reconstructed images to obtain the reconstructed remote sensing image.
The invention also provides a remote sensing image reconstruction device comprising a processor and a memory, wherein the above remote sensing image reconstruction method is implemented when the processor executes a computer program stored in the memory.
The invention also provides a remote sensing image reconstruction medium for storing a computer program, wherein the computer program implements the above remote sensing image reconstruction method when executed by a processor.
The beneficial effects are as follows: the method first enhances the authenticity and complexity of the data by simulating the degradation of actual remote sensing images, then reconstructs the remote sensing image twice and adaptively fuses the two results according to contrast, integrating image information of different resolutions and thereby improving the accuracy, clarity, and quality of image reconstruction.
Randomly arranging the remote sensing image data after different degradation treatments generates low-resolution images with diverse degradation characteristics, increasing the randomness and realism of the degradation process, expanding the degradation space, and increasing the diversity of the degradation model, so that it better approximates the varied degradation conditions of the real world.
Using the remote sensing image data as a reference, the invention compares the high-resolution image data with the remote sensing image data and introduces a novel loss function, providing richer information for the reconstruction process, so that the details and features of the high-resolution image can be restored more accurately, improving the quality and accuracy of image reconstruction.
By dynamically adjusting the fusion weights based on the contrast information of the first and second reconstructed images, the invention balances detail preservation and fusion across different areas; adaptive fusion maintains the characteristics and detail information of the original image in the reconstructed remote sensing image, while the high similarity and realism of images generated by the trained generator network model improve the quality and authenticity of the reconstructed image.
Drawings
FIG. 1 is a flowchart of the remote sensing image reconstruction method according to an embodiment.
FIG. 2 shows reconstruction results for remote sensing images in the RSC11 dataset using different reconstruction methods, where (a) is the remote sensing image, (b) the image obtained by bicubic interpolation, (c) the image obtained by a generative adversarial network model, (d) the image obtained by an enhanced generative adversarial network model, (e) the image obtained by a best-buddy generative adversarial network model, (f) the image obtained by a blind super-resolution reconstruction model, and (g) the image obtained by the remote sensing image reconstruction method of the present application.
FIG. 3 shows reconstruction results for remote sensing images in the AID dataset using different reconstruction methods, where (a) is the remote sensing image, (b) the image obtained by bicubic interpolation, (c) the image obtained by a generative adversarial network model, (d) the image obtained by an enhanced generative adversarial network model, (e) the image obtained by a best-buddy generative adversarial network model, (f) the image obtained by a blind super-resolution reconstruction model, and (g) the image obtained by the remote sensing image reconstruction method of the present application.
FIG. 4 shows reconstruction results for a train station remote sensing image in the WHU-RS19 dataset using different reconstruction methods, where (a) is the remote sensing image, (b) a locally enlarged view of the remote sensing image, (c) a locally enlarged view of the image obtained by bicubic interpolation, (d) a locally enlarged view of the image obtained by a generative adversarial network model, (e) a locally enlarged view of the image obtained by an enhanced generative adversarial network model, (f) a locally enlarged view of the image obtained by a blind super-resolution reconstruction model, and (g) a locally enlarged view of the image obtained by the remote sensing image reconstruction method of the present application.
FIG. 5 shows reconstruction results for a residential area remote sensing image in the WHU-RS19 dataset using different reconstruction methods, where (a) is the remote sensing image, (b) a locally enlarged view of the remote sensing image, (c) a locally enlarged view of the image obtained by bicubic interpolation, (d) a locally enlarged view of the image obtained by a generative adversarial network model, (e) a locally enlarged view of the image obtained by an enhanced generative adversarial network model, (f) a locally enlarged view of the image obtained by a blind super-resolution reconstruction model, and (g) a locally enlarged view of the image obtained by the remote sensing image reconstruction method of the present application.
FIG. 6 shows reconstruction results for an airport remote sensing image in the WHU-RS19 dataset using different reconstruction methods, where (a) is the remote sensing image, (b) a locally enlarged view of the remote sensing image, (c) a locally enlarged view of the image obtained by bicubic interpolation, (d) a locally enlarged view of the image obtained by a generative adversarial network model, (e) a locally enlarged view of the image obtained by an enhanced generative adversarial network model, (f) a locally enlarged view of the image obtained by a blind super-resolution reconstruction model, and (g) a locally enlarged view of the image obtained by the remote sensing image reconstruction method of the present application.
FIG. 7 shows reconstruction results for a commercial area remote sensing image in the WHU-RS19 dataset using different reconstruction methods, where (a) is the remote sensing image, (b) a locally enlarged view of the remote sensing image, (c) a locally enlarged view of the image obtained by bicubic interpolation, (d) a locally enlarged view of the image obtained by a generative adversarial network model, (e) a locally enlarged view of the image obtained by an enhanced generative adversarial network model, (f) a locally enlarged view of the image obtained by a blind super-resolution reconstruction model, and (g) a locally enlarged view of the image obtained by the remote sensing image reconstruction method of the present application.
Detailed Description
The following examples are intended to illustrate, but not to limit, the invention.
The invention provides a remote sensing image reconstruction method, shown in FIG. 1, comprising the following steps:
S1: acquire remote sensing image data, apply different degradation treatments respectively, and randomly arrange the results to obtain low-resolution image data; construct a low-resolution image dataset based on the low-resolution image data.
S2: feed the remote sensing image data and the low-resolution image data into a generator network model to obtain high-resolution image data; compare the high-resolution image data with the remote sensing image data, calculate a first loss value according to a first loss function, and train based on the first loss value to obtain a first reconstructed image and a trained generator network model.
S3: feed the remote sensing image data into a discriminator network model, calculate a second loss value according to a second loss function, and train based on the second loss value to obtain a preliminarily trained discriminator network model.
Feed the low-resolution image data into the trained generator network model to obtain preliminary reconstructed image data; feed the preliminary reconstructed image data and the remote sensing image data into the preliminarily trained discriminator network model, calculate a third loss value according to a third loss function, and iteratively train the trained generator network model based on the third loss value until the third loss value reaches a threshold, ending the iteration to obtain a second reconstructed image and a generative adversarial network model.
S4: based on the first and second reconstructed images, calculate, for each pixel, the standard deviation of the pixel values within a preset contrast area to obtain the contrast of that area in the first and second reconstructed images; calculate the contrast means of the first and second reconstructed images from these contrasts; obtain the fusion weight of each pixel from the proportion of the contrast mean of the first (or second) reconstructed image in the sum of the two contrast means; and perform weighted fusion of the first and second reconstructed images to obtain the reconstructed remote sensing image.
The method enhances the authenticity and complexity of the data by simulating the degradation of actual remote sensing images, reconstructs the remote sensing image twice, and adaptively fuses the two results according to contrast to integrate image information of different resolutions, thereby improving the accuracy, clarity, and quality of image reconstruction.
S1: acquire remote sensing image data, apply different degradation treatments respectively, randomly arrange the results to obtain low-resolution image data, and construct a low-resolution image dataset based on the low-resolution image data.
To generate low-resolution images corresponding to actual scenes, preferably, in S1, the degradation treatments and random arrangement are performed as follows:
blurring, downsampling, and random noise addition are respectively applied to the remote sensing image data to obtain degraded remote sensing image data, and the degraded remote sensing image data are randomly arranged to obtain the low-resolution image data.
For example, the blurring processes include isotropic Gaussian blur, anisotropic Gaussian blur, and motion blur, which simulate the different blurring effects that occur in real-world images during acquisition, transmission, or display.
For Gaussian blur, a smoothing Gaussian kernel is used to blur the image, and the degree of blur can be controlled by varying the kernel size and standard deviation. Isotropic Gaussian blur applies the same degree of blur in all directions, while anisotropic Gaussian blur adjusts the degree of blur according to the gradient information of the image, better preserving its edge information.
For motion blur, the blurring produced while an image moves can be simulated by creating a blur kernel: the blur direction and length are determined first, then the blur kernel is computed from these parameters and applied to the input image.
For downsampling, the original image is resampled using interpolation to generate a new, scaled-down image. Bilinear and bicubic interpolation are common choices; they consider, respectively, the four nearest neighboring pixels and a larger neighborhood, so as to maintain the overall structure and texture details of the image.
To overcome the limitations of traditional degradation models in simulating the degradation of low-resolution images, low-resolution images with diverse degradation characteristics are generated by randomly arranging the degraded remote sensing image data. This increases the randomness and realism of the degradation process, expands the degradation space, and increases the diversity of the degradation model, bringing it closer to the varied degradation conditions of the real world.
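A minimal numpy sketch of one such degradation chain follows. The 5x5 kernel, noise level, 2x decimation, and the choice to shuffle only the blur and noise steps (keeping downsampling last) are illustrative assumptions, not parameters from the patent:

```python
import random
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Isotropic Gaussian blur kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def convolve2d(img, kernel):
    """Direct 2-D convolution with reflect padding (slow but dependency-free)."""
    ks = kernel.shape[0]
    pad = ks // 2
    padded = np.pad(img, pad, mode="reflect")
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = (padded[i:i + ks, j:j + ks] * kernel).sum()
    return out

def degrade(img, seed=0):
    """Blur and additive noise applied in a random order, then 2x decimation."""
    rng = np.random.default_rng(seed)
    blur = lambda x: convolve2d(x, gaussian_kernel(5, 1.0))
    noise = lambda x: np.clip(x + rng.normal(0.0, 0.01, x.shape), 0.0, 1.0)
    steps = [blur, noise]
    random.Random(seed).shuffle(steps)  # random arrangement of degradations
    out = img
    for step in steps:
        out = step(out)
    return out[::2, ::2]
```

Shuffling the step order per sample is one way to read "random arrangement"; applying it over a pool of degraded images is the other, and both widen the degradation space.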
S2: feed the remote sensing image data and the low-resolution image data into the generator network model to obtain high-resolution image data; compare the high-resolution image data with the remote sensing image data, calculate a first loss value according to the first loss function, and train based on the first loss value to obtain a first reconstructed image and a trained generator network model.
The basic architecture of the generator network model is based on Residual-in-Residual Dense Blocks (RRDB), which combine residual connections and dense connections, effectively improving the performance and convergence speed of the generator network model. Each RRDB consists of 3 residual dense blocks (RDB), and each RDB consists of 5 convolutional layers and 1 activation layer; the network uses 25 RRDBs in total. This allows the model to learn image features better and reduces information loss during deep propagation, while also helping to accelerate convergence.
The invention processes the low-resolution image with the generator network model and compares the generated high-resolution image data with the remote sensing image data, so as to restore the detail and quality of the high-resolution image. This comparison is typically accomplished by computing differences between the images, which can be quantified with various image quality metrics (e.g., peak signal-to-noise ratio, the structural similarity index). The whole process aims to make the generator network model gradually learn the mapping from the low-resolution image to the remote sensing image, minimizing the image difference.
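For reference, peak signal-to-noise ratio (one of the metrics named above) has a standard closed form; this helper is a textbook definition, not code from the patent:

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """PSNR in dB: 10 * log10(max_val^2 / MSE); infinite for identical images."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, an image offset everywhere by 0.1 from a reference in [0, 1] has MSE 0.01 and hence a PSNR of exactly 20 dB.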
Preferably, in S2, the training is performed as follows:
the high-resolution image data are compared with the remote sensing image data, a first loss value is calculated according to the first loss function, the weight parameters of the generator network model are updated through a back-propagation algorithm based on the first loss value to obtain new high-resolution image data, the new high-resolution image data are compared with the remote sensing image data, and the first loss value is recalculated according to the first loss function, until the first loss value satisfies a preset condition.
Using the remote sensing image data as a reference, the invention compares the high-resolution image data with the remote sensing image data and introduces a novel loss function, providing richer information for the reconstruction process, so that the details and features of the high-resolution image can be restored more accurately, improving the quality and accuracy of image reconstruction.
Preferably, in S2, the first loss function is a weighted sum of the L1 loss function and the Charbonnier loss function. Specifically, the first loss function is obtained according to the formula:
L_first = lambda_1 * L_L1 + lambda_2 * L_Ch,
where L_first is the first loss function, lambda_1 is the weight of the L1 loss function, L_L1 is the L1 loss function, lambda_2 is the weight of the Charbonnier loss function, and L_Ch is the Charbonnier loss function.
The L1 loss function formula is:
L_L1 = (1 / N) * sum | I_HR - I_RS |,
where L_L1 is the L1 loss function, I_RS is the remote sensing image data, I_HR is the high-resolution image data, N is the number of pixels, and | I_HR - I_RS | measures the difference between the high-resolution image data and the remote sensing image data at the pixel level.
The Charbonnier loss function formula is:
L_Ch = sqrt( (I_HR - I_RS)^2 + epsilon^2 ),
where L_Ch is the Charbonnier loss function, I_RS is the remote sensing image data, I_HR is the high-resolution image data, and epsilon is a small constant that avoids a zero denominator when computing the gradient of the square root.
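The two formulas above combine into the first loss as a weighted sum; a numpy sketch follows, where the mean reduction over pixels and the unit default weights are assumptions, not values from the patent:

```python
import numpy as np

def l1_loss(hr, ref):
    """Mean absolute difference at the pixel level."""
    return np.mean(np.abs(hr - ref))

def charbonnier_loss(hr, ref, eps=1e-3):
    """Smooth L1 variant: mean of sqrt(diff^2 + eps^2), well behaved at zero."""
    return np.mean(np.sqrt((hr - ref) ** 2 + eps ** 2))

def first_loss(hr, ref, lam1=1.0, lam2=1.0):
    """First loss: lam1 * L1 + lam2 * Charbonnier (weights are assumptions)."""
    return lam1 * l1_loss(hr, ref) + lam2 * charbonnier_loss(hr, ref)
```

The Charbonnier term behaves like L1 for large differences but stays differentiable at zero, which is why it is commonly paired with L1 in super-resolution training.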
S3: feed the remote sensing image data into a discriminator network model, calculate a second loss value according to the second loss function, and train based on the second loss value to obtain a preliminarily trained discriminator network model.
Feed the low-resolution image data into the trained generator network model to obtain preliminary reconstructed image data; feed the preliminary reconstructed image data and the remote sensing image data into the preliminarily trained discriminator network model, calculate a third loss value according to the third loss function, and iteratively train the trained generator network model based on the third loss value until the third loss value reaches a threshold, ending the iteration to obtain a second reconstructed image and a generative adversarial network model.
The basic architecture of the discriminator network model is a convolutional neural network (CNN) based on the VGG model, comprising multiple convolutional layers, pooling layers, and fully connected layers; features are extracted from the images, and the reconstructed image is distinguished from the remote sensing image on the basis of these features. The convolutional and pooling layers effectively capture local and global image features, while the fully connected layers perform the final classification. This structure gives the discriminator network model stronger perceptual capability and allows it to evaluate the quality of the generated high-resolution image more accurately.
Specifically, the trained generator network model is used as the initialization model for training the generative adversarial network model. First, the remote sensing image data are fed into the discriminator network model through forward propagation, and the loss value of the discriminator network model is calculated.
To improve the perceptual quality of the reconstructed image, the second loss function is a weighted sum of the L1 loss function, the VGG perceptual loss function, and the PatchGAN loss function. Specifically, the second loss function is obtained according to the formula:
L = lambda_1 * L_L1 + lambda_2 * L_VGG + lambda_3 * L_PatchGAN,
where L is the second loss function, lambda_1 is the weight of the L1 loss function, L_L1 is the L1 loss function, lambda_2 is the weight of the VGG perceptual loss function, L_VGG is the VGG perceptual loss function, lambda_3 is the weight of the PatchGAN loss function, and L_PatchGAN is the PatchGAN loss function.
The VGG perceptual loss function uses a pre-trained VGG network to extract features and compares the differences between the preliminary reconstructed image and the remote sensing image in this feature space.
The VGG perceptual loss function is given by the formula: L_VGG = Σ_j (1 / (C_j·H_j·W_j)) · ||φ_j(Î) − φ_j(I)||²,
where L_VGG denotes the VGG perceptual loss function, j denotes the j-th layer of the network, C_j·H_j·W_j denotes the size of the j-th layer feature map, φ denotes the loss network (the pre-trained VGG network), Î is the preliminary reconstructed image, and I is the remote sensing image.
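A minimal sketch of this per-layer normalized feature difference follows. A real implementation would take φ_j from a pre-trained VGG network; here the feature extractor is stubbed as repeated 2x2 average pooling purely so the computation is self-contained, which is an assumption for illustration.

```python
import numpy as np

# Sketch of L_VGG = sum_j ||phi_j(x) - phi_j(y)||^2 / (C_j * H_j * W_j),
# with phi_j stubbed as repeated 2x2 average pooling (stand-in for VGG features).
def avg_pool2(x):
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]
    return (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 4.0

def stub_features(x, n_layers=3):
    feats = []
    for _ in range(n_layers):
        x = avg_pool2(x)
        feats.append(x)
    return feats

def perceptual_loss(x, y, n_layers=3):
    loss = 0.0
    for fx, fy in zip(stub_features(x, n_layers), stub_features(y, n_layers)):
        # squared feature difference, normalized by the feature-map size
        loss += float(np.sum((fx - fy) ** 2)) / fx.size
    return loss

x = np.zeros((16, 16))
y = np.ones((16, 16))
print(perceptual_loss(x, y))  # each of the 3 layers contributes 1.0
```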
The PatchGAN loss function discriminates local regions (patches) of the preliminary reconstructed image and the remote sensing image to measure their similarity.
The PatchGAN loss function is given by the formula: L_P = −(1 / (N·H·W)) · Σ_{i,h,w} [ y·log D(x)_{i,h,w} + (1 − y)·log(1 − D(x)_{i,h,w}) ],
where L_P denotes the PatchGAN loss function, N denotes the number of patches, H and W denote the height and width of the patch respectively, D(x)_{i,h,w} denotes the output of the discrimination network model for the input image x at patch position (i, h, w), and y denotes the target label (typically 1 for a real image, 0 for a generated image).
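The per-patch binary cross-entropy averaged over the patch grid can be sketched as follows; the grid shape is an arbitrary example.

```python
import numpy as np

# Sketch of the PatchGAN term: the discriminator outputs a grid of patch
# scores D(x) in (0, 1); the loss is binary cross-entropy against the label y
# (1 for the real remote sensing image, 0 for the reconstruction), averaged
# over all patches.
def patchgan_loss(d_out, label, eps=1e-12):
    d = np.clip(d_out, eps, 1.0 - eps)  # guard log(0)
    bce = -(label * np.log(d) + (1.0 - label) * np.log(1.0 - d))
    return float(np.mean(bce))

scores = np.full((2, 3, 3), 0.5)        # 2 images, 3x3 patch grid each
print(patchgan_loss(scores, label=1.0))  # -log(0.5) ~= 0.6931
```

An undecided discriminator (every patch score 0.5) gives the maximal-uncertainty value ln 2 regardless of the label.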
By combining these three loss functions, the structure, perceptual quality, and detail accuracy of the image are all taken into account, improving the perceptual quality of the reconstructed image.
Secondly, the low-resolution image data are fed into the trained generation network model to obtain preliminary reconstructed image data; the preliminary reconstructed image data and the remote sensing image data are fed into the preliminarily trained discrimination network model, and a third loss value of the trained generation network model is calculated. The third loss function is an L1 loss function.
The parameters of the trained generation network model and the discrimination network model are then updated by gradient descent using the back propagation algorithm, so as to minimize the second and third loss values. In each iteration, the generator and the discriminator are trained alternately, competing with and learning from each other, so that the ability of the trained generation network model to produce realistic samples, and the ability of the discrimination network model to distinguish real from fake samples, both gradually improve. The iterations are repeated until the performance of the two models reaches the expected level or the number of training rounds reaches a set value.
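The alternating schedule and the third-loss stopping rule can be sketched as follows. This is only a schematic of the loop structure: the network updates are stubbed as simple callbacks (here a toy geometric decay), not an actual GAN training step.

```python
# Schematic of the alternating adversarial training loop: one discriminator
# step, one generator step, repeated until the generator's (third) loss
# reaches a threshold or the iteration budget is exhausted.
def train_adversarial(d_step, g_step, threshold=0.01, max_iters=1000):
    g_loss = None
    for it in range(1, max_iters + 1):
        d_step()           # update discriminator on real vs. generated data
        g_loss = g_step()  # update generator against the current discriminator
        if g_loss <= threshold:  # third-loss threshold ends the iteration
            return it, g_loss
    return max_iters, g_loss

# Toy stubs for illustration: each generator step shrinks the loss by 20%.
state = {"g_loss": 1.0}

def toy_g_step():
    state["g_loss"] *= 0.8
    return state["g_loss"]

iters, final = train_adversarial(d_step=lambda: None, g_step=toy_g_step)
print(iters, final)  # terminates once 0.8**iters drops to 0.01 or below
```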
S4: Based on the first reconstructed image and the second reconstructed image, for each pixel the standard deviation of pixel values within a preset contrast area is calculated to obtain the contrast of that area in each image; the contrast means of the first and second reconstructed images are then calculated from these contrasts; the fusion weight of each pixel is obtained from the proportion of the contrast mean of the first (or second) reconstructed image in the sum of the two contrast means; and the first and second reconstructed images are weighted-fused to obtain the reconstructed remote sensing image.
Note that the trained generation network model in S2 is oriented toward peak signal-to-noise ratio (PSNR): its objective is to minimize the mean square error between the high-resolution image data and the remote sensing image data, thereby increasing their similarity and the PSNR. This allows it to generate reconstructed images with high similarity, though these may lack some of the details and features of the real images. The generative adversarial network model, by contrast, is oriented toward perceptual quality: its aim is to learn the distribution of the real data and, guided by the judgment of the discrimination network model, generate realistic new data. To obtain richer and more diverse images, the invention fuses the two reconstructed images, i.e. the first reconstructed image and the second reconstructed image, so that the characteristics of both are integrated.
First, the contrast of each input image is calculated. Preferably, S4 further includes preprocessing the first reconstructed image and the second reconstructed image, as follows:
The first and second reconstructed images are each converted to grayscale and then Gaussian blurred, which reduces the influence of noise and smooths the images so that the contrast can be calculated more accurately.
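A minimal NumPy sketch of this preprocessing follows; the luminance weights and the Gaussian kernel size/sigma are common defaults assumed for illustration, not values specified by the patent.

```python
import numpy as np

# Sketch of the preprocessing step: luminance grayscaling followed by a
# separable Gaussian blur (assumed kernel radius 2, sigma 1.0).
def to_gray(rgb):
    # ITU-R BT.601 luminance weights
    return rgb @ np.array([0.299, 0.587, 0.114])

def gaussian_kernel1d(sigma=1.0, radius=2):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x * x / (2 * sigma * sigma))
    return k / k.sum()

def gaussian_blur(img, sigma=1.0, radius=2):
    k = gaussian_kernel1d(sigma, radius)
    pad = np.pad(img, radius, mode="edge")
    # horizontal then vertical pass of the separable kernel
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

rgb = np.random.default_rng(0).random((32, 32, 3))
gray = to_gray(rgb)
smooth = gaussian_blur(gray)
print(gray.shape, smooth.shape, smooth.var() < gray.var())
```

Blurring lowers the pixel-value variance of the noisy grayscale image, which is exactly why it stabilizes the subsequent local-contrast estimate.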
The contrast of the first and second reconstructed images is calculated in the same way: for each pixel, a contrast area is preset, such as a fixed-size window, and the standard deviation of the pixel values within this area is calculated. The larger the standard deviation, the greater the variation of pixel values around the pixel, and thus the higher the contrast.
Secondly, the contrast means of the first and second reconstructed images are calculated from the contrasts, and the fusion weight of each pixel is obtained from the proportion of the contrast mean of the first (or second) reconstructed image in the sum of the two contrast means, dynamically adjusting the fusion to preserve detail and balance the different areas, as follows:
According to the formula: W₁ = μ₁ / (μ₁ + μ₂), the fusion weight of each pixel in the first reconstructed image is obtained,
where W₁ denotes the fusion weight of each pixel in the first reconstructed image, C₁ is the contrast value of the first reconstructed image, C₂ is the contrast value of the second reconstructed image, μ₁ is the contrast mean of the first reconstructed image (the mean of C₁ over all pixels), and μ₂ is the contrast mean of the second reconstructed image (the mean of C₂ over all pixels);
According to the formula: W₂ = μ₂ / (μ₁ + μ₂), the fusion weight of each pixel in the second reconstructed image is obtained,
where W₂ denotes the fusion weight of each pixel in the second reconstructed image.
Generally, areas with higher contrast are given higher weight so that more detail information is retained, while the weight of low-contrast areas is reduced to avoid excessive fusion.
Finally, the two images are weighted-fused pixel by pixel according to the fusion weights. In this way, details in high-contrast regions are preserved while low-contrast regions are blended in a balanced manner, yielding a comprehensive image.
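The whole contrast-guided fusion step can be sketched as follows. The window radius and the exact weight formula (contrast means normalized to sum to one) are reconstructions from the text above, stated here as assumptions.

```python
import numpy as np

# Sketch of contrast-guided fusion: per-pixel local standard deviation in a
# (2r+1)x(2r+1) window gives a contrast map; the two contrast means set the
# fusion weights w1 = mu1/(mu1+mu2), w2 = mu2/(mu1+mu2).
def local_std(img, r=1):
    pad = np.pad(img, r, mode="edge")
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = pad[i:i + 2 * r + 1, j:j + 2 * r + 1].std()
    return out

def fuse(img1, img2, r=1):
    mu1 = local_std(img1, r).mean()  # contrast mean of image 1
    mu2 = local_std(img2, r).mean()  # contrast mean of image 2
    w1 = mu1 / (mu1 + mu2)
    w2 = 1.0 - w1
    return w1 * img1 + w2 * img2, w1, w2

rng = np.random.default_rng(1)
a = rng.random((16, 16))      # high-contrast (noisy) image
b = np.full((16, 16), 0.5)    # flat, low-contrast image
fused, w1, w2 = fuse(a, b)
print(round(w1 + w2, 6), w1 > w2)
```

As expected, the high-contrast image dominates the blend: the flat image has zero local standard deviation everywhere, so its weight collapses toward zero.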
By dynamically adjusting the fusion weights based on the contrast information of the first and second reconstructed images, the invention preserves detail and balances the fusion of different areas. The adaptive fusion maintains the characteristics and details of the original image in the reconstructed remote sensing image while exploiting the high similarity and realism of the images generated by the trained generation network model, thereby improving the quality and authenticity of the reconstructed image.
Experimental results
The remote sensing image reconstruction method of the present application was used to reconstruct remote sensing images from the RSC11 and AID datasets, as shown in figs. 2 and 3, respectively.
In figs. 2 and 3, (a) is the remote sensing image, (b) is the image obtained by bicubic interpolation, (c) is the image obtained by the generative adversarial network model, (d) is the image obtained by the enhanced generative adversarial network model, (e) is the image obtained by the best-buddy generative adversarial network model, (f) is the image obtained by the blind super-resolution reconstruction model, and (g) is the image obtained by the remote sensing image reconstruction method of the present application.
In addition, the remote sensing image reconstruction method was used to reconstruct remote sensing images of train stations, residential areas, airports, and commercial areas from the WHU-RS19 dataset, as shown in figs. 4, 5, 6, and 7.
In figs. 4, 5, 6, and 7, (a) is the remote sensing image, (b) is a locally enlarged view of the remote sensing image, (c) is a locally enlarged view of the image obtained by bicubic interpolation, (d) is a locally enlarged view of the image obtained by the generative adversarial network model, (e) is a locally enlarged view of the image obtained by the enhanced generative adversarial network model, (f) is a locally enlarged view of the image obtained by blind super-resolution reconstruction, and (g) is a locally enlarged view of the image obtained by the remote sensing image reconstruction method of the present application.
Among these, the image obtained by bicubic interpolation simply enlarges the low-resolution image to high resolution without adding additional details or textures;
the image obtained by the generative adversarial network model comes from a classical image super-resolution method that learns the mapping from low resolution to high resolution through a generative adversarial network, which may cause some image distortion or blurring;
the image obtained by the enhanced generative adversarial network model introduces reconstruction residual blocks and a perceptual loss function to improve the quality of the generated image, better preserving image details and textures;
the image obtained by the best-buddy generative adversarial network model relaxes the immutable one-to-one supervision constraint, allowing the estimated patch to dynamically seek the optimal supervision during training and thereby produce more reasonable details;
the image obtained by the blind super-resolution reconstruction model uses a more complex and practical image degradation model that aims to better simulate real-world image degradation.
As can be seen from figs. 2, 3, 4, 5, 6, and 7, compared with the other methods, the remote sensing image reconstruction method of the present application more effectively simulates the degradation of real images and adaptively fuses information of different resolutions during reconstruction, thereby improving the clarity and accuracy of the reconstructed image.

Claims (10)

1. A remote sensing image reconstruction method, characterized by comprising the following steps:
S1: acquiring remote sensing image data, applying different degradation treatments respectively, then randomly arranging the results to obtain low-resolution image data, and constructing a low-resolution image dataset based on the low-resolution image data;
S2: feeding the remote sensing image data and the low-resolution image data into a generation network model to obtain high-resolution image data, comparing the high-resolution image data with the remote sensing image data, calculating a first loss value according to a first loss function, and training based on the first loss value to obtain a first reconstructed image and a trained generation network model;
S3: feeding the remote sensing image data into a discrimination network model, calculating a second loss value according to a second loss function, and training based on the second loss value to obtain a preliminarily trained discrimination network model;
feeding the low-resolution image data into the trained generation network model to obtain preliminary reconstructed image data, feeding the preliminary reconstructed image data and the remote sensing image data into the preliminarily trained discrimination network model, calculating a third loss value according to a third loss function, iteratively training the trained generation network model based on the third loss value until the third loss value reaches a threshold, ending the iteration, and obtaining a second reconstructed image and a generative adversarial network model;
S4: based on the first reconstructed image and the second reconstructed image, calculating for each pixel the standard deviation of pixel values within a preset contrast area to obtain the contrast of the preset contrast area of the first and second reconstructed images, calculating the contrast means of the first and second reconstructed images based on the contrasts, obtaining the fusion weight of each pixel from the proportion of the contrast mean of the first or second reconstructed image in the sum of the two contrast means, and weighted-fusing the first and second reconstructed images to obtain the reconstructed remote sensing image.
2. The method for reconstructing a remote sensing image according to claim 1, wherein,
In S2, the first loss function is a weighted sum of the L1 loss function and the Charbonnier loss function; specifically,
according to the formula: L_first = λ_L1·L_L1 + λ_C·L_C, a first loss function is obtained,
where L_first is the first loss function, λ_L1 is the weight of the L1 loss function, L_L1 is the L1 loss function, λ_C is the weight of the Charbonnier loss function, and L_C is the Charbonnier loss function.
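This weighted L1/Charbonnier combination can be sketched as follows. The Charbonnier form sqrt(diff² + ε²) is the standard formulation; the epsilon and weight values are illustrative assumptions, as the claim does not specify them.

```python
import numpy as np

# Sketch of the first loss: L_first = lam_l1 * L_L1 + lam_ch * L_Charbonnier,
# with the usual Charbonnier term sqrt((pred - target)^2 + eps^2).
def l1_loss(pred, target):
    return float(np.mean(np.abs(pred - target)))

def charbonnier_loss(pred, target, eps=1e-3):
    return float(np.mean(np.sqrt((pred - target) ** 2 + eps ** 2)))

def first_loss(pred, target, lam_l1=0.5, lam_ch=0.5, eps=1e-3):
    return lam_l1 * l1_loss(pred, target) + lam_ch * charbonnier_loss(pred, target, eps)

pred = np.zeros((8, 8))
target = np.ones((8, 8))
print(first_loss(pred, target))  # ~1.0 (Charbonnier ~= L1 for large errors)
```

For errors much larger than epsilon the Charbonnier term behaves like L1; near zero it is smooth, which is why it is often preferred for stable super-resolution training.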
3. The method for reconstructing a remote sensing image according to claim 1, wherein,
In S2, the training is performed specifically as follows:
comparing the high-resolution image data with the remote sensing image data, calculating a first loss value according to the first loss function, updating the weight parameters of the generation network model through the back propagation algorithm based on the first loss value to obtain new high-resolution image data, and again comparing the new high-resolution image data with the remote sensing image data and calculating the first loss value according to the first loss function, until the first loss value meets a preset condition.
4. The method for reconstructing a remote sensing image according to claim 1, wherein,
In S4, the fusion weight of each pixel is obtained specifically as follows:
according to the formula: W₁ = μ₁ / (μ₁ + μ₂), the fusion weight of each pixel in the first reconstructed image is obtained,
where W₁ denotes the fusion weight of each pixel in the first reconstructed image, C₁ is the contrast value of the first reconstructed image, C₂ is the contrast value of the second reconstructed image, μ₁ is the contrast mean of the first reconstructed image (the mean of C₁ over all pixels), and μ₂ is the contrast mean of the second reconstructed image (the mean of C₂ over all pixels);
according to the formula: W₂ = μ₂ / (μ₁ + μ₂), the fusion weight of each pixel in the second reconstructed image is obtained,
where W₂ denotes the fusion weight of each pixel in the second reconstructed image.
5. The method for reconstructing a remote sensing image according to claim 1, wherein,
In S3, the second loss function is a weighted sum of the L1 loss function, the VGG perceptual loss function, and the PatchGAN loss function; specifically,
according to the formula: L = λ_L1·L_L1 + λ_VGG·L_VGG + λ_P·L_P, a second loss function is obtained,
where L is the second loss function, λ_L1 is the weight of the L1 loss function, L_L1 is the L1 loss function, λ_VGG is the weight of the VGG perceptual loss function, L_VGG is the VGG perceptual loss function, λ_P is the weight of the PatchGAN loss function, and L_P is the PatchGAN loss function.
6. The method for reconstructing a remote sensing image according to claim 1, wherein,
In S1, the different degradation treatments and random arrangement to obtain the low-resolution image data are performed specifically as follows:
blurring, downsampling, and adding random noise to the remote sensing image data to obtain degraded remote sensing image data, and randomly arranging the degraded remote sensing image data to obtain the low-resolution image data.
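A minimal sketch of this degradation pipeline follows. The blur kernel, scale factor, and noise level are illustrative assumptions; the claim only specifies the three operations and the random arrangement.

```python
import numpy as np

# Sketch of building the low-resolution dataset: blur, downsample, and add
# Gaussian noise to each remote sensing image, then randomly arrange
# (shuffle) the degraded results.
def degrade(img, scale=2, sigma=0.05, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    # crude 5-point average as a stand-in blur kernel (assumption)
    blurred = (img
               + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
               + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1)) / 5.0
    low = blurred[::scale, ::scale]                  # downsample by striding
    noisy = low + rng.normal(0.0, sigma, low.shape)  # additive Gaussian noise
    return np.clip(noisy, 0.0, 1.0)

def build_lr_dataset(images, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    lr = [degrade(im, rng=rng) for im in images]
    order = rng.permutation(len(lr))  # random arrangement of the degraded data
    return [lr[i] for i in order]

rng = np.random.default_rng(2)
dataset = build_lr_dataset([np.ones((8, 8)) * k / 4 for k in range(4)], rng=rng)
print(len(dataset), dataset[0].shape)
```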
7. The method for reconstructing a remote sensing image according to claim 1, wherein,
in S4, the method further comprises preprocessing the first reconstructed image and the second reconstructed image, including:
converting the first reconstructed image and the second reconstructed image to grayscale respectively, and then applying Gaussian blur.
8. A remote sensing image reconstruction system, comprising:
A low-resolution image dataset construction module: configured to acquire remote sensing image data, apply different degradation treatments respectively, randomly arrange the results to obtain low-resolution image data, and construct a low-resolution image dataset based on the low-resolution image data;
A generation network model training module: configured to feed the remote sensing image data and the low-resolution image data into a generation network model to obtain high-resolution image data, compare the high-resolution image data with the remote sensing image data, calculate a first loss value according to a first loss function, and train based on the first loss value to obtain a first reconstructed image and a trained generation network model;
A generative adversarial network model training module: configured to feed the remote sensing image data into a discrimination network model, calculate a second loss value according to a second loss function, and train based on the second loss value to obtain a preliminarily trained discrimination network model;
and to feed the low-resolution image data into the trained generation network model to obtain preliminary reconstructed image data, feed the preliminary reconstructed image data and the remote sensing image data into the preliminarily trained discrimination network model, calculate a third loss value according to a third loss function, iteratively train the trained generation network model based on the third loss value until the third loss value reaches a threshold, end the iteration, and obtain a second reconstructed image and a generative adversarial network model;
A remote sensing image reconstruction module: configured to, based on the first reconstructed image and the second reconstructed image, calculate for each pixel the standard deviation of pixel values within a preset contrast area to obtain the contrast of the preset contrast area of the first and second reconstructed images, calculate the contrast means of the first and second reconstructed images based on the contrasts, obtain the fusion weight of each pixel from the proportion of the contrast mean of the first or second reconstructed image in the sum of the two contrast means, and weighted-fuse the first and second reconstructed images to obtain the reconstructed remote sensing image.
9. A remote sensing image reconstruction device, comprising a processor and a memory, wherein the processor, when executing a computer program stored in the memory, implements the remote sensing image reconstruction method according to any one of claims 1-7.
10. A remote sensing image reconstruction medium for storing a computer program, wherein the computer program when executed by a processor implements the remote sensing image reconstruction method of any one of claims 1-7.
CN202410756700.7A 2024-06-13 2024-06-13 Remote sensing image reconstruction method, system, device and medium Active CN118333861B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410756700.7A CN118333861B (en) 2024-06-13 2024-06-13 Remote sensing image reconstruction method, system, device and medium


Publications (2)

Publication Number Publication Date
CN118333861A CN118333861A (en) 2024-07-12
CN118333861B true CN118333861B (en) 2024-08-20

Family

ID=91766633

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410756700.7A Active CN118333861B (en) 2024-06-13 2024-06-13 Remote sensing image reconstruction method, system, device and medium

Country Status (1)

Country Link
CN (1) CN118333861B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110992262A (en) * 2019-11-26 2020-04-10 南阳理工学院 Remote sensing image super-resolution reconstruction method based on generation countermeasure network

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113538246B (en) * 2021-08-10 2023-04-07 西安电子科技大学 Remote sensing image super-resolution reconstruction method based on unsupervised multi-stage fusion network
CN115578263B (en) * 2022-11-16 2023-03-10 之江实验室 CT super-resolution reconstruction method, system and device based on generation network




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant