CN115601281A - Remote sensing image space-time fusion method and system based on deep learning and electronic equipment - Google Patents

Remote sensing image space-time fusion method and system based on deep learning and electronic equipment

Info

Publication number
CN115601281A
CN115601281A (application CN202211374179.8A)
Authority
CN
China
Prior art keywords
image
data
remote sensing
time
resolution
Prior art date
Legal status
Pending
Application number
CN202211374179.8A
Other languages
Chinese (zh)
Inventor
陈圣波
崔亮
Current Assignee
Jilin University
Original Assignee
Jilin University
Priority date
Filing date
Publication date
Application filed by Jilin University
Priority to CN202211374179.8A
Publication of CN115601281A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 - Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10032 - Satellite or aerial image; Remote sensing
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G06T2207/20112 - Image segmentation details
    • G06T2207/20132 - Image cropping
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging


Abstract

The invention relates to a remote sensing image space-time fusion method and system based on deep learning, and to electronic equipment, and belongs to the technical field of remote sensing image processing. High-resolution and low-resolution remote sensing images are used as input data of a space-time fusion network model obtained by training a deep learning network with time-phase remote sensing image historical data, so the time resolution of the high-resolution remote sensing image can be improved while the accuracy of spectral information and the consistency of spatial details are guaranteed.

Description

Remote sensing image space-time fusion method and system based on deep learning and electronic equipment
Technical Field
The invention relates to the technical field of remote sensing image processing, in particular to a remote sensing image space-time fusion method and system based on deep learning and electronic equipment.
Background
Due to current hardware conditions and budget constraints, it is difficult to acquire satellite imagery with both high spatial resolution and high temporal resolution. Landsat imagery, for example, has high spatial resolution, but its 16-day revisit period severely limits its application to vegetation physiological processes and phenological monitoring; in cloudy areas the problem is even more pronounced. The Moderate Resolution Imaging Spectroradiometer (MODIS) can acquire images of the same region every day or every half day, but its spatial resolution of 250 m to 1000 m makes it difficult to apply to complex landscapes with large spatial heterogeneity. Spatio-temporal fusion techniques were proposed to solve this problem.
At present, space-time fusion technology is widely used in fields such as vegetation monitoring and disaster prediction. Existing spatio-temporal fusion methods can be divided into five categories: decomposition-based methods, weight-function-based methods, Bayesian methods, learning-based methods, and hybrid methods.
The first method proposed was a decomposition-based spatio-temporal fusion algorithm, followed by a number of weight-function-based methods, the most representative of which is the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM). Bayesian methods treat spatio-temporal fusion as a maximum a posteriori probability problem; a representative method is Bayesian Maximum Entropy (BME). Hybrid methods integrate the main advantages of the other approaches, for example Flexible Spatiotemporal Data Fusion (FSDAF).
In recent years, deep learning models such as neural networks, thanks to their powerful automatic feature representation learning capabilities, can learn spatial proximity and temporal correlation from raw data automatically and efficiently, and have achieved remarkable results in spatio-temporal fusion tasks. Although many studies have demonstrated the superiority of convolutional neural networks, applying deep learning to spatio-temporal fusion still faces several problems: fusion accuracy is unsatisfactory in regions with high spatial heterogeneity, the requirements on the data set are demanding, and it is difficult to maintain temporal resolution while ensuring the accuracy of spectral information.
Disclosure of Invention
The invention aims to provide a remote sensing image space-time fusion method, a remote sensing image space-time fusion system and electronic equipment based on deep learning, which can improve the time resolution of a high-resolution remote sensing image under the condition of ensuring the accuracy of spectral information and the consistency of spatial details.
In order to achieve the purpose, the invention provides the following scheme:
a remote sensing image space-time fusion method based on deep learning comprises the following steps:
acquiring remote sensing image data of a region to be predicted; the remote sensing image data comprises high-resolution remote sensing data and low-resolution remote sensing data;
preprocessing the remote sensing image data;
inputting the preprocessed remote sensing image data into a space-time fusion network model to obtain a fusion image; the space-time fusion network model is obtained by training a deep learning network by adopting time-phase remote sensing image historical data.
Preferably, the spatio-temporal fusion network model includes a plurality of branches;
each branch comprises four convolution layers which are connected in sequence; a ReLU activation function layer is arranged after each of the first three convolution layers;
residual learning is introduced, and input and output of each branch are integrated and then weighted to obtain a fused image.
Preferably, the sizes of the four sequentially connected convolutional layers are respectively: 32 × 9 × 9, 32 × 5 × 5, 32 × 5 × 5, and 4 × 5 × 5, where 32 and 4 are the numbers of channels of the convolution kernels and 9 and 5 are the spatial sizes of the convolution kernels.
Preferably, the preprocessing the remote sensing image data specifically includes:
processing the high-resolution remote sensing data to obtain first processed data; the processing comprises the following steps: radiometric calibration, atmospheric correction, and geometric correction;
mosaicking and cropping the low-resolution remote sensing data to obtain first subdata;
resampling the first subdata to obtain second subdata; the resolution of the second subdata is the same as that of the first processed data;
extracting data with the same wave band as the first processed data from the second subdata to obtain second processed data; and taking the first processed data and the second processed data as the preprocessed remote sensing image data.
Preferably, the process of obtaining the space-time fusion network model by training the deep learning network with time-phase remote sensing image historical data specifically comprises:
acquiring remote sensing image historical data, and preprocessing the remote sensing image historical data;
acquiring a first time phase image, a second time phase image and a third time phase image based on the preprocessed remote sensing image historical data; each time phase image comprises a first low-resolution image, a second low-resolution image, a third low-resolution image, a first high-resolution image, a second high-resolution image and a third high-resolution image;
respectively cutting the first time phase image, the second time phase image and the third time phase image to obtain first time phase image area data, second time phase image area data and third time phase image area data;
taking a first low-resolution image, a second low-resolution image and a first high-resolution image in the first time phase image region data and a first low-resolution image, a second low-resolution image and a first high-resolution image in the second time phase image region data as input sample data, and taking a second high-resolution image in the second time phase image region data as output sample data to construct a training sample data set;
training a deep learning model by adopting the training sample data set to obtain an initial space-time fusion network model;
inputting a first low-resolution image, a second low-resolution image and a first high-resolution image in the first time-phase image area data and a first low-resolution image, a second low-resolution image and a first high-resolution image in the third time-phase image area data into the initial space-time fusion network model as input sample data;
when the difference value between the test image result output by the initial space-time fusion network model and the second high-resolution image in the third time-phase image region data meets the preset requirement, obtaining the space-time fusion network model;
and when the difference value between the test image result output by the initial space-time fusion network model and the second high-resolution image in the third time-phase image region data does not meet the preset requirement, adjusting the parameters of the initial space-time fusion network model and returning to execute the step of training the deep learning model by adopting the training sample data set to obtain the initial space-time fusion network model.
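The train-validate-adjust procedure described in the steps above can be outlined as follows. This is a hypothetical Python sketch: `train_fn`, `validate_fn`, `adjust_fn`, `threshold` and `max_rounds` are illustrative names, not terms from the patent.

```python
def fit_until_valid(train_fn, validate_fn, adjust_fn, threshold, max_rounds=10):
    """Train an initial model, test it against the third-time-phase reference,
    and retrain with adjusted parameters until the difference from the
    reference high-resolution image meets the preset requirement."""
    params = {}
    model = None
    for _ in range(max_rounds):
        model = train_fn(params)             # train on the t1/t2 sample set
        if validate_fn(model) <= threshold:  # compare against the t3 reference
            return model                     # requirement met: accept the model
        params = adjust_fn(params)           # otherwise adjust and retrain
    return model
```

In practice `validate_fn` would compute an image difference (e.g. RMSE) between the model's test output and the second high-resolution image of the third time phase.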
Preferably, in the process of training the deep learning network with time-phase remote sensing image historical data to obtain the space-time fusion network model, the number of training iterations is set to 100, the batch size is set to 16, and the learning rate is set to 0.02.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
according to the remote sensing image space-time fusion method based on deep learning, the high-resolution remote sensing image and the low-resolution remote sensing image are used as input data of the space-time fusion network model obtained by training a deep learning network with time-phase remote sensing image historical data, and the time resolution of the high-resolution remote sensing image can be improved while the accuracy of spectral information and the consistency of spatial details are guaranteed.
Corresponding to the provided remote sensing image space-time fusion method based on deep learning, the invention also provides the following implementation structure:
a remote sensing image space-time fusion system based on deep learning comprises:
the data acquisition module is used for acquiring remote sensing image data of the area to be predicted; the remote sensing image data comprises high-resolution remote sensing data and low-resolution remote sensing data;
the preprocessing module is used for preprocessing the remote sensing image data;
the image fusion module is used for inputting the preprocessed remote sensing image data into a space-time fusion network model to obtain a fusion image; the space-time fusion network model is obtained by training a deep learning network by adopting time-phase remote sensing image historical data.
An electronic device, comprising:
a memory for storing computer control instructions;
a processor connected with the memory for retrieving and executing the computer control instructions, so that the electronic equipment executes the provided remote sensing image space-time fusion method based on deep learning.
Because the implementation structures provided by the invention achieve the same technical effect as the remote sensing image space-time fusion method based on deep learning, their description is not repeated here.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
FIG. 1 is a flow chart of a remote sensing image space-time fusion method based on deep learning provided by the invention;
FIG. 2 is a schematic structural diagram of a spatio-temporal fusion network model according to an embodiment of the present invention;
FIG. 3 (a) is a schematic diagram of an actual image;
FIG. 3 (b) is a schematic diagram of the MODIS (Moderate Resolution Imaging Spectroradiometer) image;
FIG. 3 (c) is a schematic diagram of a predicted image;
FIG. 4 is a density scatter plot of the predicted versus original image reflectance in the blue band according to an embodiment of the present invention;
FIG. 5 is a density scatter plot of the predicted versus original image reflectance in the green band according to an embodiment of the present invention;
FIG. 6 is a density scatter plot of the predicted versus original image reflectance in the red band according to an embodiment of the present invention;
FIG. 7 is a density scatter plot of the predicted versus original image reflectance in the near-infrared band according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a remote sensing image space-time fusion method, a remote sensing image space-time fusion system and electronic equipment based on deep learning, which can improve the time resolution of a high-resolution remote sensing image under the condition of ensuring the accuracy of spectral information and the consistency of spatial details.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
As shown in FIG. 1, the remote sensing image space-time fusion method based on deep learning provided by the invention comprises the following steps:
step 100: and acquiring remote sensing image data of the area to be predicted. The remote sensing image data comprises high-resolution remote sensing data and low-resolution remote sensing data. For example, the WFV data of the first high-resolution grade is used as a high-resolution remote sensing data source, MOD09A1 data in an MODIS satellite image is used as a low-resolution remote sensing data supplement, and remote sensing data in the whole growth period of the corn are obtained in real time, wherein the WFV data of the first high-resolution grade 1 contains four wave bands including blue, green, red and near infrared wave bands, the wavelength range is 450-900 μm, and the spatial resolution is 16 m. The MOD09A1 product contains 7 bands including blue, green, red, near infrared, thermal infrared and two mid-infrared bands, with a wavelength range of 450-2155 μm and a spatial resolution of 500 meters. In the time-space fusion, the invention can select and use blue, green, red and near infrared bands.
Step 101: and preprocessing the remote sensing image data.
Step 102: and inputting the preprocessed remote sensing image data into a space-time fusion network model to obtain a fusion image. The space-time fusion network model is obtained by training a deep learning network by adopting time-phase remote sensing image historical data.
The process of obtaining the space-time fusion network model by adopting the time phase remote sensing image historical data training deep learning network specifically comprises the following steps:
in the training process, t is mainly used 1 、t 2 And t 3 The images of three time phases (i.e. the first time phase image, the second time phase image and the third time phase image) correspond to the three low-resolution images L 1 、L 2 、L 3 (i.e., the first low resolution picture, the second low resolution picture, the third low resolution picture) and the three-scene high resolution picture H 1 、H 2 、H 3 (i.e., the first high resolution picture, the second high resolution picture, and the third high resolution picture).
First, the MODIS data and the Gaofen-1 multispectral data (GF-1 WFV) are preprocessed. Radiometric calibration, atmospheric correction, geometric correction and the like are applied to the GF-1 multispectral data. After the MODIS data are downloaded, they are first mosaicked and cropped using the MODIS Reprojection Tool and ENVI 5.3 software, resampled to the same 16 m spatial resolution as GF-1 WFV, and finally the four bands (red, green, blue and near-infrared) matching the band ranges of GF-1 WFV are extracted. The data of these four bands are input into the deep learning network for training and testing, and the final fusion image is obtained through space-time fusion.
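As an illustration of the resample-and-match step, a toy NumPy sketch follows. It stands in for, and does not replicate, the MRT/ENVI workflow: real preprocessing uses georeferenced resampling, and the band indices below are hypothetical.

```python
import numpy as np

def upsample_nearest(low_res: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbour upsampling of a (bands, rows, cols) array by an
    integer factor, standing in for a proper georeferenced resampling."""
    return low_res.repeat(factor, axis=1).repeat(factor, axis=2)

def select_bands(image: np.ndarray, band_indices) -> np.ndarray:
    """Keep only the bands that match the high-resolution sensor."""
    return image[band_indices, :, :]

# A 7-band MODIS-like array at coarse resolution, brought to the finer grid
# and reduced to four bands; the index order [2, 3, 0, 1] is hypothetical.
modis = np.random.rand(7, 4, 4).astype(np.float32)
aligned = select_bands(upsample_nearest(modis, 2), [2, 3, 0, 1])
print(aligned.shape)  # (4, 8, 8)
```

In practice the resampling factor is set by the ratio of the two sensors' pixel sizes (500 m to 16 m here), which integer repetition cannot express exactly; a geospatial library would handle that.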
When building the data set, the images of each time phase are first randomly cropped into small regions of 128 × 128 × 4, where 128, 128 and 4 are the number of image rows h, the number of image columns w and the number of channels, respectively, and then input in batches. The training set consists of the t1 and t2 time-phase images: the low-resolution images L1 and L2 and the high-resolution image H1 serve as the network input, and the t2 high-resolution image H2 serves as the label (i.e. the network output). For the test set, the t2 time-phase image in the training set is replaced by the t3 time-phase image.
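The patch construction above can be sketched as follows. This is a minimal NumPy example under the assumption that all inputs and the label must share one crop window; the patent does not describe its actual data loader.

```python
import numpy as np

def random_crop(images, size=128, rng=None):
    """Crop the same random size-by-size window out of co-registered
    (bands, rows, cols) arrays so that L1, L2, H1 and the H2 label stay
    spatially aligned."""
    rng = rng or np.random.default_rng()
    _, rows, cols = images[0].shape
    top = int(rng.integers(0, rows - size + 1))
    left = int(rng.integers(0, cols - size + 1))
    return [img[:, top:top + size, left:left + size] for img in images]

# L1, L2 (coarse images already resampled to the fine grid), H1 and the
# H2 label are cropped with one shared window.
scene = [np.zeros((4, 512, 512), dtype=np.float32) for _ in range(4)]
patches = random_crop(scene, size=128)
print([p.shape for p in patches])  # [(4, 128, 128), (4, 128, 128), (4, 128, 128), (4, 128, 128)]
```

Repeating this crop many times per scene yields the batched 128 × 128 × 4 training samples described above.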
As shown in FIG. 2, the space-time fusion network model is a multi-stream data-input network comprising three branches. The input data of the three branches are the low-resolution image L1, the low-resolution image L2 and the high-resolution image H1, respectively. Each branch comprises four convolutional layers (Conv1 to Conv4), whose sizes are 32 × 9 × 9, 32 × 5 × 5, 32 × 5 × 5 and 4 × 5 × 5, respectively, where 32 and 4 are the numbers of channels of the convolution kernels and 9 and 5 are their spatial sizes. A ReLU activation function layer follows each of the first three convolution layers. Compared with functions such as sigmoid and tanh, the ReLU function makes network training faster, increases the nonlinearity of the network, prevents gradients from vanishing during training, and effectively reduces overfitting.
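One plausible PyTorch realization of this three-branch layout is sketched below. The padding choices, the per-branch residual connection, and the equal weighting of the three branch outputs are assumptions; the patent does not specify them.

```python
import torch
import torch.nn as nn

def make_branch(bands=4):
    # Four conv layers with ReLU after the first three; channel counts
    # 32/32/32/bands and kernel sizes 9/5/5/5 follow the description.
    return nn.Sequential(
        nn.Conv2d(bands, 32, 9, padding=4), nn.ReLU(),
        nn.Conv2d(32, 32, 5, padding=2), nn.ReLU(),
        nn.Conv2d(32, 32, 5, padding=2), nn.ReLU(),
        nn.Conv2d(32, bands, 5, padding=2),
    )

class FusionNet(nn.Module):
    """Three-branch sketch: inputs are L1, L2 (resampled coarse images) and H1."""
    def __init__(self, bands=4):
        super().__init__()
        self.branches = nn.ModuleList(make_branch(bands) for _ in range(3))

    def forward(self, l1, l2, h1):
        # Residual learning: each branch adds its own input back, then the
        # three streams are averaged (equal weights assumed here).
        outs = [b(x) + x for b, x in zip(self.branches, (l1, l2, h1))]
        return sum(outs) / 3

net = FusionNet()
x = torch.zeros(1, 4, 128, 128)
print(net(x, x, x).shape)  # torch.Size([1, 4, 128, 128])
```

The "same" padding keeps the 128 × 128 spatial size through every layer, so the fused output aligns pixel-for-pixel with the inputs.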
The purpose of the four convolution layers is to give the deep learning network a larger receptive field for better feature extraction. Meanwhile, to better handle problems such as overfitting, vanishing gradients and exploding gradients that may occur in deep networks, residual learning is introduced: the input image and the extracted feature map are integrated, and finally the outputs of the three branches are summed to obtain a high-resolution image for the prediction date. During training, the number of iterations is 100, the batch size is set to 16, and the learning rate is 0.02 with exponential decay; the loss function is the root mean square error, and its parameters are optimized by stochastic gradient descent.
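The training configuration above can be sketched in PyTorch as follows. The epoch count, batch size, initial learning rate, RMSE loss and SGD optimizer come from the description; the decay factor `gamma=0.96` and the stand-in one-layer model are assumptions.

```python
import torch
import torch.nn as nn

# Hyperparameters from the description: 100 epochs, batch size 16,
# initial learning rate 0.02 with exponential decay, RMSE loss, SGD.
EPOCHS, BATCH, LR = 100, 16, 0.02

model = nn.Conv2d(4, 4, 3, padding=1)  # stand-in for the fusion network
opt = torch.optim.SGD(model.parameters(), lr=LR)
sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.96)  # decay rate assumed

def rmse_loss(pred, target):
    return torch.sqrt(nn.functional.mse_loss(pred, target))

x = torch.rand(BATCH, 4, 128, 128)
y = torch.rand(BATCH, 4, 128, 128)
for epoch in range(2):  # two epochs shown; the patent trains for 100
    opt.zero_grad()
    loss = rmse_loss(model(x), y)
    loss.backward()
    opt.step()
    sched.step()
print(round(opt.param_groups[0]["lr"], 5))  # 0.01843
```

After each `sched.step()` the learning rate is multiplied by `gamma`, so after two epochs it is 0.02 × 0.96², matching the printed value.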
The remote sensing image space-time fusion method based on deep learning provided by the invention was tested in a research area. Table 1 gives the statistical evaluation of the root mean square error (RMSE) and correlation coefficient (CC) between each band of the prediction obtained with the method and the real image. As can be seen from FIG. 4 to FIG. 7, the method performs well on both statistical indices: the root mean square error is less than 0.003 and the correlation coefficient is greater than 0.8. FIG. 3 (a) to FIG. 3 (c) compare the predicted high-resolution image with the real image; from a visual point of view, the fusion result is close to the real image. The method can reconstruct spatial details within fields, predicts reflectance with high accuracy, and performs stably overall.
TABLE 1 statistical evaluation table for prediction of reflectivity of each band in a research area by the deep learning-based remote sensing image space-time fusion method provided by the invention
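The two evaluation statistics used above, RMSE and the correlation coefficient, can be computed per band as in this NumPy sketch (the sample reflectance arrays are illustrative, not the patent's data):

```python
import numpy as np

def rmse(pred, truth):
    """Root mean square error between predicted and reference reflectance."""
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

def correlation_coefficient(pred, truth):
    """Pearson correlation between the flattened band reflectances."""
    return float(np.corrcoef(pred.ravel(), truth.ravel())[0, 1])

truth = np.linspace(0.0, 0.4, 100)  # synthetic reflectance values
pred = truth + 0.001                # a nearly perfect prediction
print(round(rmse(pred, truth), 6))  # 0.001
print(correlation_coefficient(pred, truth) > 0.8)  # True
```

Applied band by band, these two functions reproduce the kind of statistics reported in Table 1 (RMSE below 0.003, CC above 0.8).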
Based on the above description, the present invention also has the following advantages over the prior art:
1. Compared with algorithms that need two pairs of reference images, the method needs only one pair of reference images and is therefore better suited to areas where remote sensing data availability is poor. It achieves higher fusion accuracy in farmland areas with higher heterogeneity, helps avoid cloud interference in areas with poor remote sensing image availability, and supports monitoring of all key vegetation growth periods.
2. Test results on MODIS images and GF-1WFV data sets show that the algorithm provided by the invention can better reconstruct space details of ground features, has higher precision and is suitable for farmlands or other areas with higher heterogeneity.
In addition, corresponding to the provided remote sensing image space-time fusion method based on deep learning, the invention also provides the following implementation structure:
a remote sensing image space-time fusion system based on deep learning comprises:
and the data acquisition module is used for acquiring remote sensing image data of the area to be predicted. The remote sensing image data comprises high-resolution remote sensing data and low-resolution remote sensing data.
And the preprocessing module is used for preprocessing the remote sensing image data.
And the image fusion module is used for inputting the preprocessed remote sensing image data into the space-time fusion network model to obtain a fusion image. The space-time fusion network model is obtained by training a deep learning network by adopting time-phase remote sensing image historical data.
An electronic device, comprising:
a memory for storing computer control instructions.
And the processor is connected with the memory and used for calling and executing the computer control instruction so as to enable the electronic equipment to execute the provided remote sensing image space-time fusion method based on deep learning.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (8)

1. A remote sensing image space-time fusion method based on deep learning is characterized by comprising the following steps:
acquiring remote sensing image data of a region to be predicted; the remote sensing image data comprises high-resolution remote sensing data and low-resolution remote sensing data;
preprocessing the remote sensing image data;
inputting the preprocessed remote sensing image data into a space-time fusion network model to obtain a fusion image; the space-time fusion network model is obtained by training a deep learning network by adopting time-phase remote sensing image historical data.
2. The deep learning-based remote sensing image spatiotemporal fusion method according to claim 1, wherein the spatiotemporal fusion network model comprises a plurality of branches;
each branch comprises four convolution layers which are connected in sequence; a ReLU activation function layer is arranged after each of the first three convolution layers;
residual learning is introduced, the input and the output of each branch are integrated, and then weighting processing is carried out to obtain a fused image.
3. The remote sensing image space-time fusion method based on deep learning of claim 2, wherein the sizes of the four sequentially connected convolution layers are respectively: 32 × 9 × 9, 32 × 5 × 5, 32 × 5 × 5, and 4 × 5 × 5, where 32 and 4 are the numbers of channels of the convolution kernels and 9 and 5 are the spatial sizes of the convolution kernels.
4. The remote sensing image space-time fusion method based on deep learning of claim 1, wherein the preprocessing of the remote sensing image data specifically comprises:
processing the high-resolution remote sensing data to obtain first processed data; the processing comprises the following steps: radiometric calibration, atmospheric correction, and geometric correction;
mosaicking and cropping the low-resolution remote sensing data to obtain first subdata;
resampling the first subdata to obtain second subdata; the resolution of the second subdata is the same as that of the first processed data;
extracting data with the same wave band as the first processed data from the second subdata to obtain second processed data; and taking the first processed data and the second processed data as the preprocessed remote sensing image data.
5. The remote sensing image space-time fusion method based on deep learning of claim 1, wherein the process of training a deep learning network to obtain the space-time fusion network model by using time-phase remote sensing image historical data specifically comprises:
acquiring remote sensing image historical data, and preprocessing the remote sensing image historical data;
acquiring a first time phase image, a second time phase image and a third time phase image based on the preprocessed remote sensing image historical data; each time phase image comprises a first low-resolution image, a second low-resolution image, a third low-resolution image, a first high-resolution image, a second high-resolution image and a third high-resolution image;
respectively cutting the first time phase image, the second time phase image and the third time phase image to obtain first time phase image area data, second time phase image area data and third time phase image area data;
taking a first low-resolution image, a second low-resolution image and a first high-resolution image in the first time phase image region data and a first low-resolution image, a second low-resolution image and a first high-resolution image in the second time phase image region data as input sample data, and taking a second high-resolution image in the second time phase image region data as output sample data to construct a training sample data set;
training a deep learning model by adopting the training sample data set to obtain an initial space-time fusion network model;
inputting the first low-resolution image, the second low-resolution image and the first high-resolution image in the first time phase image area data and the first low-resolution image, the second low-resolution image and the first high-resolution image in the third time phase image area data into the initial space-time fusion network model as test input data;
when the difference between the test image output by the initial space-time fusion network model and the second high-resolution image in the third time phase image area data meets the preset requirement, taking the initial space-time fusion network model as the space-time fusion network model;
and when the difference between the test image output by the initial space-time fusion network model and the second high-resolution image in the third time phase image area data does not meet the preset requirement, adjusting the parameters of the initial space-time fusion network model and returning to the step of training the deep learning model with the training sample data set to obtain the initial space-time fusion network model.
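The train-test-retrain loop of claim 5 can be sketched in miniature as follows. This is a schematic only, not the patent's implementation: the `MeanModel` placeholder, the mean-absolute-difference criterion, the threshold value, and the image shapes are all assumptions.

```python
import numpy as np

class MeanModel:
    """Toy stand-in for the deep network: predicts the mean of its training target."""
    def __init__(self):
        self.value = 0.0
    def fit(self, x, y):
        self.value = float(np.mean(y))
    def predict(self, x):
        return self.value
    def adjust_parameters(self):
        pass  # a real model would retune its parameters here

def build_samples(phase_a, phase_b):
    """Input: the first/second low-res and first high-res images of two time
    phases; target: the second high-res image of the later phase."""
    x = np.stack([phase_a["low1"], phase_a["low2"], phase_a["high1"],
                  phase_b["low1"], phase_b["low2"], phase_b["high1"]])
    return x, phase_b["high2"]

def train_until_acceptable(model, phase1, phase2, phase3,
                           threshold=0.01, max_rounds=5):
    """Train on phase-1/phase-2 samples, test against phase 3, and retrain
    with adjusted parameters until the difference meets the preset requirement."""
    x_train, y_train = build_samples(phase1, phase2)
    x_test, y_test = build_samples(phase1, phase3)
    for _ in range(max_rounds):
        model.fit(x_train, y_train)                       # initial fusion model
        diff = np.abs(model.predict(x_test) - y_test).mean()
        if diff <= threshold:                             # preset requirement met
            return model
        model.adjust_parameters()                         # adjust and retrain
    return model

# Tiny synthetic phases: all-zero inputs, constant 0.5 targets.
img = np.zeros((8, 8))
make_phase = lambda: {"low1": img, "low2": img, "high1": img,
                      "high2": np.full((8, 8), 0.5)}
model = train_until_acceptable(MeanModel(), make_phase(), make_phase(), make_phase())
```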
6. The remote sensing image space-time fusion method based on deep learning of claim 5, wherein, in training the deep learning network with time-phase remote sensing image historical data to obtain the space-time fusion network model, the number of training iterations is set to 100, the batch size is set to 16, and the learning rate is set to 0.02.
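Only the three numbers in claim 6 come from the patent; a hedged sketch of how they map onto a conventional training configuration, with an illustrative step-count helper (the sample count of 1000 is hypothetical):

```python
# Hyperparameters fixed by claim 6; everything else is illustrative.
CONFIG = {
    "epochs": 100,        # number of data iterations
    "batch_size": 16,
    "learning_rate": 0.02,
}

def steps_per_epoch(num_samples, batch_size):
    """Ceiling division: how many mini-batches one pass over the data takes."""
    return -(-num_samples // batch_size)

# e.g. 1000 training patches: ceil(1000 / 16) = 63 steps per epoch
steps = steps_per_epoch(1000, CONFIG["batch_size"])
total = steps * CONFIG["epochs"]
```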
7. A remote sensing image space-time fusion system based on deep learning, characterized by comprising:
the data acquisition module is used for acquiring remote sensing image data of the area to be predicted; the remote sensing image data comprises high-resolution remote sensing data and low-resolution remote sensing data;
the preprocessing module is used for preprocessing the remote sensing image data;
the image fusion module is used for inputting the preprocessed remote sensing image data into a space-time fusion network model to obtain a fusion image; the space-time fusion network model is obtained by training a deep learning network by adopting time-phase remote sensing image historical data.
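The three modules of claim 7 form a thin acquire-preprocess-fuse pipeline; a hypothetical sketch (the class and method names are assumptions, and the trained fusion network is replaced by a pass-through placeholder):

```python
class SpatioTemporalFusionSystem:
    """Data acquisition -> preprocessing -> fusion, mirroring claim 7's modules."""
    def __init__(self, model):
        self.model = model  # trained space-time fusion network (placeholder here)

    def acquire(self, region):
        # Data acquisition module: return high- and low-resolution data.
        return {"high": f"{region}-high", "low": f"{region}-low"}

    def preprocess(self, data):
        # Preprocessing module: e.g. registration, resampling, band matching.
        return data

    def fuse(self, region):
        # Image fusion module: feed preprocessed data into the model.
        return self.model(self.preprocess(self.acquire(region)))

# Placeholder "model" that just concatenates its inputs' labels.
system = SpatioTemporalFusionSystem(model=lambda d: d["high"] + "+" + d["low"])
fused = system.fuse("tile42")
```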
8. An electronic device, comprising:
a memory for storing computer control instructions;
a processor, connected to the memory, configured to retrieve and execute the computer control instructions so that the electronic device performs the remote sensing image space-time fusion method based on deep learning according to any one of claims 1 to 6.
CN202211374179.8A 2022-11-04 2022-11-04 Remote sensing image space-time fusion method and system based on deep learning and electronic equipment Pending CN115601281A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211374179.8A CN115601281A (en) 2022-11-04 2022-11-04 Remote sensing image space-time fusion method and system based on deep learning and electronic equipment

Publications (1)

Publication Number Publication Date
CN115601281A true CN115601281A (en) 2023-01-13

Family

ID=84852256

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211374179.8A Pending CN115601281A (en) 2022-11-04 2022-11-04 Remote sensing image space-time fusion method and system based on deep learning and electronic equipment

Country Status (1)

Country Link
CN (1) CN115601281A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117036889A (en) * 2023-08-22 2023-11-10 黑龙江省网络空间研究中心(黑龙江省信息安全测评中心、黑龙江省国防科学技术研究院) MLP-based remote sensing image fusion method
CN117216480A (en) * 2023-09-18 2023-12-12 宁波大学 Near-surface ozone remote sensing estimation method for deep coupling geographic space-time information
CN117593658A (en) * 2023-11-21 2024-02-23 岭南师范学院 BP neural network-based earth surface high-resolution methane inversion method and application method



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination