CN113409413A - Time-series image reconstruction method based on gated convolution-long short-term memory network - Google Patents

Time-series image reconstruction method based on gated convolution-long short-term memory network

Info

Publication number
CN113409413A
Authority
CN
China
Prior art keywords
image
time
convolution
long
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110620359.9A
Other languages
Chinese (zh)
Other versions
CN113409413B (en)
Inventor
Wu Wei (吴炜)
Xie Yuchen (谢煜晨)
Wu Ning (吴宁)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Quzhou Southeast Feishi Technology Co., Ltd.
Southeast Digital Economic Development Research Institute
Original Assignee
Quzhou Southeast Feishi Technology Co., Ltd.
Southeast Digital Economic Development Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Quzhou Southeast Feishi Technology Co., Ltd. and Southeast Digital Economic Development Research Institute
Priority to CN202110620359.9A
Publication of CN113409413A
Application granted
Publication of CN113409413B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 - 2D [Two Dimensional] image generation
    • G06T 11/003 - Reconstruction from projections, e.g. tomography
    • G06T 11/005 - Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/049 - Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods

Abstract

The invention provides a time-series image reconstruction method using a gated convolution-long short-term memory (LSTM) network, aimed at exploiting the ability of time-series remote sensing images to describe the temporal and spatial variation of the land surface. The network is based on a generative adversarial architecture. First, a feature-extraction model is trained with the time-series images; the model uses gated convolution combined with an LSTM network to extract the temporal and spatial variation of the time series. The generated results and the real images are then input to a classifier, which judges whether its input is a generated result or a real image; through the adversarial training of the two, their joint optimization is realized. Finally, the image to be reconstructed is input to the generator, and the missing part of the image is predicted from the valid part to generate a gap-free image.

Description

Time-series image reconstruction method based on gated convolution-long short-term memory network
Technical Field
The invention relates to a time-series image reconstruction method based on a gated convolution-long short-term memory network.
Background
Time-series images are formed by arranging multiple images of a study area in temporal order; they describe how the surface features of the study area change over time. However, on some dates the images are affected by cloud, cloud shadow and other factors, so that the data of the corresponding areas are missing. The features extracted from the time series are therefore missing in some dimensions, the features cannot be aligned accurately, and subsequent image classification and information extraction become difficult. Missing-region filling restores the missing parts of an image according to certain rules to obtain a gap-free image. According to the dimensions of the information used, missing-region filling methods for remote sensing data can be divided into spatial, temporal, spectral and spatio-temporal methods.
Since the missing regions of a remote sensing image usually cover only part of the scene, the missing regions can be predicted from the valid regions of the image to obtain a gap-free image. According to whether a reference image is used, such methods are divided into single-date filling and multi-date filling. Single-date filling extracts structural information such as gradients from the image and propagates it from the valid regions to the unknown regions to fill the gaps; multi-date filling maps information from a reference image onto the target image according to a certain method. Because images acquired at different times differ in radiometric characteristics, directly pasting pixels from the reference produces color differences between the reconstructed blocks and the original blocks, i.e., color inconsistency. Another approach uses only one image as reference and copies similar regions of the same image to fill the missing regions.
Missing-feature reconstruction based on the time dimension regards each pixel as a feature description that changes over time; the feature can be reflectance, DN value, NDVI and so on, and the missing values are reconstructed by describing the change of the feature along the time dimension according to certain rules. Common methods such as mean filling, previous-value filling and next-value filling work well on stationary time series, but perform poorly on land covers with periodic or abrupt changes. Another approach selects a time-series characteristic as the criterion for choosing similar pixels, i.e., change curves with the same or similar characteristics are selected, and the reference time-series curve is used to reconstruct the missing region.
Remote sensing images exhibit both spatial and temporal correlation. A good reconstruction result should reflect the texture details of the local area and blend with the surrounding non-missing areas, and it should also reflect the temporal variation of the image. Methods that fuse the temporal and spatial domains attempt to use both kinds of information to reconstruct the missing regions. One method divides the image into homogeneous image blocks by multi-scale segmentation; since the blocks consist of similar pixels, they describe the spatial similarity of image pixels. Typical methods include: clustering pixels into superpixels with similar spectra (Zhou Ya'nan, Yang Xianzeng, Feng Li, Wu Wei, Wu Tianjun, Luo Jiancheng, Zhou Xiaocheng, Zhang Xin. Superpixel-based time-series reconstruction for optical images incorporating SAR data using autoencoder networks [J]. GIScience & Remote Sensing, 2020, 57(8): 1005-1025); partitioning by multiple temporal phases so that the partitions share similar change patterns (Wu Wei, Ge Luoqi, Luo Jiancheng, Huang Ruohong, Yang Yingpin. A Spectral-Temporal Patch-Based Missing Area Reconstruction for Time-Series Images [J]. Remote Sensing, 2018, 10(10): 1560); and, on this basis, reconstructing the whole missing region by modeling the temporal change of the image blocks. Methods based on time-series similarity and statistics are widely used for time-series reconstruction, where the Euclidean distance and the correlation coefficient are two common measures of the correlation between time series.
From the above process it can be seen that these methods treat the spatial and temporal domains separately: pixels are first clustered into superpixels, or segments are obtained by image segmentation; then an LSTM is used for time-series modeling to describe the temporal variation of the image. The clustering and the temporal modeling are performed independently, and the method assumes that the interior of a superpixel or segment is homogeneous, which differs considerably from reality. A better approach is to optimize the temporal and spatial change patterns jointly, so that the model describes both the spatial and the temporal variation of the land surface, while the spatio-temporal distribution of the surface is described by nonlinear models such as convolutions.
Disclosure of Invention
In order to make full use of the spatio-temporal characteristics exhibited by time-series images, the invention provides a time-series image reconstruction method based on a gated convolution-long short-term memory network.
Assume that a group of n remote sensing images acquired at different times is arranged in temporal order to form a time-series image:

I = <I_1, I_2, ..., I_n>   (1)

where < > denotes an ordered set, i.e., the elements of the sequence are ordered.
The data state of each area of the remote sensing image is represented by a mask M. The mask indicates whether each pixel of the image is missing: 0 represents a missing value and 1 a valid value. The mask data are obtained with a cloud-masking algorithm or by manual visual interpretation. An image I_j covering the same area is input; the image comes from the time series or from another date, part of its area is missing, and the missing area needs to be filled.
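For illustration only, the sequence I and mask M can be represented as five-dimensional tensors; the layout below (batch, bands, time, height, width) is an assumption consistent with the 3D convolutions used later, not something the patent specifies:

```python
import torch

# Illustration only (not from the patent): one possible tensor layout for a
# time series of n images with C bands and its validity mask.
n, C, H, W = 10, 4, 32, 32
I = torch.rand(1, C, n, H, W)    # batch x bands x time x height x width
M = torch.ones(1, 1, n, H, W)    # mask M: 1 = valid pixel, 0 = missing
M[:, :, 3, 8:20, 8:20] = 0       # e.g. a cloud-covered patch on date 3
I_masked = I * M                 # missing pixels zeroed before network input
```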
The invention adopts a generative adversarial network structure: a model G is trained with the time-series images I, and the model describes the temporal and spatial variation of the time series. Meanwhile, the generated results and the real images are input to a classifier D, which judges whether its input is a generated result or a real image; through the adversarial training of the two, their joint optimization is realized. Then the image I_j to be reconstructed is input to the generator G, which predicts the missing part of the image from I_j and generates a gap-free image.
The method comprises the following key steps:
Step 1: Generator training
Substep 1: GatedConvLSTM-based time-series image feature extraction
Input the image mask M and the time-series image I, and extract the time-series features F_1^T and the hidden features H_1 of the image through the convolutional network GatedConvLSTM_1:

F_1^T, H_1 = GatedConvLSTM_1(I, M)   (2)

where the subscript 1 of GatedConvLSTM_1 indicates the level of the network.
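The patent gives no implementation of the GatedConvLSTM unit. The PyTorch sketch below shows one plausible way to combine gated convolution with a ConvLSTM cell; the class names, activations, single-layer structure and channel counts are assumptions, not the authors' code:

```python
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """Gated convolution: a feature branch modulated by a learned sigmoid
    gate, so the network can softly ignore missing pixels."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.feat = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.gate = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)

    def forward(self, x):
        return self.feat(x) * torch.sigmoid(self.gate(x))

class GatedConvLSTMCell(nn.Module):
    """ConvLSTM cell whose input/state transform is a gated convolution."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        # One gated convolution produces the i, f, o, g pre-activations.
        self.conv = GatedConv2d(in_ch + hid_ch, 4 * hid_ch, k)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.split(self.conv(torch.cat([x, h], dim=1)),
                                 self.hid_ch, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

class GatedConvLSTM(nn.Module):
    """Runs the cell along the time axis of a (B, C, T, H, W) tensor and
    returns the per-step features F and the final hidden state H."""
    def __init__(self, in_ch, hid_ch):
        super().__init__()
        self.hid_ch = hid_ch
        self.cell = GatedConvLSTMCell(in_ch, hid_ch)

    def forward(self, x, state=None):
        B, _, T, H, W = x.shape
        if state is None:
            z = x.new_zeros(B, self.hid_ch, H, W)
            state = (z, z)
        outs = []
        for t in range(T):
            h, state = self.cell(x[:, :, t], state)
            outs.append(h)
        return torch.stack(outs, dim=2), state

# e.g. a masked 4-band series of length 10, with the mask as a fifth band
F1, H1 = GatedConvLSTM(5, 32)(torch.rand(1, 5, 10, 32, 32))
```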
Substep 2: GatedConv3d-based image downsampling
The extracted features F_1^T are downsampled in spatial resolution by a three-dimensional gated convolution (GatedConv3d) to obtain the downsampled features F_2^C:

F_2^C = GatedConv3d_1(F_1^T)   (3)

Applying substeps 1 and 2 again at the next scale gives:

F_2^T, H_2 = GatedConvLSTM_2(F_2^C)   (4)
F_3^C = GatedConv3d_2(F_2^T)   (5)
F_3^T, H_3 = GatedConvLSTM_3(F_3^C)   (6)
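A gated 3D convolution can be sketched as a feature branch modulated by a sigmoid gate; the stride-2 spatial downsampling matches the role of GatedConv3d in substep 2, while the kernel size and activation below are assumptions:

```python
import torch
import torch.nn as nn

class GatedConv3d(nn.Module):
    """Gated 3D convolution: feature branch times a sigmoid gate. A spatial
    stride of 2 halves H and W while leaving the time axis intact."""
    def __init__(self, in_ch, out_ch, k=(3, 3, 3), stride=(1, 2, 2)):
        super().__init__()
        pad = tuple(s // 2 for s in k)
        self.feat = nn.Conv3d(in_ch, out_ch, k, stride=stride, padding=pad)
        self.gate = nn.Conv3d(in_ch, out_ch, k, stride=stride, padding=pad)

    def forward(self, x):
        return torch.relu(self.feat(x)) * torch.sigmoid(self.gate(x))

# e.g. (B, 32, T, 32, 32) -> (B, 64, T, 16, 16)
down = GatedConv3d(32, 64)
print(down(torch.rand(2, 32, 10, 32, 32)).shape)  # torch.Size([2, 64, 10, 16, 16])
```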
substep 3: multi-scale hole feature fusion based on GatedConv3d
Extracting the feature F3 TThe depth fusion feature F was obtained by GatedConv3d fusion of four void ratios (scaled) of 2, 4, 6, 8, respectivelyB
FB=DilatedGatedConv3ds(F3 T)
The substeps 2, 3 correspond to a coding process, i.e. the coding of time-series images is characterized.
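One plausible reading of the four-rate dilated fusion, sketched with parallel gated branches whose outputs are summed (the patent does not state how the branches are merged, so the summation is an assumption):

```python
import torch
import torch.nn as nn

class DilatedGatedConv3ds(nn.Module):
    """Four parallel gated 3D convolutions with spatial dilation rates
    2, 4, 6, 8; their outputs are summed into the fusion features F^B."""
    def __init__(self, ch, rates=(2, 4, 6, 8)):
        super().__init__()
        self.branches = nn.ModuleList()
        for r in rates:
            pad = (1, r, r)  # keeps T, H, W sizes for k=3 and dilation r
            self.branches.append(nn.ModuleDict({
                "feat": nn.Conv3d(ch, ch, 3, padding=pad, dilation=(1, r, r)),
                "gate": nn.Conv3d(ch, ch, 3, padding=pad, dilation=(1, r, r)),
            }))

    def forward(self, x):
        out = 0
        for b in self.branches:
            out = out + torch.relu(b["feat"](x)) * torch.sigmoid(b["gate"](x))
        return out

fb = DilatedGatedConv3ds(64)(torch.rand(1, 64, 10, 8, 8))
print(fb.shape)  # torch.Size([1, 64, 10, 8, 8])
```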
Substep 4: image feature connection
Depth fusion feature FBFeature F obtained by comparison with the previous GatedConvLSTM3 TCombining and inputting gated convolution to obtain pre-upsampled features F4 C
F4 C=GatedConv3d3(Concat(F3 T,FB))
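In tensor terms, the Concat of equation (8) joins the two feature maps along the channel axis before the gated convolution; a sketch with assumed shapes:

```python
import torch

# F_3^T (skip features) and F^B (fusion features) share one shape; the
# particular sizes below are assumptions for illustration.
F3_T = torch.rand(1, 64, 10, 8, 8)          # (B, C, T, H, W)
FB = torch.rand(1, 64, 10, 8, 8)
F4_C_input = torch.cat([F3_T, FB], dim=1)   # -> (1, 128, 10, 8, 8)
print(F4_C_input.shape)
```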
Substep 5: image feature upsampling
Pre-upsampling feature F4 CObtaining hidden features after processing with GatedConvLSTM of the same scale
Figure BDA0003099639690000041
Input to the current GatedConvLSTM to maintain the temporal dimension of the association over the network features.
Figure BDA0003099639690000042
Will be characterized by
Figure BDA0003099639690000043
Two times amplified by the upsampling gated convolution.
Figure BDA0003099639690000044
The same can be obtained according to substeps 4 and 5:
Figure BDA0003099639690000045
Figure BDA0003099639690000046
F6=UpsampledGatedConv3d2(F5 T)
F6 C=GatedConv3d6(Concat(F1 T,F6))
Figure BDA0003099639690000047
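The "upsampling gated convolution" can be read, for example, as nearest-neighbor interpolation that doubles H and W followed by a gated 3D convolution; a transposed convolution would be an equally plausible reading. A sketch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpsampledGatedConv3d(nn.Module):
    """Nearest interpolation that doubles H and W, then a gated 3D conv."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.feat = nn.Conv3d(in_ch, out_ch, 3, padding=1)
        self.gate = nn.Conv3d(in_ch, out_ch, 3, padding=1)

    def forward(self, x):
        x = F.interpolate(x, scale_factor=(1, 2, 2), mode="nearest")
        return torch.relu(self.feat(x)) * torch.sigmoid(self.gate(x))

# e.g. (B, 64, T, 8, 8) -> (B, 32, T, 16, 16)
up = UpsampledGatedConv3d(64, 32)
print(up(torch.rand(1, 64, 10, 8, 8)).shape)  # torch.Size([1, 32, 10, 16, 16])
```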
substep 6: time series image generation
After the structure and the steps of up-sampling and down-sampling, the repaired time series image I is output by using gated convolutionout
Figure BDA0003099639690000048
Step 2: classifier training
The real time series image and the generated image are input to a classifier C, and whether the image is real or an image synthesized by the classifier is judged.
The invention uses a multi-temporal spectral local discriminator which makes full use of the information of the temporal-spatial features and the time dimension. It consists of 3d convolutions with 6 convolution kernels of 3 x 5 and step sizes of 1 x 2.
Each convolutional layer of the discriminator uses spectral normalization to stabilize the training.
In addition, we use least squares to generate a training form of the countermeasure network, and the optimization function of the discriminator is as follows:
Figure BDA0003099639690000051
Figure BDA0003099639690000052
wherein G represents a generator; d represents a discriminator; z represents the input deletion sequence; both a and c represent true, denoted by 1, and b represents false.
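The following sketch puts the stated pieces together in PyTorch: six 3D convolutions with kernel 3 × 5 × 5, stride 1 × 2 × 2 and spectral normalization, plus the least-squares losses of equations (17) and (18). The channel widths, the LeakyReLU and the input band count are assumptions:

```python
import torch
import torch.nn as nn

def sn_block(in_ch, out_ch):
    # 3D convolution with kernel 3x5x5 and stride 1x2x2 as stated in the
    # text, wrapped in spectral normalization to stabilize training.
    return nn.Sequential(
        nn.utils.spectral_norm(
            nn.Conv3d(in_ch, out_ch, (3, 5, 5), stride=(1, 2, 2),
                      padding=(1, 2, 2))),
        nn.LeakyReLU(0.2),
    )

class Discriminator(nn.Module):
    """Sketch of the multi-temporal-spectral local discriminator: six
    spectrally normalized 3D conv layers."""
    def __init__(self, in_ch=4):
        super().__init__()
        chs = [in_ch, 32, 64, 64, 128, 128, 128]
        self.net = nn.Sequential(*[sn_block(chs[i], chs[i + 1])
                                   for i in range(6)])

    def forward(self, x):
        return self.net(x)   # patch-wise (local) real/fake scores

def lsgan_d_loss(d_real, d_fake, a=1.0, b=0.0):
    # Equation (17): push D(x) toward a (= 1, real), D(G(z)) toward b (= 0).
    return 0.5 * ((d_real - a) ** 2).mean() + 0.5 * ((d_fake - b) ** 2).mean()

def lsgan_g_loss(d_fake, c=1.0):
    # Equation (18): push D(G(z)) toward c (= 1) so fakes score as real.
    return 0.5 * ((d_fake - c) ** 2).mean()

scores = Discriminator(4)(torch.rand(1, 4, 10, 64, 64))  # -> (1, 128, 10, 1, 1)
```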
Step 3: Reconstruction of the missing regions
The image I_j with partially missing regions is input to the generator G trained in step 1 to obtain the time-series reconstructed image I_out.
The advantages of the invention are: 1. The time-series images are modeled with an LSTM network, so that the reconstruction result can describe the temporal variation of the land surface;
2. In contrast to conventional convolution, which treats all pixels as valid, gated convolution explicitly models the missing pixels and can therefore extract features from missing regions more effectively.
3. The invention fuses LSTM and gated convolution, so that the temporal and the spatial characteristics are modeled jointly, giving better reconstruction accuracy.
4. The invention adopts generative adversarial training, and the reconstruction result is improved through the joint optimization of the generator network and the adversarial network.
Drawings
FIG. 1 is a flow chart of the time-series image reconstruction method.
FIG. 2 is a diagram of the network structure for time-series image reconstruction.
FIG. 3 shows the time-series images before reconstruction.
FIG. 4 shows the time-series images after reconstruction.
Detailed Description
This embodiment describes an implementation of the invention with reference to the flow chart of FIG. 1 and the network structure diagram of FIG. 2. A time-series image I is constructed from 18 Sentinel-2 scenes acquired between March and August 2019; the study area is Shou County, Anhui, China.
First, the 18 scenes are preprocessed with atmospheric correction and cloud/shadow removal. Atmospheric correction uses the Sen2Cor software; for cloud and shadow removal, a cloud/shadow mask M is generated with the Fmask 4 software and clear pixels are extracted according to the mask. FIG. 3 shows the time-series image I with partially missing regions.
The data set consists of the cloud-free parts of the time series, divided into 32 × 32 blocks with a time length of 10, 8811 blocks in total. The training and test sets are split randomly at a ratio of 9:1. The ground-truth target is the originally divided block, and the input sequence has data removed at random times and random positions.
The training data are augmented by random rotation. The generator and classifier are trained jointly in the generative adversarial manner for 200 epochs with a batch size of 8. Both are optimized with Adam; the initial learning rate of the generator is set to 0.001 and that of the classifier to 0.0005, and both are decayed by 0.5 every 20 epochs.
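The optimizer and schedule just described can be set up as follows (a sketch; the two nn.Linear modules are stand-ins that only make the snippet self-contained, in practice they would be the generator and discriminator networks):

```python
import torch
import torch.nn as nn

generator = nn.Linear(8, 8)    # stand-in for the generator G
classifier = nn.Linear(8, 8)   # stand-in for the discriminator D

opt_g = torch.optim.Adam(generator.parameters(), lr=0.001)
opt_d = torch.optim.Adam(classifier.parameters(), lr=0.0005)
# "decayed by 0.5 every 20 epochs":
sch_g = torch.optim.lr_scheduler.StepLR(opt_g, step_size=20, gamma=0.5)
sch_d = torch.optim.lr_scheduler.StepLR(opt_d, step_size=20, gamma=0.5)

for epoch in range(200):   # 200 epochs, batch size 8 per the text
    # ... one adversarial pass over the 32 x 32 training blocks ...
    sch_g.step()
    sch_d.step()
```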
Step 1: Generator training
Substep 1: GatedConvLSTM-based time-series image feature extraction
According to equation (2), the time-series image I and the cloud/shadow mask M are input, and the time-series features F_1^T and the hidden features H_1 of the image are extracted through the convolutional network GatedConvLSTM_1.
Substep 2: GatedConv3d-based image downsampling
The extracted features F_1^T are downsampled in spatial resolution by the three-dimensional gated convolution (GatedConv3d) to obtain the downsampled features F_2^C; the input of this step is the convolutional feature F_1^T extracted in the previous step, as in equation (3).
Applying substeps 1 and 2 again gives equations (4), (5) and (6).
Substep 3: multi-scale hole feature fusion based on GatedConv3d
Equation (7) willExtracting the feature F3 TThe deep fusion characteristic F is obtained by four GatedConv3d fusions with the voidage rates of 2, 4, 6 and 8 respectivelyB
Substep 4: image feature connection
Depth fusion feature FBFeature F obtained by comparison with the previous GatedConvLSTM3 TCombining and inputting gated convolution to obtain pre-upsampled features F4 CAs in equation (8).
Substep 5: image feature upsampling
Pre-upsampling feature F as in equation (9)4 CObtaining hidden features after processing with GatedConvLSTM of the same scale
Figure BDA0003099639690000071
Input to the current GatedConvLSTM to maintain the temporal dimension of the association over the network features.
Characteristic F as shown in formula (10)4 TTwo times amplified by the upsampling gated convolution.
The same applies to substeps 4 and 5, as given by equations (11), (12), (13), (14) and (15).
Substep 6: time series image generation
After the structure and steps similar to encoding-decoding, the repaired time series image I is finally output by using gated convolutionoutAs in equation (16).
Step 2: classifier training
Inputting the real image and the image generated by the algorithm into a classifier D, and judging whether the image is real or the image synthesized by the classifier. In this regard, we use a multi-temporal spectral local discriminator that fully exploits the information in the spatiotemporal features and the time dimension. It consists of 3d convolutions with 6 convolution kernels of 3 x 5 and step sizes of 1 x 2. Each convolutional layer of the discriminator uses spectral normalization to stabilize the training. Whether the final output image is real or generator-synthesized.
In addition, we use the training of least squares GAN, and the optimization function of the discriminator is as shown in equations (17) and (18).
Step 3: Reconstruction of the missing regions
The image I_j with partially missing regions is input to the generator trained in step 1 to obtain the image without missing regions.
The missing regions in the time-series image I are filled in sequence by the above method; the resulting gap-free time-series image is shown in FIG. 4. The filled regions clearly reflect the spatial texture distribution of the ground objects and also reflect the temporal changes of the image sequence, indicating that the method can accurately describe both the temporal and the spatial variation of the ground objects.
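A minimal sketch of applying step 3 at inference time, assuming a trained generator that maps the masked series and its mask to a complete series (the function and its signature are hypothetical, not the patent's API):

```python
import torch

@torch.no_grad()
def reconstruct(generator, I, M):
    """Fill the missing pixels of a time series I (mask M: 1 = valid,
    0 = missing) with the generator's prediction, keeping valid pixels."""
    I_out = generator(I * M, M)        # predicted complete series
    return I * M + I_out * (1 - M)     # valid pixels kept, gaps filled
```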
The foregoing merely describes embodiments of the invention and is not intended to limit the scope of the invention to the particular forms set forth; on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the invention as defined by the appended claims.

Claims (7)

1. A time-series image reconstruction method based on a gated convolution-long short-term memory network, characterized in that a time-series image I with partially missing regions and a mask M identifying the missing regions are input, and that the method comprises the following key steps:
Step 1: training a generator;
Step 2: training a classifier;
Step 3: reconstructing the missing regions.
2. The method for reconstructing time-series images based on a gated convolution-long short-term memory network according to claim 1, wherein step 1 comprises:
Substep 1: GatedConvLSTM-based time-series image feature extraction
Input the image mask M and the time-series image I, and extract the time-series features F_1^T and the hidden features H_1 of the image through the convolutional network GatedConvLSTM_1:
F_1^T, H_1 = GatedConvLSTM_1(I, M)
where the subscript 1 of GatedConvLSTM_1 indicates the level of the network.
Substep 2: GatedConv3d-based image downsampling
The extracted features F_1^T are downsampled in spatial resolution by a three-dimensional gated convolution (GatedConv3d) to obtain the downsampled features F_2^C:
F_2^C = GatedConv3d_1(F_1^T)
Applying substeps 1 and 2 again at the next scale gives:
F_2^T, H_2 = GatedConvLSTM_2(F_2^C)
F_3^C = GatedConv3d_2(F_2^T)
F_3^T, H_3 = GatedConvLSTM_3(F_3^C)
substep 3: multi-scale hole feature fusion based on GatedConv3d
Extracting the feature
Figure FDA00030996396800000110
The depth fusion feature F was obtained by GatedConv3d fusion of four void ratios (scaled) of 2, 4, 6, 8, respectivelyB
Figure FDA00030996396800000111
Sub-steps 2, 3 correspond to a coding process, i.e. the coding of time-series images is characterized,
substep 4: image feature connection
Depth fusion feature FBFeatures obtained by comparison with previous GatedConvLSTM
Figure FDA00030996396800000112
Combining and inputting gated convolutions to obtain pre-upsampled features
Figure FDA00030996396800000113
Figure FDA00030996396800000114
Substep 5: image feature upsampling
Pre-upsampling features
Figure FDA0003099639680000021
Obtaining hidden features after processing with GatedConvLSTM of the same scale
Figure FDA0003099639680000022
Input to the current GatedConvLSTM, to maintain the time dimension of the association over the network features,
Figure FDA0003099639680000023
will be characterized by
Figure FDA0003099639680000024
Two times amplified by the upsampling gated convolution.
Figure FDA0003099639680000025
The same can be obtained according to substeps 4 and 5:
Figure FDA0003099639680000026
Figure FDA0003099639680000027
Figure FDA0003099639680000028
Figure FDA0003099639680000029
Figure FDA00030996396800000210
substep 6: time series image generation
After the structure and the steps of up-sampling and down-sampling, the repaired time series image I is output by using gated convolutionout
Figure FDA00030996396800000211
3. The method for reconstructing time-series images based on a gated convolution-long short-term memory network according to claim 1 or 2, wherein step 2 comprises: inputting the real time-series images and the generated images to a classifier D, which judges whether an image is real or synthesized by the generator.
4. The method for reconstructing time-series images based on a gated convolution-long short-term memory network according to claim 3, wherein the judgment is made by a multi-temporal-spectral local discriminator consisting of six 3D convolutional layers with kernel size 3 × 5 × 5 and stride 1 × 2 × 2; each convolutional layer of the discriminator uses spectral normalization to stabilize training; in addition, the least-squares form of generative adversarial network training is used, and the optimization functions of the discriminator and the generator are:
min_D L_D = 1/2 E_x[(D(x) - a)^2] + 1/2 E_z[(D(G(z)) - b)^2]
min_G L_G = 1/2 E_z[(D(G(z)) - c)^2]
where G denotes the generator; D denotes the discriminator; z denotes the input sequence with missing data; a and c both represent "true", denoted by 1, and b represents "false".
5. The method for reconstructing time-series images based on a gated convolution-long short-term memory network according to claim 1 or 2, wherein step 3 comprises: inputting the image I_j with partially missing regions to the generator G trained in step 1 to obtain the time-series reconstructed image I_out.
6. The method according to claim 2, wherein in substep 1 the gated convolution and the long short-term memory network extract the spatial and the temporal features of the image, respectively.
7. The method for reconstructing time-series images based on a gated convolution-long short-term memory network according to claim 2, wherein substeps 2 and 5 of step 1 adopt downsampling and upsampling, respectively, to fuse image information at multiple scales.
CN202110620359.9A, filed 2021-06-03: Time-series image reconstruction method based on gated convolution-long short-term memory network. Active. Granted as CN113409413B.

Priority Applications (1)

Application Number: CN202110620359.9A
Priority Date / Filing Date: 2021-06-03
Title: Time-series image reconstruction method based on gated convolution-long short-term memory network


Publications (2)

Publication Number / Publication Date
CN113409413A: 2021-09-17
CN113409413B: 2024-04-19

Family

ID=77676130

Family Applications (1)

Application Number: CN202110620359.9A (priority/filing date 2021-06-03), Active, granted as CN113409413B
Title: Time-series image reconstruction method based on gated convolution-long short-term memory network

Country Status (1)

Country: CN (CN113409413B)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020234449A1 (en) * 2019-05-23 2020-11-26 Deepmind Technologies Limited Generative adversarial networks with temporal and spatial discriminators for efficient video generation
CN111669373A (en) * 2020-05-25 2020-09-15 山东理工大学 Network anomaly detection method and system based on space-time convolutional network and topology perception
CN112288647A (en) * 2020-10-13 2021-01-29 武汉大学 Remote sensing image cloud and shadow restoration method based on gating convolution

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Yanheng; Gao Lianru; Chen Zhengchao; Zhang Bing: "Change detection of high-resolution remote sensing imagery combining deep learning and superpixels", Journal of Image and Graphics (中国图象图形学报), no. 06 *

Also Published As

Publication number Publication date
CN113409413B (en) 2024-04-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant