CN108876754A - A kind of remote sensing images missing data method for reconstructing based on depth convolutional neural networks - Google Patents


Info

Publication number
CN108876754A
Authority
CN
China
Prior art keywords
convolution
data
information
image
missing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201810549450.4A
Other languages
Chinese (zh)
Inventor
夏春秋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Vision Technology Co Ltd
Original Assignee
Shenzhen Vision Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Vision Technology Co Ltd filed Critical Shenzhen Vision Technology Co Ltd
Priority to CN201810549450.4A priority Critical patent/CN108876754A/en
Publication of CN108876754A publication Critical patent/CN108876754A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention proposes a remote sensing image missing data reconstruction method based on a deep convolutional neural network. Its main components are: fusion of multi-source data, a multi-scale convolutional feature extraction unit, dilated convolution, enhancement of temporal-spatial-spectral information, and skip connections. The process is as follows: two different types of data are first input, and feature maps are generated by convolution; a multi-scale convolutional feature extraction unit then extracts additional features, which are fused; dilated convolutions with different dilation factors realize the same filtering; a residual mapping is defined to construct the missing region, with gradient descent as the optimization method; skip connections transfer feature information between layers to preserve image detail; finally, the mean squared error of the loss function is computed and minimized, and the true data is obtained once the loss converges. The invention can be applied to the recovery of different types of lost information, improving visual quality, reconstruction accuracy, and working efficiency.

Description

Remote sensing image missing data reconstruction method based on deep convolutional neural network
Technical Field
The invention relates to the field of missing data reconstruction, in particular to a remote sensing image missing data reconstruction method based on a deep convolutional neural network.
Background
Remote sensing image interpretation uses a computer to analyze the spectral and spatial information of ground objects in a remote sensing image, select features, partition the feature space into complementary and overlapping subspaces by suitable means, and assign each pixel in the image to one of those subspaces. Remote sensing images with high temporal and spatial resolution play an important role in applications such as land-use change detection, dynamic monitoring, and rapid surface change detection. In the military domain they support reconnaissance and surveillance; in geology and mineral exploration they are mainly applied to basic geological work, mineral and engineering geology, seismic geology, and comprehensive disaster geology surveys; in agriculture and forestry they are mainly used for land resource and land-use surveys, investigation and monitoring of plant diseases and pests, soil drought, salinization and desertification, crop growth monitoring and yield estimation, and forest resource inventory. However, internal faults of satellite sensors and the influence of severe weather cause information loss in the acquired remote sensing data, greatly reducing its usability.
As far as the current state of the art is concerned, spectrum-based methods can recover missing spectral data with high accuracy by exploiting the strong correlation between spectral bands, but they cannot deal with cloud cover, because under clouds all spectral bands are missing to some degree. Space-based methods are suitable for reconstructing small missing regions or regions with regular texture, but their reconstruction accuracy cannot be guaranteed, especially for larger regions or complex textures. In time-based methods, temporal change is the main obstacle in the reconstruction process, and registration errors between multi-temporal images also degrade the accuracy of the recovered regions. In addition, the highly nonlinear spatial relationship between multi-source remote sensing images means that high-order, expressive feature representations are crucial for reconstructing missing information, and most linear-model-based methods cannot handle complex nonlinear degradation models well.
The invention provides a remote sensing image missing data reconstruction method based on a deep convolutional neural network. Two different types of data are first input, and feature maps are generated by convolution; a multi-scale convolutional feature extraction unit then extracts additional features, which are fused; dilated convolutions with different dilation factors realize the same filtering; a residual mapping is defined to construct the missing region, with gradient descent as the optimization method; skip connections transfer feature information to preserve image detail; finally, the mean squared error of the loss function is computed and minimized, and the true data is obtained once the loss converges. The invention exploits the strong nonlinear expressive power of deep convolutional neural networks and makes full use of complementary temporal, spatial, and spectral auxiliary data to reconstruct missing data. It can be applied to the recovery of different types of lost information, improving visual quality, reconstruction accuracy, and working efficiency.
Disclosure of Invention
Aiming at the problem of reconstructing missing information in remote sensing images, the invention provides a remote sensing image missing data reconstruction method based on a deep convolutional neural network: two different types of data are first input, and feature maps are generated by convolution; a multi-scale convolutional feature extraction unit extracts additional features, which are fused; dilated convolutions with different dilation factors realize the same filtering; a residual mapping is defined to construct the missing region, with gradient descent as the optimization method; skip connections transfer feature information to preserve image detail; finally, the mean squared error of the loss function is computed and minimized, and the true data is obtained once the loss converges.
In order to solve the above problems, the present invention provides a remote sensing image missing data reconstruction method based on a deep convolutional neural network, which mainly comprises:
(I) fusion of multi-source data;
(II) a multi-scale convolutional feature extraction unit;
(III) dilated convolution;
(IV) enhancement of temporal-spatial-spectral information;
(V) skip connections.
The remote sensing image missing data reconstruction method adopts a spatial-temporal-spectral framework model based on a deep convolutional neural network (STS-CNN). Two different types of data are first input, and feature maps are generated by convolution; a multi-scale convolutional feature extraction unit then extracts additional features, which are fused; dilated convolutions with different dilation factors realize the same filtering; a residual mapping is defined to construct the missing region, with gradient descent as the optimization method; skip connections transfer feature information to preserve image detail; finally, the mean squared error of the loss function is computed and minimized, and the true data is obtained once the loss converges.
Further, to ensure that the output of the CNN is a nonlinear combination of its inputs, a nonlinear function is introduced as the activation function, and a gradient descent strategy is adopted to update the network parameters W and b. The iterative training rule is:

W ← W − α · ∂L/∂W,   b ← b − α · ∂L/∂b

where α is the learning rate of the entire network.
In the fusion of multi-source data, the data information is highly correlated with the missing region in surface and texture features, so complementary information such as spectral or spatio-temporal data can improve reconstruction accuracy. Two types of data are input to the STS-CNN: the spatial data of the missing region and supplementary information such as spectral or temporal data. Each input passes through its own 3 × 3 convolution layer, producing 30 feature maps per branch, and the two outputs are then concatenated into a 60-channel feature map consumed by the following 3 × 3 convolution.
In reconstructing the missing information of a remote sensing image, different non-local regions are often related at multiple scales, so the process depends on context information at different scales; the multi-scale convolutional feature extraction unit is introduced to extract more such features. The unit comprises three convolution operations with kernel sizes of 3 × 3, 5 × 5, and 7 × 7, applied simultaneously to the feature map of the input data, each producing a 20-channel feature map; the three feature maps are then concatenated into a 60-channel feature map, fusing the context features extracted at different scales.
For dilated convolution, the STS-CNN model adopts dilated convolutions, which enlarge the receptive field while keeping the convolution kernel size fixed, without increasing the model's computation or parameter count. Dilated convolution realizes the same filtering with different dilation factors over different ranges, and the receptive field grows exponentially with layer depth. In the reconstruction model, the dilation factors of the 3 × 3 dilated convolutions in layers 2 to 6 are set to 1, 2, 3, 2, and 1, respectively.
To preserve spatial information, the enhanced temporal-spatial-spectral structure passes the residual image between the original image and the image with the missing region to the last layer before the loss function, which is equivalent to constructing the missing region. A residual mapping is therefore defined, training image pairs are provided for the proposed model, and the mean squared error is set as the model's loss function. In addition, to reduce information distortion, the mask is passed to subsequent layers of the network in conjunction with the dilated convolutions.
Further, in the residual mapping, the model learns the relation between the different supplementary data through a residual output rather than a direct output; the learning process of a residual unit is very sparse, and the original data is more easily approximated by extracting and expressing its deep, intrinsic features. Since the input and the output are approximately identical over the complete (non-missing) area, a residual mapping is defined:

r_i = x_i − y_i

where y_i is the image lacking data information, x_i is the original undamaged image, and r_i corresponds to the missing region. Outside the missing region, most pixel values of the residual image are close to zero, so the residual feature map distribution is sparse, which makes the gradient descent process smoother.
Further, for the mean squared error, given a set of N training image pairs {(y_i, s_i, x_i)}_{i=1}^{N}, where s_i is a spectral or temporal auxiliary image and Θ denotes the network parameters, the mean squared error (MSE) of the loss function in the model is defined as:

L(Θ) = (1/N) · Σ_{i=1}^{N} ‖ F(y_i, s_i; Θ) − (x_i − y_i) ‖²
Skip connections are used in the deep convolutional neural network to solve or alleviate the vanishing/exploding gradient problem caused by increasing layer depth; a skip connection transfers the feature information of an earlier layer to a later layer, preserving image detail. In the STS-CNN model, the multi-scale convolution uses three skip connections and the dilated convolution uses two skip connections.
Drawings
FIG. 1 is a system framework diagram of a remote sensing image missing data reconstruction method based on a deep convolutional neural network.
FIG. 2 is a framework flow chart of remote sensing image missing data reconstruction based on a deep convolutional neural network.
FIG. 3 shows the enhanced temporal-spatial-spectral information structure of the remote sensing image missing data reconstruction method based on the deep convolutional neural network.
FIG. 4 shows the skip connections of the remote sensing image missing data reconstruction method based on the deep convolutional neural network.
Detailed Description
It should be noted that the embodiments and features of the embodiments in the present application can be combined with each other without conflict, and the present invention is further described in detail with reference to the drawings and specific embodiments.
FIG. 1 is a system architecture diagram of the remote sensing image missing data reconstruction method based on a deep convolutional neural network. The method mainly comprises fusion of multi-source data, a multi-scale convolutional feature extraction unit, dilated convolution, enhancement of temporal-spatial-spectral information, and skip connections.
The remote sensing image missing data reconstruction method adopts a spatial-temporal-spectral framework model based on a deep convolutional neural network (STS-CNN). Two different types of data are first input, and feature maps are generated by convolution; a multi-scale convolutional feature extraction unit then extracts additional features, which are fused; dilated convolutions with different dilation factors realize the same filtering; a residual mapping is defined to construct the missing region, with gradient descent as the optimization method; skip connections transfer feature information to preserve image detail; finally, the mean squared error of the loss function is computed and minimized, and the true data is obtained once the loss converges.
FIG. 2 is a framework flow chart of remote sensing image missing data reconstruction based on a deep convolutional neural network. The figure shows the complex nonlinear relationship between the missing-information image and the supplementary image, and the convergence of the loss between the original undamaged image and the missing-information image.
To ensure that the output of the CNN is a nonlinear combination of its inputs, a nonlinear function is introduced as the activation function, and a gradient descent strategy is adopted to update the network parameters W and b. The iterative training rule is:

W ← W − α · ∂L/∂W,   b ← b − α · ∂L/∂b

where α is the learning rate of the entire network.
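As a hypothetical illustration (the patent contains no code), the update rule above can be sketched in NumPy on a toy single-layer model; `sgd_step`, the linear layer, and all values here are invented for this sketch, not part of the patent.

```python
import numpy as np

def sgd_step(W, b, grad_W, grad_b, alpha):
    """One iteration of the rule W <- W - alpha*dL/dW, b <- b - alpha*dL/db."""
    return W - alpha * grad_W, b - alpha * grad_b

# Toy example: fit a linear layer y = Wx + b to a target under a squared loss.
rng = np.random.default_rng(1)
W, b = rng.standard_normal((2, 3)), np.zeros(2)
x, target = rng.standard_normal(3), np.array([1.0, -1.0])
alpha = 0.1  # the learning rate of the entire network
for _ in range(200):
    y = W @ x + b
    err = y - target              # dL/dy for L = 0.5 * ||y - target||^2
    grad_W = np.outer(err, x)     # dL/dW
    grad_b = err                  # dL/db
    W, b = sgd_step(W, b, grad_W, grad_b, alpha)
print(np.allclose(W @ x + b, target, atol=1e-6))
```

After enough iterations the loss converges and the layer reproduces the target, which is the behavior the training rule is meant to guarantee.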
In the fusion of multi-source data, the data information is highly correlated with the missing region in surface and texture features, so complementary information such as spectral or spatio-temporal data can improve reconstruction accuracy. Two types of data are input to the STS-CNN: the spatial data of the missing region and supplementary information such as spectral or temporal data. Each input passes through its own 3 × 3 convolution layer, producing 30 feature maps per branch, and the two outputs are then concatenated into a 60-channel feature map consumed by the following 3 × 3 convolution.
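A minimal NumPy sketch of the fusion step, under the assumption (not stated as code in the patent) that fusion is channel-wise concatenation of the two branches' 30-channel outputs; the arrays here merely stand in for the convolution outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for the two 30-channel feature maps produced by the two 3x3
# convolution branches: the missing-region spatial input and the
# spectral/temporal auxiliary input.
feats_missing = rng.standard_normal((30, 16, 16))
feats_auxiliary = rng.standard_normal((30, 16, 16))

# Fusion: concatenate along the channel axis into one 60-channel map that
# the following 3x3 convolution layer then consumes.
fused = np.concatenate([feats_missing, feats_auxiliary], axis=0)
print(fused.shape)  # (60, 16, 16)
```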
The multi-scale convolutional feature extraction unit addresses the fact that the sizes of ground targets in different non-local regions are usually related by multiples, so the process depends on context information at different scales, and the unit provides more features from this context information. It comprises three convolution operations with kernel sizes of 3 × 3, 5 × 5, and 7 × 7, applied simultaneously to the feature map of the input data, each producing a 20-channel feature map; the three feature maps are then concatenated into a 60-channel feature map, fusing the context features extracted at different scales.
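The multi-scale unit can be sketched in NumPy as three parallel same-padded convolutions whose outputs are concatenated; the naive `conv_same` helper and random kernels below are illustrative assumptions, not the patent's implementation.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv_same(x, kernel):
    """'Same'-padded 2-D correlation of an (H, W) image with a (k, k) kernel."""
    k = kernel.shape[0]
    windows = sliding_window_view(np.pad(x, k // 2), (k, k))
    return np.einsum('ijkl,kl->ij', windows, kernel)

rng = np.random.default_rng(0)
fmap = rng.random((32, 32))  # feature map of the input data

# Three parallel convolutions with 3x3, 5x5 and 7x7 kernels, 20 channels each.
branches = []
for k in (3, 5, 7):
    kernels = rng.standard_normal((20, k, k))
    branches.append(np.stack([conv_same(fmap, kern) for kern in kernels]))

# Concatenate the three 20-channel maps into one 60-channel feature map.
multi_scale = np.concatenate(branches, axis=0)
print(multi_scale.shape)  # (60, 32, 32)
```

Because all branches use 'same' padding, the three outputs share one spatial size and can be fused directly along the channel axis.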
Dilated convolution realizes the same filtering with different dilation factors over different ranges, enlarging the receptive field while keeping the convolution kernel size fixed. The receptive field of ordinary convolution grows linearly with layer depth, whereas the receptive field of dilated convolution grows exponentially with depth. For the STS-CNN, the dilation factors of the 3 × 3 dilated convolutions in layers 2 to 6 are set to 1, 2, 3, 2, and 1, respectively.
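The receptive-field gain from those dilation factors can be checked with a short calculation, using the standard formula that each stride-1 convolution layer adds (k − 1) · d pixels to the receptive field (this helper is illustrative, not from the patent).

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of stride-1 convolutions.
    Each layer with kernel size k and dilation d adds (k - 1) * d pixels."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf

# Layers 2 to 6 of the model: 3x3 kernels with dilation factors 1, 2, 3, 2, 1.
print(receptive_field([3] * 5, [1, 2, 3, 2, 1]))  # 19
# The same five layers without dilation would cover only:
print(receptive_field([3] * 5, [1] * 5))          # 11
```

The dilated stack sees a 19-pixel-wide context with the same number of 3 × 3 kernels (and therefore the same parameter count) as the plain stack, which only sees 11 pixels.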
FIG. 3 shows the enhanced temporal-spatial-spectral information structure of the remote sensing image missing data reconstruction method based on the deep convolutional neural network. The figure shows that the residual image between the original image and the missing-information image is passed to the last layer before the loss function, which is equivalent to constructing the missing region; the two different types of information complement each other, and the mask is passed to subsequent layers of the network in conjunction with the dilated convolutions.
Enhancing the temporal-spatial-spectral information comprises defining a residual mapping, providing training image pairs for the proposed model, and setting the mean squared error as the model's loss function, so that more layers can be added to the network and its performance improved.
Further, the relation between the different supplementary data is learned through a residual output rather than a direct output, and the original data is more easily approximated by extracting and expressing its deep, intrinsic features. Since the input and the output are approximately identical over the complete (non-missing) area, a residual mapping is defined:

r_i = x_i − y_i

where y_i is the image lacking data information, x_i is the original undamaged image, and r_i corresponds to the missing region. Outside the missing region, most pixel values of the residual image are close to zero, so the residual feature map distribution is sparse, which makes the gradient descent process smoother.
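A hypothetical NumPy illustration (not from the patent) of why the residual target is sparse: where pixels are intact, the residual x − y vanishes, so only the missing region contributes.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((64, 64))               # original undamaged image x_i
mask = np.zeros((64, 64), dtype=bool)  # True inside the missing region
mask[20:40, 20:40] = True
y = np.where(mask, 0.0, x)             # image lacking data information y_i

r = x - y                              # residual target r_i = x_i - y_i
# Outside the missing region the residual is exactly zero in this toy setup,
# so the residual map is sparse: only the 20x20 hole contributes.
sparsity = np.mean(r == 0)
print(round(float(sparsity), 3))  # 0.902
```

A network trained on such sparse residual targets has mostly near-zero outputs to learn, which is the smoother optimization landscape the text refers to.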
Further, given a set of N training image pairs {(y_i, s_i, x_i)}_{i=1}^{N}, where s_i is a spectral or temporal auxiliary image and Θ denotes the network parameters, the mean squared error (MSE) of the loss function in the model is defined as:

L(Θ) = (1/N) · Σ_{i=1}^{N} ‖ F(y_i, s_i; Θ) − (x_i − y_i) ‖²
Skip connections transfer the feature information of an earlier layer to a later layer to preserve image detail and to alleviate the vanishing/exploding gradient problem caused by increasing layer depth. In the STS-CNN model, both the multi-scale convolutional feature extraction unit and the dilated convolutions adopt skip connections.
FIG. 4 shows the skip connections of the remote sensing image missing data reconstruction method based on the deep convolutional neural network. In the STS-CNN model, the multi-scale convolution block uses three skip connections (FIG. 4(a)), and the dilated convolution uses two skip connections (FIG. 4(b)).
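The detail-preserving effect of a skip connection can be sketched with a toy example (entirely illustrative; the patent's layers are convolutions, whereas `block` here is a stand-in elementwise transform): the identity path carries the early features past the stack unchanged.

```python
import numpy as np

def block(x, weight):
    """A toy 'layer': elementwise transform standing in for a convolution."""
    return np.maximum(weight * x, 0.0)  # ReLU-style nonlinearity

def with_skip(x, weights):
    """Pass features through a stack of blocks, then add the input back,
    so early-layer detail reaches the later layers unchanged."""
    out = x
    for w in weights:
        out = block(out, w)
    return out + x  # skip connection: identity path around the stack

feats = np.array([[0.5, -0.2], [1.0, 0.1]])
out = with_skip(feats, [0.0, 0.0])    # even if the stack outputs zeros...
print(np.array_equal(out, feats))     # ...the input detail survives the skip
```

The same additive path also gives gradients a direct route to early layers, which is how skip connections mitigate vanishing gradients in deep stacks.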
It will be appreciated by persons skilled in the art that the invention is not limited to details of the foregoing embodiments and that the invention can be embodied in other specific forms without departing from the spirit or scope of the invention. In addition, various modifications and alterations of this invention may be made by those skilled in the art without departing from the spirit and scope of this invention, and such modifications and alterations should also be viewed as being within the scope of this invention. It is therefore intended that the following appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.

Claims (10)

1. A remote sensing image missing data reconstruction method based on a deep convolutional neural network, characterized by mainly comprising: (I) fusing multi-source data; (II) a multi-scale convolutional feature extraction unit; (III) dilated convolution; (IV) enhancing the temporal-spatial-spectral information; and (V) skip connections.
2. The method according to claim 1, characterized in that a spatial-temporal-spectral framework model based on a deep convolutional neural network (STS-CNN) is adopted: a training and testing data set is first created, a gradient descent algorithm is adopted as the optimization method, a residual mapping is defined, the learning rate is initialized, and the evaluation indexes and training time are set; two different types of data are input into the convolutional neural network and feature maps are generated by convolution, and the multi-scale convolutional feature extraction unit, dilated convolution, and skip connections enlarge the receptive field to obtain more feature information without increasing the computation; the mean squared error of the loss function is then computed and minimized, and the true data is finally obtained once the loss converges.
3. The gradient descent algorithm according to claim 2, characterized in that, to ensure that the output of the CNN is a nonlinear combination of its inputs, a nonlinear function is introduced as the activation function, and a gradient descent strategy is used to update the network parameters W and b; the iterative training rule is:

W ← W − α · ∂L/∂W,   b ← b − α · ∂L/∂b

where α is the learning rate of the entire network.
4. The fusion of multi-source data (I) according to claim 1, characterized in that the data information is highly correlated with the missing region in surface and texture features, so complementary information such as spectral or spatio-temporal data can improve reconstruction accuracy; two types of data are input to the STS-CNN, namely the spatial data of the missing region and supplementary information such as spectral or temporal data; each input passes through its own 3 × 3 convolution layer, producing 30 feature maps per branch, and the two outputs are then concatenated into a 60-channel feature map consumed by the following 3 × 3 convolution.
5. The multi-scale convolutional feature extraction unit (II) according to claim 1, characterized in that, in reconstructing the missing information of the remote sensing image, different non-local regions are related at multiple scales, so the process depends on context information at different scales, and the multi-scale convolutional feature extraction unit is introduced to extract more features; the unit comprises three convolution operations with kernel sizes of 3 × 3, 5 × 5, and 7 × 7, applied simultaneously to the feature map of the input data, each producing a 20-channel feature map; the three feature maps are then concatenated into a 60-channel feature map, fusing the context features extracted at different scales.
6. The dilated convolution (III) according to claim 1, characterized in that the STS-CNN model employs dilated convolution to enlarge the receptive field while keeping the convolution kernel size fixed, without increasing the model's computation or parameter count; dilated convolution realizes the same filtering with different dilation factors over different ranges, and the receptive field grows exponentially with layer depth; in the reconstruction model, the dilation factors of the 3 × 3 dilated convolutions in layers 2 to 6 are set to 1, 2, 3, 2, and 1, respectively.
7. The enhanced temporal-spatial-spectral information (IV) according to claim 1, characterized in that, to preserve spatial information, the residual image between the original image and the image with the missing region is passed to the last layer before the loss function, which is equivalent to constructing the missing region; a residual mapping is therefore defined, training image pairs are provided for the proposed model, and the mean squared error is set as the model's loss function; in addition, to reduce information distortion, the mask is passed to subsequent layers of the network in conjunction with the dilated convolutions.
8. The residual mapping according to claim 7, characterized in that, in the model, the relation between the different supplementary data is learned through a residual output rather than a direct output; the learning process of a residual unit is very sparse, and the original data is more easily approximated by extracting and expressing its deep, intrinsic features; since the input and the output are approximately identical over the complete (non-missing) area, a residual mapping is defined:

r_i = x_i − y_i

where y_i is the image lacking data information, x_i is the original undamaged image, and r_i corresponds to the missing region; outside the missing region, most pixel values of the residual image are close to zero, so the residual feature map distribution is sparse, which makes the gradient descent process smoother.
9. The mean squared error according to claim 7, characterized in that, given a set of N training image pairs {(y_i, s_i, x_i)}_{i=1}^{N}, where s_i is a spectral or temporal auxiliary image and Θ denotes the network parameters, the mean squared error (MSE) of the loss function in the model is defined as:

L(Θ) = (1/N) · Σ_{i=1}^{N} ‖ F(y_i, s_i; Θ) − (x_i − y_i) ‖²
10. The skip connection (V) according to claim 1, characterized in that skip connections are used in the deep convolutional neural network to solve or alleviate the vanishing/exploding gradient problem caused by increasing layer depth; a skip connection transfers the feature information of an earlier layer to a later layer, preserving image detail; in the STS-CNN model, the multi-scale convolution uses three skip connections and the dilated convolution uses two skip connections.
CN201810549450.4A 2018-05-31 2018-05-31 A kind of remote sensing images missing data method for reconstructing based on depth convolutional neural networks Withdrawn CN108876754A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810549450.4A CN108876754A (en) 2018-05-31 2018-05-31 A kind of remote sensing images missing data method for reconstructing based on depth convolutional neural networks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810549450.4A CN108876754A (en) 2018-05-31 2018-05-31 A kind of remote sensing images missing data method for reconstructing based on depth convolutional neural networks

Publications (1)

Publication Number Publication Date
CN108876754A true CN108876754A (en) 2018-11-23

Family

ID=64336318

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810549450.4A Withdrawn CN108876754A (en) 2018-05-31 2018-05-31 A kind of remote sensing images missing data method for reconstructing based on depth convolutional neural networks

Country Status (1)

Country Link
CN (1) CN108876754A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109741407A (en) * 2019-01-09 2019-05-10 北京理工大学 A kind of high quality reconstructing method of the spectrum imaging system based on convolutional neural networks
CN109785302A (en) * 2018-12-27 2019-05-21 中国科学院西安光学精密机械研究所 A kind of empty spectrum union feature learning network and multispectral change detecting method
CN110889449A (en) * 2019-11-27 2020-03-17 中国人民解放军国防科技大学 Edge-enhanced multi-scale remote sensing image building semantic feature extraction method
CN112614066A (en) * 2020-12-23 2021-04-06 文思海辉智科科技有限公司 Image restoration method and device and electronic equipment
CN112836728A (en) * 2020-07-27 2021-05-25 盐城郅联空间科技有限公司 Image interpretation method based on deep training model and image interpretation sample library
CN114119444A (en) * 2021-11-29 2022-03-01 武汉大学 Multi-source remote sensing image fusion method based on deep neural network
CN114913296A (en) * 2022-05-07 2022-08-16 中国石油大学(华东) MODIS surface temperature data product reconstruction method
WO2023000158A1 (en) * 2021-07-20 2023-01-26 海南长光卫星信息技术有限公司 Super-resolution reconstruction method, apparatus and device for remote sensing image, and storage medium
CN117314757A (en) * 2023-11-30 2023-12-29 湖南大学 Space spectrum frequency multi-domain fused hyperspectral computed imaging method, system and medium
CN117576523A (en) * 2023-10-19 2024-02-20 北华航天工业学院 Data set establishment method for multi-scale remote sensing calibration and authenticity inspection site screening

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106651915A (en) * 2016-12-23 2017-05-10 大连理工大学 Target tracking method of multi-scale expression based on convolutional neural network
CN106991368A (en) * 2017-02-20 2017-07-28 北京大学 A kind of finger vein checking personal identification method based on depth convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
QIANG ZHANG ET AL.: "Missing Data Reconstruction in Remote Sensing Image With a Unified Spatial-Temporal-Spectral Deep Convolutional Neural Network", IEEE Transactions on Geoscience and Remote Sensing *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109785302A (en) * 2018-12-27 2019-05-21 中国科学院西安光学精密机械研究所 A kind of empty spectrum union feature learning network and multispectral change detecting method
CN109785302B (en) * 2018-12-27 2021-03-19 中国科学院西安光学精密机械研究所 Space-spectrum combined feature learning network and multispectral change detection method
CN109741407A (en) * 2019-01-09 2019-05-10 北京理工大学 A kind of high quality reconstructing method of the spectrum imaging system based on convolutional neural networks
CN110889449A (en) * 2019-11-27 2020-03-17 中国人民解放军国防科技大学 Edge-enhanced multi-scale remote sensing image building semantic feature extraction method
CN112836728A (en) * 2020-07-27 2021-05-25 盐城郅联空间科技有限公司 Image interpretation method based on deep training model and image interpretation sample library
CN112614066A (en) * 2020-12-23 2021-04-06 文思海辉智科科技有限公司 Image restoration method and device and electronic equipment
WO2023000158A1 (en) * 2021-07-20 2023-01-26 海南长光卫星信息技术有限公司 Super-resolution reconstruction method, apparatus and device for remote sensing image, and storage medium
CN114119444A (en) * 2021-11-29 2022-03-01 武汉大学 Multi-source remote sensing image fusion method based on deep neural network
CN114119444B (en) * 2021-11-29 2024-04-16 武汉大学 Multi-source remote sensing image fusion method based on deep neural network
CN114913296A (en) * 2022-05-07 2022-08-16 中国石油大学(华东) MODIS surface temperature data product reconstruction method
CN114913296B (en) * 2022-05-07 2023-08-11 中国石油大学(华东) MODIS surface temperature data product reconstruction method
CN117576523A (en) * 2023-10-19 2024-02-20 北华航天工业学院 Data set establishment method for multi-scale remote sensing calibration and authenticity inspection site screening
CN117314757A (en) * 2023-11-30 2023-12-29 湖南大学 Space spectrum frequency multi-domain fused hyperspectral computed imaging method, system and medium
CN117314757B (en) * 2023-11-30 2024-02-09 湖南大学 Space spectrum frequency multi-domain fused hyperspectral computed imaging method, system and medium

Similar Documents

Publication Publication Date Title
CN108876754A (en) A kind of remote sensing images missing data method for reconstructing based on depth convolutional neural networks
Kulp et al. CoastalDEM: A global coastal digital elevation model improved from SRTM using a neural network
CN111383192A (en) SAR-fused visible light remote sensing image defogging method
Wang et al. Spatiotemporal fusion of remote sensing image based on deep learning
CN108269244B (en) Image defogging system based on deep learning and prior constraint
Wang et al. Remote sensing image gap filling based on spatial-spectral random forests
Gomes et al. Deep learning optimization in remote sensing image segmentation using dilated convolutions and ShuffleNet
CN112766099B (en) Hyperspectral image classification method for extracting context information from local to global
CN112767244B (en) High-resolution seamless sensing method and system for earth surface elements
CN110189282A (en) Based on intensive and jump connection depth convolutional network multispectral and panchromatic image fusion method
Liu et al. Thick cloud removal under land cover changes using multisource satellite imagery and a spatiotemporal attention network
Zhang et al. Transres: a deep transfer learning approach to migratable image super-resolution in remote urban sensing
Li et al. A pseudo-siamese deep convolutional neural network for spatiotemporal satellite image fusion
Zhang et al. Application of geographically weighted regression to fill gaps in SLC-off Landsat ETM+ satellite imagery
Liu et al. SI-SA GAN: A generative adversarial network combined with spatial information and self-attention for removing thin cloud in optical remote sensing images
Sun et al. Identifying terraces in the hilly and gully regions of the Loess Plateau in China
Bukheet et al. Land cover change detection of Baghdad city using multi-spectral remote sensing imagery
Müller et al. Identification of catchment functional units by time series of thermal remote sensing images
Li et al. ConvFormerSR: Fusing transformers and convolutional neural networks for cross-sensor remote sensing imagery super-resolution
He et al. An Unsupervised Dehazing Network with Hybrid Prior Constraints for Hyperspectral Image
Wang et al. Data fusion in data scarce areas using a back-propagation artificial neural network model: a case study of the South China Sea
CN115457356A (en) Remote sensing image fusion method, device, equipment and medium for geological exploration
Tejaswini et al. Land cover change detection using convolution neural network
Sadiq et al. Recovering the large gaps in Landsat 7 SLC-off imagery using weighted multiple linear regression (WMLR)
Adam et al. The assessment of invasive alien plant species removal programs using remote sensing and GIS in two selected reserves in the eThekwini Municipality, KwaZulu-Natal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 2018-11-23