CN112508082A - Unsupervised learning remote sensing image space spectrum fusion method and system - Google Patents

Info

Publication number
CN112508082A
CN112508082A
Authority
CN
China
Prior art keywords
network
image
panchromatic
fusion
multispectral
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011398541.6A
Other languages
Chinese (zh)
Inventor
蒋梦辉
李杰
沈焕锋
袁强强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202011398541.6A
Publication of CN112508082A
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4023 Scaling of whole images or parts thereof, e.g. expanding or contracting based on decimating pixels or lines of pixels; based on inserting pixels or lines of pixels
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G06T 2207/10036 Multispectral image; Hyperspectral image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an unsupervised learning remote sensing image space spectrum fusion method and system that realize remote sensing image fusion through deep learning, characterized in that: based on a single panchromatic-multispectral image pair, an unsupervised network training mode is adopted to realize fusion of the pair. The implementation comprises respectively downsampling the originally observed panchromatic image and multispectral image of the single panchromatic-multispectral image pair to serve as the network training data pair, taking the originally observed multispectral image as the network label data, rapidly training a fusion network, and inputting the originally observed panchromatic image and multispectral image into the trained fusion network to obtain the fused image. The invention constructs the training data pair by downsampling, avoiding the requirement of traditional network methods for a ground truth during training and thereby realizing unsupervised learning; in addition, because the invention trains the network on a single panchromatic-multispectral image pair, a large amount of training data is not needed and network training can be completed quickly.

Description

Unsupervised learning remote sensing image space spectrum fusion method and system
Technical Field
The invention belongs to the technical field of remote sensing image processing, relates to a remote sensing image fusion method, and particularly relates to an unsupervised learning remote sensing image space spectrum fusion scheme.
Background
Due to hardware limitations of satellite sensors, the incident energy a sensor can receive is limited, so a single remote sensing image cannot simultaneously achieve high spatial, spectral, and temporal resolution. Currently, many satellites can simultaneously provide panchromatic and multispectral images of the same ground scene: the panchromatic image has high spatial resolution but only one band and thus low spectral resolution, while the multispectral image has multiple bands but a spatial resolution too low to meet application requirements. Therefore, making full use of remote sensing image fusion technology to fuse the complementary information of multi-source remote sensing images and obtain multispectral images with high spatial resolution has great research and application value.
According to their development, current remote sensing image fusion methods can be roughly divided into the following categories. The first is component-substitution methods, which separate the spatial and spectral information components of the multispectral image through a projection transformation, replace the spatial component with the panchromatic image, and obtain the fused image through the inverse projection transformation; representative methods include PCA and Gram-Schmidt. These methods rapidly improve the spatial detail of the multispectral image but exhibit a degree of spectral distortion. The second is multi-resolution-analysis methods, which high-pass filter the panchromatic image through wavelet transform, Laplacian transform, and the like, then inject the filtered high-frequency detail of the panchromatic image into the multispectral image to improve its resolution. These methods preserve spectra well but enhance spatial information insufficiently. The third is variational-model methods, which treat the fusion process as the optimized solution of an ill-posed inverse problem, generally establishing a variational fusion model from maximum a posteriori estimation theory or sparse representation theory and optimizing the model to obtain the fusion result. These methods have a solid mathematical foundation and higher fusion accuracy than the first two, but their effect depends on complex, manually designed regularization priors, and the iterative solution consumes a large amount of time. On this basis, a fourth category, fusion methods based on deep learning, has gradually been studied; these methods exploit the strong feature extraction and representation capability of convolutional neural networks to learn the mapping between the originally observed images and the fused image from a large amount of training sample data.
However, most existing deep-learning-based methods adopt supervised learning: training and testing are performed on simulated data, with a simulated ground truth (i.e., an ideal fusion result) serving as the network label data to supervise the training process; in the real case, no ground truth exists. In addition, existing deep-learning-based methods rely on a large number of training samples, and to a certain extent the sample size determines the accuracy of the fusion network, so network training takes a great deal of time. A new fusion scheme therefore needs to be studied.
Disclosure of Invention
Aiming at the above defects of the prior art, the invention provides an unsupervised learning remote sensing image space spectrum fusion scheme.
The technical scheme of the invention is an unsupervised learning remote sensing image space spectrum fusion method that realizes remote sensing image fusion through deep learning: based on a single panchromatic-multispectral image pair, an unsupervised network training mode is adopted to realize fusion of the pair. The implementation comprises respectively downsampling the originally observed panchromatic image and multispectral image of the single panchromatic-multispectral image pair, taking the resulting lower-resolution panchromatic and multispectral images as the network training data pair, taking the originally observed multispectral image as the network label data, rapidly training a fusion network, and then inputting the originally observed panchromatic image and multispectral image into the trained fusion network, the network output being the fused image.
Moreover, an implementation of constructing the network training data pair includes the following steps:
step 1.1, downsampling the originally observed panchromatic image by a factor of r to the same spatial resolution as the originally observed multispectral image, r being a preset multiple;
step 1.2, downsampling the originally observed multispectral image by a factor of r to a lower resolution, then upsampling it back to the original image size;
and step 1.3, taking the resampled panchromatic and multispectral images as the network input data and the originally observed multispectral image as the network label data, thereby constructing the network training data pair.
Moreover, the downsampling and upsampling in steps 1.1 and 1.2 are performed by bicubic interpolation.
Moreover, r is preferably 4.
Moreover, an implementation of training the fusion network includes the following steps:
step 2.1, setting a network structure;
step 2.2, setting a loss function for guiding network parameter updating;
and step 2.3, calculating the error between the network output and the network label data according to the loss function of step 2.2, propagating the error back to each part of the network through back propagation, and updating the network parameters by a gradient descent method.
Furthermore, the network structure used in step 2.1 is a residual network comprising three parts: a front convolution, residual blocks, and a rear convolution. The front convolution part is a 7 × 7 convolutional layer; the residual block part comprises five residual blocks, each consisting of two 3 × 3 convolutions and a skip connection; and the rear convolution part is a 7 × 7 convolutional layer.
Furthermore, the loss function used in step 2.2 is formulated as:
L(\Theta) = \left\| \mathrm{Net}(y, z; \Theta) - x \right\|_2^2 + \lambda \left\| \big( \mathrm{Net}(y, z; \Theta) \big)_{\downarrow r} - y_{\downarrow r} \right\|_2^2
where x is the network label data during training, i.e., the originally observed multispectral image; Net(·) is the network; y and z are the network inputs during training, corresponding to the downsampled multispectral and panchromatic images respectively; Θ denotes the learnable parameters of the network; (·)↓r denotes r-times downsampling; and λ is an adjustable weight.
On the other hand, the invention also provides an unsupervised learning remote sensing image space spectrum fusion system for implementing the above unsupervised learning remote sensing image space spectrum fusion method.
Moreover, the system comprises a first module, a second module, and a third module:
the first module is used for constructing a network training data pair, including respectively downsampling the originally observed panchromatic image and multispectral image of a single panchromatic-multispectral image pair and taking the resulting lower-resolution panchromatic and multispectral images as the network training data pair;
the second module is used for training a fusion network, taking the obtained lower-resolution panchromatic and multispectral images as the network training data pair and the originally observed multispectral image as the network label data;
and the third module is used for inputting the originally observed panchromatic image and multispectral image into the trained fusion network, the network output being the fused image.
Alternatively, the system comprises a processor and a memory, the memory storing program instructions and the processor calling the instructions stored in the memory to execute the above unsupervised learning remote sensing image space spectrum fusion method.
The invention has the following advantages:
(1) Most traditional deep learning fusion methods take a supervised learning form and require a ground truth as the network label data. By downsampling the observed images, the invention uses the originally observed multispectral image as the label data and trains the network at a lower resolution scale, overcoming the traditional network methods' requirement for a ground truth and realizing unsupervised learning.
(2) The invention trains the network with a single panchromatic-multispectral image pair, overcoming the traditional network methods' requirement for a large data volume and avoiding the great amount of time and effort needed to construct a training sample library and train the network.
In short, the method provided by the invention requires no ground truth, enables rapid network training and testing, obtains a high-accuracy fusion result, and realizes remote sensing image space spectrum fusion. The method is highly targeted, fast to train, easy to implement, and readily extensible, and has high practical and market application value.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention.
Detailed Description
To facilitate understanding and implementation by those of ordinary skill in the art, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the embodiments described herein are merely illustrative and explanatory and do not limit the invention.
The incident energy receivable by a single satellite sensor is limited, so a single remote sensing image cannot achieve both high spatial and high spectral resolution. Fusing the high-spatial-resolution panchromatic image and the low-spatial-resolution multispectral image of the same ground scene provided by a satellite yields a high-accuracy fused image with both high spatial and high spectral resolution, facilitating subsequent applications.
Referring to FIG. 1, the invention provides an unsupervised learning remote sensing image space spectrum fusion method that includes the following steps:
step 1: and constructing a network training data pair.
A single panchromatic-multispectral image pair means a panchromatic image and a multispectral image of the same scene acquired at the same time. Existing networks are trained on a large number of panchromatic-multispectral image pairs, whereas the invention trains the network with only one data pair, significantly improving efficiency and practicability.
The specific implementation of step 1 in the embodiment includes the following sub-steps:
step 1.1: and (3) using a common bicubic interpolation method in image processing to down-sample the panchromatic image of the original observation to the same spatial resolution as the multispectral image of the original observation by r times.
Step 1.2: using a bicubic interpolation method to down-sample the original observed multispectral image r times to lower resolution; in addition, in order to facilitate subsequent fusion, a bicubic interpolation method is used again, and the multispectral image subjected to down-sampling is up-sampled to the original scale by the time of r. It is noted that although the size (number of pixels) of the multispectral image before and after processing is unchanged, its spatial resolution has been reduced by a factor of r. In particular, the preferred recommended value of r is 4. Other times may be selected according to the particular situation.
Step 1.3: and taking the resampled panchromatic image and multispectral image as input data of a network, taking the originally observed multispectral image as label data of the network, and constructing a network training data pair.
Step 2: train the network, which is specifically realized by the following sub-steps:
Step 2.1: set the network structure. A residual network is selected as the base architecture. The preferred architecture adopted in this embodiment comprises three parts: a front convolution, residual blocks, and a rear convolution; the front convolution part is a 7 × 7 convolutional layer, the residual block part comprises 5 residual blocks, each consisting of two 3 × 3 convolutions and a skip connection, and the rear convolution part is a 7 × 7 convolutional layer.
Step 2.2: to effectively improve the spatial information of the multispectral image while fully preserving its spectral characteristics, the network adopts the following loss function:
L(\Theta) = \left\| \mathrm{Net}(y, z; \Theta) - x \right\|_2^2 + \lambda \left\| \big( \mathrm{Net}(y, z; \Theta) \big)_{\downarrow r} - y_{\downarrow r} \right\|_2^2
where x is the network label data during training, i.e., the originally observed multispectral image; Net(·) is the network; y and z are the network inputs during training, corresponding to the downsampled multispectral and panchromatic images respectively; Θ denotes the learnable parameters of the network; (·)↓r denotes r-times downsampling; and λ is an adjustable weight.
The loss function comprises two terms: the first calculates the error between the network output and the network label data and drives a full improvement of the spatial details of the fusion result; the second calculates the error between the network output and the network input and maintains the spectral fidelity of the fusion result.
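Under the formula above, a sketch of this two-term loss might be written as follows. The squared ℓ2 form and the weight value lam=0.1 are assumptions of this sketch, since the patent gives λ only as an adjustable weight.

```python
import torch.nn.functional as F

def fusion_loss(output, ms_label, ms_input, r=4, lam=0.1):
    # First term: network output vs. label (original MS), spatial detail.
    detail = F.mse_loss(output, ms_label)
    # Second term: network output vs. network input, compared after
    # r-times downsampling, for spectral fidelity.
    out_lr = F.interpolate(output, scale_factor=1.0 / r, mode='bicubic',
                           align_corners=False)
    in_lr = F.interpolate(ms_input, scale_factor=1.0 / r, mode='bicubic',
                          align_corners=False)
    return detail + lam * F.mse_loss(out_lr, in_lr)
```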
Step 3: after network training is finished, directly input the originally observed panchromatic image and multispectral image into the trained network; the network output is the fused image.
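Tying the sketches above together, a hypothetical end-to-end run might look as follows. The Adam optimizer, learning rate, and iteration count are assumptions (the patent specifies only back propagation with gradient descent), and upsampling the multispectral image to the panchromatic size at test time is an implementation assumption so the two network inputs match spatially.

```python
import torch
import torch.nn.functional as F

# Assumes pan (1, 1, r*H, r*W) and ms (1, C, H, W) tensors are already loaded,
# and reuses build_training_pair, FusionNet, and fusion_loss defined above.
r = 4
pan_lr, ms_lr, label = build_training_pair(pan, ms, r=r)

net = FusionNet(ms_bands=ms.shape[1])
opt = torch.optim.Adam(net.parameters(), lr=1e-4)  # optimizer and lr assumed

for step in range(2000):                           # iteration count assumed
    opt.zero_grad()
    out = net(ms_lr, pan_lr)
    loss = fusion_loss(out, label, ms_lr, r=r)
    loss.backward()                                # back-propagate the error
    opt.step()                                     # gradient-descent update

# Step 3: fuse the original observations with the trained network.
ms_up = F.interpolate(ms, scale_factor=r, mode='bicubic', align_corners=False)
with torch.no_grad():
    fused = net(ms_up, pan)                        # network output = fused image
```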
Building on existing deep-learning-based remote sensing image fusion, and considering existing networks' dependence on large amounts of training data and on a ground truth, the invention takes the downsampled panchromatic and multispectral images as the network inputs and the originally observed multispectral image as the network label data for a single panchromatic-multispectral image pair, rapidly training a fusion network for fusing that pair. The method trains the network quickly and obtains a high-quality fusion result; moreover, the unsupervised training requires no ground truth, is computationally efficient, and is easy to put into practical use.
In specific implementations, those skilled in the art can realize the above process automatically with computer software. System devices implementing the method, such as a computer-readable storage medium storing a corresponding computer program according to the technical solution of the invention and a computer device comprising and running such a program, should also fall within the scope of the invention.
In some possible embodiments, an unsupervised learning remote sensing image space spectrum fusion system is provided, comprising the following modules:
a first module for constructing a network training data pair, including respectively downsampling the originally observed panchromatic image and multispectral image of a single panchromatic-multispectral image pair and taking the resulting lower-resolution panchromatic and multispectral images as the network training data pair;
a second module for training a fusion network, taking the obtained lower-resolution panchromatic and multispectral images as the network training data pair and the originally observed multispectral image as the network label data;
and a third module for inputting the originally observed panchromatic image and multispectral image into the trained fusion network, the network output being the fused image.
In some possible embodiments, an unsupervised learning remote sensing image space spectrum fusion system is provided that includes a processor and a memory, the memory storing program instructions and the processor calling the instructions stored in the memory to execute the unsupervised learning remote sensing image space spectrum fusion method described above.
In some possible embodiments, an unsupervised learning remote sensing image space spectrum fusion system is provided that includes a readable storage medium storing a computer program which, when executed, implements the unsupervised learning remote sensing image space spectrum fusion method described above.
It should be understood that parts of the specification not set forth in detail are well within the prior art.
It should be understood that the above description of the preferred embodiments is given for clarity and not for any purpose of limitation, and that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An unsupervised learning remote sensing image space spectrum fusion method that realizes remote sensing image fusion through deep learning, characterized in that: based on a single panchromatic-multispectral image pair, an unsupervised network training mode is adopted to realize fusion of the panchromatic-multispectral image pair; the implementation comprises respectively downsampling the originally observed panchromatic image and multispectral image of the single panchromatic-multispectral image pair, taking the resulting lower-resolution panchromatic and multispectral images as the network training data pair, taking the originally observed multispectral image as the network label data, rapidly training a fusion network, and inputting the originally observed panchromatic image and multispectral image into the trained fusion network, the network output being the fused image.
2. The unsupervised learning remote sensing image space spectrum fusion method according to claim 1, characterized in that: an implementation of constructing the network training data pair includes the following steps:
step 1.1, downsampling the originally observed panchromatic image by a factor of r to the same spatial resolution as the originally observed multispectral image, r being a preset multiple;
step 1.2, downsampling the originally observed multispectral image by a factor of r to a lower resolution, then upsampling it back to the original image size;
and step 1.3, taking the resampled panchromatic and multispectral images as the network input data and the originally observed multispectral image as the network label data, thereby constructing the network training data pair.
3. The unsupervised learning remote sensing image space spectrum fusion method according to claim 2, characterized in that: the downsampling and upsampling in steps 1.1 and 1.2 are performed by bicubic interpolation.
4. The unsupervised learning remote sensing image space spectrum fusion method according to claim 2, characterized in that: r is preferably 4.
5. The unsupervised learning remote sensing image space spectrum fusion method of claim 1, 2, 3 or 4, wherein: an implementation of training the fusion network includes the following steps:
step 2.1, setting a network structure;
step 2.2, setting a loss function for guiding network parameter updating;
and step 2.3, calculating the error between the network output and the network label data according to the loss function of step 2.2, propagating the error back to each part of the network through back propagation, and updating the network parameters by a gradient descent method.
6. The unsupervised learning remote sensing image space spectrum fusion method according to claim 5, characterized in that: the network structure used in step 2.1 is a residual network comprising three parts: a front convolution, residual blocks, and a rear convolution; the front convolution part is a 7 × 7 convolutional layer, the residual block part comprises five residual blocks, each consisting of two 3 × 3 convolutions and a skip connection, and the rear convolution part is a 7 × 7 convolutional layer.
7. The unsupervised learning remote sensing image space spectrum fusion method according to claim 5, characterized in that: the loss function formula used in step 2.2 is:
L(\Theta) = \left\| \mathrm{Net}(y, z; \Theta) - x \right\|_2^2 + \lambda \left\| \big( \mathrm{Net}(y, z; \Theta) \big)_{\downarrow r} - y_{\downarrow r} \right\|_2^2
where x is the network label data during training, i.e., the originally observed multispectral image; Net(·) is the network; y and z are the network inputs during training, corresponding to the downsampled multispectral and panchromatic images respectively; Θ denotes the learnable parameters of the network; (·)↓r denotes r-times downsampling; and λ is an adjustable weight.
8. An unsupervised learning remote sensing image space spectrum fusion system, characterized in that: it is used for implementing the unsupervised learning remote sensing image space spectrum fusion method according to any one of claims 1-7.
9. The unsupervised learning remote sensing image space spectrum fusion system of claim 8, wherein the system comprises a first module, a second module, and a third module:
the first module is used for constructing a network training data pair, including respectively downsampling the originally observed panchromatic image and multispectral image of a single panchromatic-multispectral image pair and taking the resulting lower-resolution panchromatic and multispectral images as the network training data pair;
the second module is used for training a fusion network, taking the obtained lower-resolution panchromatic and multispectral images as the network training data pair and the originally observed multispectral image as the network label data;
and the third module is used for inputting the originally observed panchromatic image and multispectral image into the trained fusion network, the network output being the fused image.
10. The unsupervised learning remote sensing image space spectrum fusion system of claim 8, wherein: the system comprises a processor and a memory, the memory storing program instructions and the processor calling the instructions stored in the memory to execute the unsupervised learning remote sensing image space spectrum fusion method according to any one of claims 1-7.
CN202011398541.6A 2020-12-02 2020-12-02 Unsupervised learning remote sensing image space spectrum fusion method and system Pending CN112508082A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011398541.6A CN112508082A (en) 2020-12-02 2020-12-02 Unsupervised learning remote sensing image space spectrum fusion method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011398541.6A CN112508082A (en) 2020-12-02 2020-12-02 Unsupervised learning remote sensing image space spectrum fusion method and system

Publications (1)

Publication Number Publication Date
CN112508082A true CN112508082A (en) 2021-03-16

Family

ID=74968130

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011398541.6A Pending CN112508082A (en) 2020-12-02 2020-12-02 Unsupervised learning remote sensing image space spectrum fusion method and system

Country Status (1)

Country Link
CN (1) CN112508082A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117079105A (en) * 2023-08-04 2023-11-17 中国科学院空天信息创新研究院 Remote sensing image spatial spectrum fusion method and device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109146831A (en) * 2018-08-01 2019-01-04 武汉大学 Remote sensing image fusion method and system based on double branch deep learning networks
CN109767412A (en) * 2018-12-28 2019-05-17 珠海大横琴科技发展有限公司 A kind of remote sensing image fusing method and system based on depth residual error neural network
CN111223049A (en) * 2020-01-07 2020-06-02 武汉大学 Remote sensing image variation fusion method based on structure-texture decomposition
CN111353424A (en) * 2020-02-27 2020-06-30 中国科学院遥感与数字地球研究所 Remote sensing image space spectrum fusion method of depth recursive residual error network and electronic equipment
CN111524063A (en) * 2019-12-24 2020-08-11 珠海大横琴科技发展有限公司 Remote sensing image fusion method and device
CN111583166A (en) * 2019-12-24 2020-08-25 珠海大横琴科技发展有限公司 Image fusion network model construction and training method and device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109146831A (en) * 2018-08-01 2019-01-04 武汉大学 Remote sensing image fusion method and system based on double branch deep learning networks
CN109767412A (en) * 2018-12-28 2019-05-17 珠海大横琴科技发展有限公司 A kind of remote sensing image fusing method and system based on depth residual error neural network
CN111524063A (en) * 2019-12-24 2020-08-11 珠海大横琴科技发展有限公司 Remote sensing image fusion method and device
CN111583166A (en) * 2019-12-24 2020-08-25 珠海大横琴科技发展有限公司 Image fusion network model construction and training method and device
CN111223049A (en) * 2020-01-07 2020-06-02 武汉大学 Remote sensing image variation fusion method based on structure-texture decomposition
CN111353424A (en) * 2020-02-27 2020-06-30 中国科学院遥感与数字地球研究所 Remote sensing image space spectrum fusion method of depth recursive residual error network and electronic equipment

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Jiang He et al., "Spatial Spectral Fusion in Different Swath Widths by a Recurrent Expanding Residual Convolutional Neural Network", Remote Sensing *
Menghui Jiang et al., "A differential information residual convolutional neural network for pansharpening", ISPRS Journal of Photogrammetry and Remote Sensing *
杨骏锋, "Pan-sharpening Method Based on Convolutional Neural Network" (基于卷积神经网络的Pan-sharpening方法), Master's Theses Electronic Journal *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117079105A (en) * 2023-08-04 2023-11-17 中国科学院空天信息创新研究院 Remote sensing image spatial spectrum fusion method and device, electronic equipment and storage medium
CN117079105B (en) * 2023-08-04 2024-04-26 中国科学院空天信息创新研究院 Remote sensing image spatial spectrum fusion method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
Wang et al. Ultra-dense GAN for satellite imagery super-resolution
CN111369440B (en) Model training and image super-resolution processing method, device, terminal and storage medium
CN110415199B (en) Multispectral remote sensing image fusion method and device based on residual learning
WO2021018163A1 (en) Neural network search method and apparatus
CN112184554B (en) Remote sensing image fusion method based on residual mixed expansion convolution
CN112488978A (en) Multi-spectral image fusion imaging method and system based on fuzzy kernel estimation
CN114119444B (en) Multi-source remote sensing image fusion method based on deep neural network
CN110378344B (en) Spectral dimension conversion network-based convolutional neural network multispectral image segmentation method
CN114418853B (en) Image super-resolution optimization method, medium and equipment based on similar image retrieval
CN111353939B (en) Image super-resolution method based on multi-scale feature representation and weight sharing convolution layer
CN113066037B (en) Multispectral and full-color image fusion method and system based on graph attention machine system
CN113240683B (en) Attention mechanism-based lightweight semantic segmentation model construction method
CN112801904B (en) Hybrid degraded image enhancement method based on convolutional neural network
CN116309070A (en) Super-resolution reconstruction method and device for hyperspectral remote sensing image and computer equipment
CN114066755A (en) Remote sensing image thin cloud removing method and system based on full-band feature fusion
CN115713462A (en) Super-resolution model training method, image recognition method, device and equipment
CN110633706B (en) Semantic segmentation method based on pyramid network
CN115861083A (en) Hyperspectral and multispectral remote sensing fusion method for multi-scale and global features
Yang et al. Image super-resolution reconstruction based on improved Dirac residual network
Deng et al. Multiple frame splicing and degradation learning for hyperspectral imagery super-resolution
CN112508082A (en) Unsupervised learning remote sensing image space spectrum fusion method and system
CN116188272B (en) Two-stage depth network image super-resolution reconstruction method suitable for multiple fuzzy cores
CN117726513A (en) Depth map super-resolution reconstruction method and system based on color image guidance
CN116385265B (en) Training method and device for image super-resolution network
CN116309227A (en) Remote sensing image fusion method based on residual error network and spatial attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210316