CN113191993A - Panchromatic and multispectral image fusion method based on deep learning - Google Patents

Panchromatic and multispectral image fusion method based on deep learning

Info

Publication number
CN113191993A
Authority
CN
China
Prior art keywords
image
network
panchromatic
output
multispectral
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110425489.7A
Other languages
Chinese (zh)
Other versions
CN113191993B (en)
Inventor
张凯
盛志
张风
王安飞
刁文秀
李卓林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Normal University
Original Assignee
Shandong Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Normal University filed Critical Shandong Normal University
Priority to CN202110425489.7A priority Critical patent/CN113191993B/en
Publication of CN113191993A publication Critical patent/CN113191993A/en
Application granted granted Critical
Publication of CN113191993B publication Critical patent/CN113191993B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10032: Satellite or aerial image; Remote sensing
    • G06T 2207/10036: Multispectral image; Hyperspectral image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10032: Satellite or aerial image; Remote sensing
    • G06T 2207/10041: Panchromatic image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20212: Image combination
    • G06T 2207/20221: Image fusion; Image merging
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30181: Earth observation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a panchromatic and multispectral image fusion method and system based on deep learning, comprising: acquiring a panchromatic image and a multispectral image to be fused; and inputting the panchromatic image and the multispectral image into a pre-trained image fusion model to obtain a fused remote sensing image. The scheme makes full use of a deep neural network to highlight the spatial structure information of the image, and adjusts the image style with the constructed branched spectral adjustment network, so that the fused remote sensing image not only has high spatial resolution but also well preserves the characteristic style information of the specific satellite.

Description

Panchromatic and multispectral image fusion method based on deep learning
Technical Field
The disclosure belongs to the technical field of remote sensing image processing, and particularly relates to a panchromatic and multispectral image fusion method based on deep learning.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Earth remote sensing satellites typically provide two different types of images: panchromatic images with high spatial but low spectral resolution, and multispectral images with low spatial but high spectral resolution. Owing to current technical limitations, satellite sensors acquire panchromatic and multispectral images separately and cannot directly capture multispectral images with both high spatial and high spectral resolution.
The inventors have found that fusion methods for remote sensing images based on deep neural networks already exist. However, conventional deep-network fusion of panchromatic and multispectral images usually trains one model per satellite, which requires a large amount of training data; images of different styles captured by different satellite sensors need separate training data and separately trained network models. Model training is therefore inefficient, and without a large amount of training data the trained fusion model cannot guarantee fusion quality.
Disclosure of Invention
In order to solve the above problems, the present disclosure provides a deep-learning-based fusion method for panchromatic and multispectral images. The method makes full use of a deep neural network to highlight the spatial structure information of the image and adjusts the image style with the constructed branched spectral adjustment network, so that the fused remote sensing image not only has high spatial resolution but also well preserves the characteristic style information of the specific satellite.
According to a first aspect of the embodiments of the present disclosure, there is provided a panchromatic and multispectral image fusion method based on deep learning, including:
acquiring a full-color image and a multispectral image to be fused;
inputting the full-color image and the multispectral image into a pre-trained image fusion model to obtain a fused remote sensing image;
the image fusion model comprises a spatial structure enhancement network and a spectral information adjustment network, wherein the spatial structure enhancement network is based on a convolutional neural network and is trained by using remote sensing image training sets of different types of satellites; the spectral information adjusting network comprises a plurality of branch networks, each branch network is trained by using a remote sensing image of a specific satellite, and a fused image is obtained by multiplying output results of the spatial structure enhancing network and the spectral information adjusting network.
Further, the inputs of the spatial structure enhancement network are the panchromatic image and the initial multispectral image. The two are first stacked to obtain an image M, and four densely connected convolution modules are then used to extract spatial information: M is input into the first convolution module, whose output is ms2; M is stacked with ms2 and input into the second convolution module, whose output is ms3; M is stacked with ms2 and ms3 and input into the third convolution module, whose output is ms4; M is stacked with ms2, ms3 and ms4 and input into the fourth convolution module, whose output is ms5. The panchromatic image is stacked with itself four times to give pan2, and pan2 is then added to ms5 to output HRMS1.
Further, the spectral information adjusting network routes the output of the spatial structure enhancement network to a specific branch for processing. Each branch network comprises a convolutional layer with 32 filters of size 3 × 3, a global average pooling layer, two fully connected layers, and a convolutional layer with 4 filters of size 3 × 3. The spectral information adjusting network adjusts each channel of the spatial structure enhancement network's output, and its output is denoted Mask.
Further, the output HRMS1 of the spatial structure enhancement network and the output Mask of the spectral information adjustment network are dot-multiplied, and the result of the dot multiplication is the final high-resolution multispectral image.
further, the acquired panchromatic image and the multispectral image to be fused need to be preprocessed, and the multispectral image with low spatial resolution is subjected to four-time up-sampling operation to obtain an initial multispectral image with the same size as the panchromatic image.
According to a second aspect of the embodiments of the present disclosure, there is provided a panchromatic and multispectral image fusion system based on deep learning, including:
an image acquisition unit for acquiring a full-color image and a multispectral image to be fused;
the image fusion unit is used for inputting the full-color image and the multispectral image into a pre-trained image fusion model to obtain a fused remote sensing image;
the image fusion model comprises a spatial structure enhancement network and a spectral information adjustment network, wherein the spatial structure enhancement network is based on a convolutional neural network and is trained by using remote sensing image training sets of different types of satellites; the spectral information adjusting network comprises a plurality of branch networks, each branch network is trained by using a remote sensing image of a specific satellite, and a fused image is obtained by multiplying output results of the spatial structure enhancing network and the spectral information adjusting network.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the deep-learning-based panchromatic and multispectral image fusion method when executing the program.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the deep-learning-based panchromatic and multispectral image fusion method described above.
Compared with the prior art, the beneficial effects of this disclosure are:
(1) The image fusion model of the present disclosure makes full use of a deep neural network to highlight the spatial structure information of the image and adjusts the image style with the constructed branched spectral adjustment network, so that the fused remote sensing image not only has high spatial resolution but also well preserves the characteristic style information of the specific satellite.
(2) In this scheme, only one common model needs to be trained for images from multiple satellites. Jointly training on data from multiple satellites alleviates the shortage of training data for any single satellite, which improves the training of the fusion model and the quality of the fused images.
(3) The scheme improves data utilization and, compared with the prior art, is more robust and yields more stable fusion results.
Advantages of additional aspects of the disclosure will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure; they do not limit the disclosure.
Fig. 1 is a flowchart of a spatial structure enhancement network according to a first embodiment of the disclosure;
fig. 2(a) is a schematic diagram of a branched network structure of a spectral information adjusting network according to a first embodiment of the disclosure.
Fig. 2(b) is a flowchart of a specific branch structure in the spectral information adjusting network according to the first embodiment of the disclosure;
fig. 2(c) is a schematic diagram of dot product of output results of the spatial structure enhancement network and the spectral information adjustment network according to the first embodiment of the disclosure;
Figs. 3(a)-3(d) compare the inputs and the fusion result of the image fusion method according to an embodiment of the present disclosure, where fig. 3(a) is the low-spatial-resolution multispectral image, fig. 3(b) is the high-spatial-resolution panchromatic image, fig. 3(c) is the reference image, and fig. 3(d) is the multispectral image fused by the method of the present disclosure.
Detailed Description
The present disclosure is further described with reference to the following drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit example embodiments according to the present disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of the stated features, steps, operations, devices, components, and/or combinations thereof.
The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
The first embodiment is as follows:
the embodiment aims to provide a panchromatic and multispectral image fusion method based on deep learning.
A panchromatic and multispectral image fusion method based on deep learning comprises the following steps:
acquiring a full-color image and a multispectral image to be fused;
inputting the full-color image and the multispectral image into a pre-trained image fusion model to obtain a fused remote sensing image;
the image fusion model comprises a spatial structure enhancement network and a spectral information adjustment network, wherein the spatial structure enhancement network is based on a convolutional neural network and is trained by using remote sensing image training sets of different types of satellites; the spectral information adjusting network comprises a plurality of branch networks, each branch network is trained by using a remote sensing image of a specific satellite, and a fused image is obtained by multiplying output results of the spatial structure enhancing network and the spectral information adjusting network.
For the sake of understanding, the following detailed description of the embodiments of the present disclosure is made with reference to the accompanying drawings:
the scheme of the disclosure provides a two-stage deep neural network model for a method for fusing a full-color image and a multispectral image of a satellite, wherein the first stage is a spatial structure enhancement network shared by multiple satellites based on a convolutional neural network, the second stage is a spectral information adjustment network of multiple branches, and the specific branch adjusts the result of the first stage into the style of a specific satellite. The spatial structure enhancement network uses images of various different styles as training data, the images with better spatial structure are trained through the convolutional neural network, the image style is further adjusted through the spectral information adjustment network, in order to make the purpose, technical scheme and advantages of the disclosure clearer, the technical scheme of the disclosure is further explained in detail below by combining with the attached drawings and implementation, and the specific steps are as follows.
Unlike general fusion methods based on deep convolutional networks, this method can use images from various sensors as training images, which expands the training data. Whereas a traditional fusion network requires a network structure to be designed and a model to be trained for each type of image, here images of different styles are trained with one common network model, after which a branched spectral information adjustment network performs spectral adjustment to obtain images of the required style.
Based on a deep learning network, the fusion network is divided into a spatial structure enhancement network and a branched spectral information adjustment network. The spatial structure enhancement network is trained with satellite data from various sensors, which effectively expands the training samples and yields images with good spatial structure information, although their color style does not match any single satellite. The spectral information adjustment network then extracts the style of the remote sensing images of a specific sensor and spectrally adjusts the image with good spatial structure obtained in the first stage, producing an image with the specific style.
Specifically, the steps of the method of the present disclosure are explained from data input, model training to fusion result output:
Step 1: Input images
(1a) Input the training images: multispectral images of low spatial resolution and panchromatic images of high spatial resolution;
(1b) the low-spatial-resolution multispectral image ms and the high-resolution panchromatic image pan used for training have corresponding ground-truth reference images;
(1c) the low-spatial-resolution multispectral image is upsampled by a factor of four through deconvolution to obtain an initial multispectral image ms1 ∈ R^(256×256×4) of the same size as the panchromatic image p;
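For illustration only (this code is not part of the patent), step (1c) can be realized with a strided transposed convolution. The following is a minimal sketch, assuming PyTorch; the kernel size and padding are illustrative choices, since the patent only specifies a four-times deconvolution upsampling.

```python
import torch
import torch.nn as nn

# 4x upsampling of the 64x64 multispectral image by transposed convolution;
# output size = (64 - 1) * 4 - 2 * 2 + 8 = 256.
upsample4x = nn.ConvTranspose2d(in_channels=4, out_channels=4,
                                kernel_size=8, stride=4, padding=2)

ms = torch.randn(1, 4, 64, 64)   # low-resolution multispectral image ms
ms1 = upsample4x(ms)             # initial multispectral image ms1
assert ms1.shape == (1, 4, 256, 256)
```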
Step 2: Extract effective feature maps through the spatial structure enhancement network to fuse the panchromatic and multispectral images
(2a) The panchromatic image p and the initial multispectral image ms1 are first stacked to obtain an image M ∈ R^(256×256×5); in the stacking operation, the images are concatenated along the third (channel) dimension.
(2b) M is input into the first convolution module, whose output is ms2 ∈ R^(256×256×4); M is stacked with ms2 and input into the second convolution module, whose output is ms3 ∈ R^(256×256×4); M is stacked with ms2 and ms3 and input into the third convolution module, whose output is ms4 ∈ R^(256×256×4); M is stacked with ms2, ms3 and ms4 and input into the fourth convolution module, whose output is ms5 ∈ R^(256×256×4). The panchromatic image p is stacked with itself four times to give pan2 ∈ R^(256×256×4), highlighting the spatial structure information, and pan2 is then added to the output ms5 of the fourth convolution module. The output result is HRMS1 ∈ R^(256×256×4).
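As an illustration of steps (2a)-(2b), the following is a minimal PyTorch sketch of the densely connected spatial structure enhancement network. The internal width of each convolution module (32 hidden filters) is an assumption, since the patent only fixes the module inputs and their 4-channel outputs.

```python
import torch
import torch.nn as nn

class ConvModule(nn.Module):
    """One convolution module: maps a stacked input to a 4-channel map."""
    def __init__(self, in_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 4, 3, padding=1))

    def forward(self, x):
        return self.body(x)

class SpatialStructureNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Dense connections: each module sees M plus all earlier outputs.
        self.conv1 = ConvModule(5)              # M = [pan, ms1]: 1 + 4 channels
        self.conv2 = ConvModule(5 + 4)
        self.conv3 = ConvModule(5 + 4 + 4)
        self.conv4 = ConvModule(5 + 4 + 4 + 4)

    def forward(self, pan, ms1):
        M = torch.cat([pan, ms1], dim=1)        # stack along the channel dimension
        ms2 = self.conv1(M)
        ms3 = self.conv2(torch.cat([M, ms2], dim=1))
        ms4 = self.conv3(torch.cat([M, ms2, ms3], dim=1))
        ms5 = self.conv4(torch.cat([M, ms2, ms3, ms4], dim=1))
        pan2 = pan.repeat(1, 4, 1, 1)           # pan stacked with itself four times
        return pan2 + ms5                       # HRMS1
```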
Step 3: The spectral information adjustment network extracts style features
(3a) Establish the spectral information adjustment network: the fused HRMS1 of different kinds of satellites is input into different spectral adjustment branches. The spectral adjustment network comprises a plurality of branch networks; each branch network processes one kind of remote sensing image, and HRMS1 is input into the corresponding branch for processing.
(3b) For one of the branches, the input is HRMS1 ∈ R^(256×256×4). HRMS1 first passes through a convolutional layer with 32 filters of size 3 × 3, whose output is H1 ∈ R^(256×256×32). H1 is input into a global average pooling layer, which compresses the two-dimensional feature information: each two-dimensional feature channel becomes one real number, so the 32 channels correspond to 32 real numbers that carry a degree of global perception; the output is H2 ∈ R^32. H2 is input into two fully connected layers: the first has 32 input nodes and 16 output nodes with a Rectified Linear Unit (ReLU) activation function, and the second has 16 input nodes and 32 output nodes with a Sigmoid activation function. The output of the two fully connected layers is H3 ∈ R^32. H3 adjusts H1 channel by channel: each of the 32 channels of H1 is multiplied by the corresponding element of H3. The result is processed by one convolutional layer (4 filters of size 3 × 3), and the output is Mask ∈ R^(256×256×4).
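The branch in step (3b) is essentially a squeeze-and-excitation style block. Below is a minimal PyTorch sketch under that reading; the padding choices are assumptions.

```python
import torch
import torch.nn as nn

class SpectralBranch(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv_in = nn.Conv2d(4, 32, 3, padding=1)   # 32 filters of 3x3
        self.pool = nn.AdaptiveAvgPool2d(1)             # global average pooling
        self.fc = nn.Sequential(
            nn.Linear(32, 16), nn.ReLU(inplace=True),   # first FC layer: 32 -> 16
            nn.Linear(16, 32), nn.Sigmoid())            # second FC layer: 16 -> 32
        self.conv_out = nn.Conv2d(32, 4, 3, padding=1)  # 4 filters of 3x3

    def forward(self, hrms1):
        h1 = self.conv_in(hrms1)                        # H1: B x 32 x H x W
        h2 = self.pool(h1).flatten(1)                   # H2: B x 32
        h3 = self.fc(h2).view(-1, 32, 1, 1)             # H3: per-channel weights
        return self.conv_out(h1 * h3)                   # Mask: B x 4 x H x W
```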
(3c) Mask ∈ R^(256×256×4) is dot-multiplied with HRMS1 ∈ R^(256×256×4) to obtain the final high-resolution multispectral image.
and 4, step 4: training network
The scheme of the disclosure trains the network by adopting a stochastic gradient descent algorithm, and the loss function is an L2 norm of the distance between the generated image and the reference image. The learning rate is set to 0.001. The iteration number is set to 8000 turns, the batch _ size is set to 8, and the trained training model is output after the iteration is finished.
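A sketch of this training procedure, assuming PyTorch, could look as follows. SpatialStructureNet and SpectralBranch are the sketches given earlier; the synthetic train_loader, which stands in for batches of (pan, ms1, satellite index, reference) drawn from one satellite at a time, is a hypothetical placeholder.

```python
import torch

spatial_net = SpatialStructureNet()
branches = torch.nn.ModuleList(SpectralBranch() for _ in range(5))  # one branch per satellite
params = list(spatial_net.parameters()) + list(branches.parameters())
optimizer = torch.optim.SGD(params, lr=0.001)      # learning rate per the patent

# Hypothetical stand-in for the real training set (batch size 8).
train_loader = [(torch.randn(8, 1, 256, 256), torch.randn(8, 4, 256, 256), 0,
                 torch.randn(8, 4, 256, 256))]

for it in range(8000):                             # 8000 rounds per the patent
    for pan, ms1, sat_id, ref in train_loader:
        hrms1 = spatial_net(pan, ms1)              # stage 1: spatial structure
        mask = branches[sat_id](hrms1)             # stage 2: spectral adjustment
        fused = hrms1 * mask                       # dot (element-wise) product
        loss = torch.mean((fused - ref) ** 2)      # L2 loss to the reference
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```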
Step 5: Image fusion
After the two-stage deep neural network is trained, the low-spatial-resolution multispectral image and the high-spatial-resolution panchromatic image under test are input into the spatial structure enhancement network to obtain an intermediate result C; C is then input into the spectral information adjustment network to obtain D, and D and C are dot-multiplied to obtain the final result, namely the fused image.
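Continuing the earlier sketches, the test-time fusion described here might read as follows; pan_test, ms_test and sat_id are hypothetical inputs.

```python
pan_test = torch.randn(1, 1, 256, 256)   # hypothetical test panchromatic image
ms_test = torch.randn(1, 4, 64, 64)      # hypothetical test multispectral image
sat_id = 0                               # index of the satellite's branch

with torch.no_grad():
    ms1 = upsample4x(ms_test)            # initial multispectral image
    C = spatial_net(pan_test, ms1)       # intermediate result C (HRMS1)
    D = branches[sat_id](C)              # spectral adjustment result D (Mask)
    fused = C * D                        # final fused image
```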
The effect of the scheme of the present disclosure is illustrated by the following simulations:
1. Simulation environment:
PyCharm Community Edition 2020.1 x64, Windows 10.
2. Simulation content:
The embodiment of the present disclosure uses images from five satellites: 216 pairs of QuickBird satellite images, showing green fields and light brown terraced fields; 248 pairs of WorldView satellite images, showing the dense houses of a small city at the foot of a mountain; 696 pairs of GeoEye satellite images, showing a bare mountainous area containing a town; 104 pairs of IKONOS satellite images, showing mountains in the Sichuan region; and 416 pairs of GaoFen-2 satellite images, showing urban areas. The low-spatial-resolution multispectral images are 64 × 64 × 4 with a spatial resolution of 8.0 m; the high-spatial-resolution panchromatic images are 256 × 256 with a spatial resolution of 2.0 m; and the reference images are 256 × 256 × 4.
fig. 3(a) is a low spatial resolution multi-spectral image, 64 x 4 in size,
fig. 3(b) is a high spatial resolution full color image, 256 x 256 in size,
fig. 3(c) is a reference picture, 256 x 4 in size,
fig. 3(d) is a high spatial resolution multispectral image obtained after fusing fig. 3(a) and fig. 3(b) using the present disclosure, with a size of 256 × 256 × 4.
As can be seen from the drawings, the spatial detail of fig. 3(d) is significantly improved compared with fig. 3(a), especially in detailed parts such as trees, roads and houses, where the edges are clearer; and the spectral information of fig. 3(d) is richer compared with fig. 3(b). The disclosure therefore fuses fig. 3(a) and fig. 3(b) well.
To verify the effect of the method, existing methods are compared with it: the BDSD transform method, the AWLP transform method, the Indusion method, the SVT algorithm, VPLMC, PNN and PanNet are each used to fuse the images of fig. 3(a) and fig. 3(b), and the fusion results are evaluated with the following objective indexes (an illustrative implementation of two of them follows the list):
1) Correlation coefficient CC, which represents the degree of preservation of spectral information; the result lies in the interval [0, 1], and the closer the correlation coefficient is to 1, the more similar the fusion result is to the reference image.
2) Root mean square error RMSE, the square root of the mean squared deviation between the predicted values and the true values over the n observations; the smaller the value, the better the fusion result.
3) Global composite error index ERGAS; the closer it is to 0, the better the fusion result.
4) Spectral angle SAM, which represents the degree of spectral distortion: the spectrum of each pixel in the image is regarded as a high-dimensional vector, and the similarity between spectra is measured by the angle between the two vectors; the closer to 0, the better the fusion result.
5) Global quality evaluation index Q, which represents the overall spatial and spectral similarity of the images; the result lies in the interval [0, 1], and the larger the index, the more similar the fused image is to the reference image.
6) Universal image quality index UIQI, which represents the closeness of the fused image to the reference image; the closer to 1, the better the fusion result.
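As an illustration, the correlation coefficient and the spectral angle can be computed as in the following NumPy sketch; these are standard definitions, not code from the patent.

```python
import numpy as np

def cc(fused, ref):
    """Correlation coefficient between the fused and reference images."""
    f = fused.ravel() - fused.mean()
    r = ref.ravel() - ref.mean()
    return float(np.sum(f * r) / (np.linalg.norm(f) * np.linalg.norm(r)))

def sam(fused, ref, eps=1e-12):
    """Mean spectral angle in radians over all pixels; 0 means no distortion."""
    dot = np.sum(fused * ref, axis=-1)
    denom = np.linalg.norm(fused, axis=-1) * np.linalg.norm(ref, axis=-1) + eps
    angles = np.arccos(np.clip(dot / denom, -1.0, 1.0))
    return float(angles.mean())

# Example: identical images give CC = 1 and SAM = 0.
img = np.random.rand(256, 256, 4)
print(cc(img, img), sam(img, img))
```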
The fusion results of the present disclosure and of the prior-art methods were evaluated with the above objective indexes; the results are shown in Table 1.
TABLE 1 Objective evaluation of fusion results of various methods
As can be seen from Table 1, most of the objective evaluation indexes of the present disclosure are superior to those of the prior art.
The fused image obtained by the method of the present disclosure is rich in spatial information and better preserves the spectral information of the multispectral image.
Subjectively, visual analysis of the simulation results shows that the fused image improves the spatial resolution of the original multispectral image, and using images of various kinds as training data increases the amount of training data.
Example two:
the embodiment aims to provide a panchromatic and multispectral image fusion system based on deep learning.
The system comprises: an image acquisition unit for acquiring a panchromatic image and a multispectral image to be fused;
the image fusion unit is used for inputting the full-color image and the multispectral image into a pre-trained image fusion model to obtain a fused remote sensing image;
the image fusion model comprises a spatial structure enhancement network and a spectral information adjustment network, wherein the spatial structure enhancement network is based on a convolutional neural network and is trained by using remote sensing image training sets of different types of satellites; the spectral information adjusting network comprises a plurality of branch networks, each branch network is trained by using a remote sensing image of a specific satellite, and a fused image is obtained by multiplying output results of the spatial structure enhancing network and the spectral information adjusting network.
In further embodiments, there is also provided:
an electronic device comprising a memory and a processor, and computer instructions stored on the memory and executed on the processor, the computer instructions when executed by the processor performing the method of embodiment one. For brevity, no further description is provided herein.
It should be understood that in this embodiment the processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory may include read-only memory and random access memory and provides instructions and data to the processor; a portion of the memory may also include non-volatile random access memory. For example, the memory may also store information about the device type.
A computer readable storage medium storing computer instructions which, when executed by a processor, perform the method of embodiment one.
The method in the first embodiment may be implemented directly by a hardware processor or by a combination of hardware and software modules in the processor. The software modules may reside in RAM, flash memory, ROM, PROM or EPROM, registers, or other storage media well known in the art. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the method in combination with its hardware. To avoid repetition, details are not described here.
Those of ordinary skill in the art will appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or as combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The panchromatic and multispectral image fusion method and system based on deep learning can be realized, and have wide application prospects.
The above description is only a preferred embodiment of the present disclosure and is not intended to limit the present disclosure, and various modifications and changes may be made to the present disclosure by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.
Although the present disclosure has been described with reference to specific embodiments, it should be understood that the scope of the present disclosure is not limited thereto, and those skilled in the art will appreciate that various modifications and changes can be made without departing from the spirit and scope of the present disclosure.

Claims (10)

1. A panchromatic and multispectral image fusion method based on deep learning is characterized by comprising the following steps:
acquiring a full-color image and a multispectral image to be fused;
inputting the full-color image and the multispectral image into a pre-trained image fusion model to obtain a fused remote sensing image;
the image fusion model comprises a spatial structure enhancement network and a spectral information adjustment network, wherein the spatial structure enhancement network is based on a convolutional neural network and is trained by using remote sensing image training sets of different types of satellites; the spectral information adjusting network comprises a plurality of branch networks, each branch network is trained by using a remote sensing image of a specific satellite, and a fused image is obtained by multiplying output results of the spatial structure enhancing network and the spectral information adjusting network.
2. The panchromatic and multispectral image fusion method based on deep learning as claimed in claim 1, wherein the inputs of the spatial structure enhancement network are the panchromatic image and the initial multispectral image; the two are first stacked to obtain an image M, and four densely connected convolution modules are then used to extract spatial information: M is input into the first convolution module, whose output is ms2; M is stacked with ms2 and input into the second convolution module, whose output is ms3; M is stacked with ms2 and ms3 and input into the third convolution module, whose output is ms4; M is stacked with ms2, ms3 and ms4 and input into the fourth convolution module, whose output is ms5; the panchromatic image is stacked with itself four times to give pan2, and pan2 is then added to ms5 to output HRMS1.
3. The method for fusion of panchromatic and multispectral images based on deep learning as claimed in claim 1, wherein the spectral information adjusting network routes the output of the spatial structure enhancement network to a specific branch for processing, each branch network comprising a convolutional layer with 32 filters of size 3 × 3, a global average pooling layer, two fully connected layers, and a convolutional layer with 4 filters of size 3 × 3; the spectral information adjusting network adjusts each channel of the spatial structure enhancement network's output, and its output is denoted Mask.
4. The method as claimed in claim 1, wherein the spatial structure enhancement network outputs HRMS1 and the spectral information adjustment network outputs Mask, which are dot-multiplied to obtain the final high-resolution multispectral image HRMS.
5. The method of claim 1, wherein the acquired panchromatic image and multispectral image to be fused are preprocessed: the multispectral image with low spatial resolution is upsampled by a factor of four to obtain an initial multispectral image of the same size as the panchromatic image.
6. A panchromatic and multispectral image fusion system based on deep learning is characterized by comprising the following components:
an image acquisition unit for acquiring a full-color image and a multispectral image to be fused;
the image fusion unit is used for inputting the full-color image and the multispectral image into a pre-trained image fusion model to obtain a fused remote sensing image;
the image fusion model comprises a spatial structure enhancement network and a spectral information adjustment network, wherein the spatial structure enhancement network is based on a convolutional neural network and is trained by using remote sensing image training sets of different types of satellites; the spectral information adjusting network comprises a plurality of branch networks, each branch network is trained by using a remote sensing image of a specific satellite, and a fused image is obtained by multiplying output results of the spatial structure enhancing network and the spectral information adjusting network.
7. The panchromatic and multispectral image fusion system based on deep learning as claimed in claim 6, wherein the inputs of the spatial structure enhancement network are the panchromatic image and the initial multispectral image; the two are first stacked to obtain an image M, and four densely connected convolution modules are then used to extract spatial information: M is input into the first convolution module, whose output is ms2; M is stacked with ms2 and input into the second convolution module, whose output is ms3; M is stacked with ms2 and ms3 and input into the third convolution module, whose output is ms4; M is stacked with ms2, ms3 and ms4 and input into the fourth convolution module, whose output is ms5; the panchromatic image is stacked with itself four times to give pan2, and pan2 is then added to ms5 to output HRMS1.
8. The deep learning-based panchromatic and multispectral image fusion system of claim 6, wherein the spectral information adjusting network routes the output of the spatial structure enhancement network to a specific branch for processing, each branch network comprising a convolutional layer with 32 filters of size 3 × 3, a global average pooling layer, two fully connected layers, and a convolutional layer with 4 filters of size 3 × 3; the spectral information adjusting network adjusts each channel of the spatial structure enhancement network's output, and its output is denoted Mask.
9. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the deep learning-based panchromatic and multispectral image fusion method according to any one of claims 1-4 when executing the program.
10. A non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a deep learning based panchromatic and multispectral image fusion method according to any one of claims 1-4.
CN202110425489.7A 2021-04-20 2021-04-20 Panchromatic and multispectral image fusion method based on deep learning Active CN113191993B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110425489.7A CN113191993B (en) 2021-04-20 2021-04-20 Panchromatic and multispectral image fusion method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110425489.7A CN113191993B (en) 2021-04-20 2021-04-20 Panchromatic and multispectral image fusion method based on deep learning

Publications (2)

Publication Number Publication Date
CN113191993A true CN113191993A (en) 2021-07-30
CN113191993B CN113191993B (en) 2022-11-04

Family

ID=76977516

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110425489.7A Active CN113191993B (en) 2021-04-20 2021-04-20 Panchromatic and multispectral image fusion method based on deep learning

Country Status (1)

Country Link
CN (1) CN113191993B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050111754A1 (en) * 2003-11-05 2005-05-26 Cakir Halil I. Methods, systems and computer program products for fusion of high spatial resolution imagery with lower spatial resolution imagery using a multiresolution approach
CN109146831A (en) * 2018-08-01 2019-01-04 武汉大学 Remote sensing image fusion method and system based on double branch deep learning networks
CN109767412A (en) * 2018-12-28 2019-05-17 珠海大横琴科技发展有限公司 A kind of remote sensing image fusing method and system based on depth residual error neural network
CN110660038A (en) * 2019-09-09 2020-01-07 山东工商学院 Multispectral image and panchromatic image fusion method based on generation countermeasure network
CN111833280A (en) * 2019-09-30 2020-10-27 东南大学 High-fidelity remote sensing image fusion method based on intermediate frequency signal modulation and compensation
CN111127374A (en) * 2019-11-22 2020-05-08 西北大学 Pan-sharing method based on multi-scale dense network
CN111080567A (en) * 2019-12-12 2020-04-28 长沙理工大学 Remote sensing image fusion method and system based on multi-scale dynamic convolution neural network
CN112465733A (en) * 2020-08-31 2021-03-09 长沙理工大学 Remote sensing image fusion method, device, medium and equipment based on semi-supervised learning
CN112184554A (en) * 2020-10-13 2021-01-05 重庆邮电大学 Remote sensing image fusion method based on residual mixed expansion convolution
CN112529827A (en) * 2020-12-14 2021-03-19 珠海大横琴科技发展有限公司 Training method and device for remote sensing image fusion model
CN112488978A (en) * 2021-02-05 2021-03-12 湖南大学 Multi-spectral image fusion imaging method and system based on fuzzy kernel estimation

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
QIANGQIANG YUAN等: "A Multiscale and Multidepth Convolutional Neural Network for Remote Sensing Imagery Pan-Sharpening", 《IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING》 *
ZHENFENG SHAO等: "Remote Sensing Image Fusion With Deep Convolutional Neural Network", 《IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING》 *
SUN JIABING: "Principles and Applications of Remote Sensing", 30 June 2013 *
LIU WEN et al.: "Comparative Study of Fusion Methods for ALOS Panchromatic and Multispectral Images", Science Technology and Engineering *
WANG MINGLI et al.: "Remote Sensing Image Fusion Based on Convolutional Neural Networks with Cross-Layer Copy Connections", Journal of Jilin University (Science Edition) *

Also Published As

Publication number Publication date
CN113191993B (en) 2022-11-04

Similar Documents

Publication Publication Date Title
Zhang et al. Pan-sharpening using an efficient bidirectional pyramid network
CN112634137B (en) Hyperspectral and panchromatic image fusion method for extracting multiscale spatial spectrum features based on AE
CN110363215B (en) Method for converting SAR image into optical image based on generating type countermeasure network
CN109191382B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN107392925B (en) Remote sensing image ground object classification method based on super-pixel coding and convolutional neural network
CN111428781A (en) Remote sensing image ground object classification method and system
CN109410164B (en) The satellite PAN and multi-spectral image interfusion method of multiple dimensioned convolutional neural networks
CN110415199B (en) Multispectral remote sensing image fusion method and device based on residual learning
CN107358260B (en) Multispectral image classification method based on surface wave CNN
CN112184554B (en) Remote sensing image fusion method based on residual mixed expansion convolution
CN110287869A (en) High-resolution remote sensing image Crop classification method based on deep learning
CN108765425B (en) Image segmentation method and device, computer equipment and storage medium
CN108596108B (en) Aerial remote sensing image change detection method based on triple semantic relation learning
CN109727207B (en) Hyperspectral image sharpening method based on spectrum prediction residual convolution neural network
CN108960404B (en) Image-based crowd counting method and device
CN109636769A (en) EO-1 hyperion and Multispectral Image Fusion Methods based on the intensive residual error network of two-way
CN110544212B (en) Convolutional neural network hyperspectral image sharpening method based on hierarchical feature fusion
CN112488978A (en) Multi-spectral image fusion imaging method and system based on fuzzy kernel estimation
CN113610905B (en) Deep learning remote sensing image registration method based on sub-image matching and application
CN113901900A (en) Unsupervised change detection method and system for homologous or heterologous remote sensing image
CN112862871A (en) Image fusion method and device
CN104881682A (en) Image classification method based on locality preserving mapping and principal component analysis
CN115760814A (en) Remote sensing image fusion method and system based on double-coupling deep neural network
CN111914909A (en) Hyperspectral change detection method based on space-spectrum combined three-direction convolution network
CN113139902A (en) Hyperspectral image super-resolution reconstruction method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant