CN111524063A - Remote sensing image fusion method and device

Remote sensing image fusion method and device

Info

Publication number
CN111524063A
CN111524063A
Authority
CN
China
Prior art keywords
image
remote sensing
multispectral
training sample
sensing image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911347962.3A
Other languages
Chinese (zh)
Inventor
邓练兵
陈金鹿
薛剑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Dahengqin Technology Development Co Ltd
Original Assignee
Zhuhai Dahengqin Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Dahengqin Technology Development Co Ltd filed Critical Zhuhai Dahengqin Technology Development Co Ltd
Priority to CN201911347962.3A priority Critical patent/CN111524063A/en
Publication of CN111524063A publication Critical patent/CN111524063A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4053 - Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T 3/4061 - Super-resolution scaling by injecting details from different spectral ranges
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4046 - Scaling using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a remote sensing image fusion method and device. The method comprises a remote sensing image fusion model training method and a remote sensing image fusion method. The training method comprises the following steps: acquiring a training sample, wherein the training sample is obtained by downsampling a panchromatic image and a multispectral image by a preset factor; preprocessing the training sample according to the Wald criterion to generate a label; extracting features of the training sample with a preset deep residual neural network model to obtain convolution features, and adding the convolution features to the multispectral image to obtain a residual result; determining a loss function from the residual result and the label of the training sample; and adjusting the preset deep residual neural network model by gradient descent according to the loss function to obtain the remote sensing image fusion model. By implementing the method, the spectral and spatial information provided by the multispectral and panchromatic images can be fully exploited to obtain an image with both high spatial and high spectral resolution.

Description

Remote sensing image fusion method and device
Technical Field
The invention relates to the field of image processing, in particular to a remote sensing image fusion method and device.
Background
Remote sensing image fusion is widely used in environmental monitoring, land cover classification, climate monitoring, and the like. Remote sensing images comprise panchromatic images and multispectral images: a panchromatic image has high spatial resolution but low spectral resolution, so it carries little spectral information; a multispectral image has high spectral resolution but low spatial resolution, so it carries little spatial information.
To make full use of the spectral and spatial information provided by multispectral and panchromatic images, how to obtain an image with both high spatial and high spectral resolution is an urgent problem to be solved.
Disclosure of Invention
Therefore, the technical problem to be solved by the present invention is to overcome the defects in the prior art that the panchromatic image in a remote sensing image contains little spectral information, the multispectral image contains little spatial information, and the spectral and spatial information they provide cannot be fully exploited, and accordingly to provide a remote sensing image fusion method and device.
According to a first aspect, an embodiment provides a training method for a remote sensing image fusion model, comprising the following steps: acquiring a training sample, wherein the training sample is obtained by downsampling a panchromatic image and a multispectral image by a preset factor respectively; preprocessing the training sample according to the Wald criterion to generate a label representing the panchromatic and multispectral images; extracting features of the training sample with a preset deep residual neural network model to obtain convolution features, and adding the convolution features to the multispectral image to obtain a residual result; determining a loss function of the preset deep residual neural network model from the residual result and the label of the training sample; and adjusting the preset deep residual neural network model by gradient descent according to the loss function to obtain the remote sensing image fusion model.
With reference to the first aspect, in a first implementation of the first aspect, the loss function of the preset deep residual neural network model is determined from the residual result and the label of the training sample by the following formula:

$$\mathrm{Loss} = \frac{1}{T}\sum_{t=1}^{T}\left\|f\left(x^{(t)}\right)-y^{(t)}\right\|^{2}$$

where Loss denotes the loss, x^{(t)} denotes the t-th input image, y^{(t)} denotes the t-th high-spatial-resolution multispectral image, f(x^{(t)}) denotes the result fused by the deep residual neural network, and T is the number of training samples.
With reference to the first aspect, in a second implementation of the first aspect, adjusting the preset deep residual neural network model by gradient descent according to the loss function to obtain the remote sensing image fusion model comprises: determining that the remote sensing image fusion model is obtained when the loss function reaches its minimum value.
According to a second aspect, an embodiment provides a remote sensing image fusion method, comprising the following steps: acquiring and preprocessing a panchromatic image and a multispectral image to be fused; and inputting the panchromatic image and the multispectral image into the remote sensing image fusion model according to the first aspect or any implementation of the first aspect to obtain a fused image.
With reference to the second aspect, in a first implementation of the second aspect, the method further comprises: regarding the spectrum of each pixel in the fused image as a high-dimensional vector, and measuring the spectral distortion from the multispectral image to the fused image by computing the angle between the two vectors; the smaller the result, the better the spectral quality.
With reference to the second aspect, in a second implementation of the second aspect, the method further comprises: computing the mean difference between the multispectral image and the fused image; the smaller the result, the better the spectral quality.
According to a third aspect, an embodiment of the present invention provides a training device for a remote sensing image fusion model, comprising: a training sample acquisition module, configured to acquire a training sample, wherein the training sample is obtained by downsampling the panchromatic image and the multispectral image by a preset factor respectively; a label generation module, configured to preprocess the training sample according to the Wald criterion to generate a label representing the panchromatic and multispectral images; a residual result calculation module, configured to extract features of the training sample with a preset deep residual neural network model to obtain convolution features, and to add the convolution features to the multispectral image to obtain a residual result; a loss calculation module, configured to determine a loss function of the preset deep residual neural network model from the residual result and the label of the training sample; and a remote sensing image fusion model acquisition module, configured to adjust the preset deep residual neural network model by gradient descent according to the loss function to obtain the remote sensing image fusion model.
According to a fourth aspect, the present invention provides a remote sensing image fusion apparatus, comprising: an image acquisition module, configured to acquire and preprocess a panchromatic image and a multispectral image to be fused; and a fused image output module, configured to input the panchromatic image and the multispectral image into the remote sensing image fusion model according to the first aspect or any implementation of the first aspect to obtain a fused image.
According to a fifth aspect, the present invention provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the training method for a remote sensing image fusion model according to the first aspect or any implementation of the first aspect, or of the remote sensing image fusion method according to the second aspect or any implementation of the second aspect.
According to a sixth aspect, the present invention provides a storage medium having computer instructions stored thereon, wherein the instructions, when executed by a processor, implement the steps of the training method for a remote sensing image fusion model according to the first aspect or any implementation of the first aspect, or of the remote sensing image fusion method according to the second aspect or any implementation of the second aspect.
The technical solutions of the invention have the following advantages:
1. The invention provides a training method for a remote sensing image fusion model that obtains an ideal fusion model by training a deep residual neural network, makes full use of the spectral and spatial information provided by the multispectral and panchromatic images in a remote sensing image, and yields an image with both high spatial and high spectral resolution.
2. According to the training method provided by the invention, when the loss function reaches its minimum value, the resulting model is determined to be a remote sensing image fusion model that meets the image fusion requirement, which provides a criterion for judging whether training has reached the required standard.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flowchart illustrating a specific example of a method for training a remote sensing image fusion model according to an embodiment of the present invention;
FIG. 2 is a flowchart of a specific example of a remote sensing image fusion method according to an embodiment of the present invention;
FIG. 3 is a schematic block diagram of a specific example of a training apparatus for a remote sensing image fusion model according to an embodiment of the present invention;
FIG. 4 is a schematic block diagram of a specific example of a remote sensing image fusion device according to an embodiment of the present invention;
FIG. 5 is a schematic block diagram of a specific example of an electronic device in an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; as a mechanical or electrical connection; as a direct connection or an indirect connection through an intermediate medium, or as internal communication between two elements; and as a wireless or wired connection. The specific meanings of these terms in the present invention can be understood by those skilled in the art on a case-by-case basis.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
This embodiment provides a training method for a remote sensing image fusion model. As shown in FIG. 1, the method comprises the following steps:
S110, acquiring a training sample, wherein the training sample is obtained by downsampling the panchromatic image and the multispectral image by a preset factor respectively.
The samples may be obtained by capturing multispectral and panchromatic images of the same region with Earth observation satellites such as the Gaofen series, GeoEye-1, or QuickBird, or by retrieving panchromatic and multispectral images of the same region from an online image database. The training samples may be all of the obtained samples or only a part of them, for example 70% of the obtained samples. This embodiment does not limit how training samples are acquired; the acquisition mode can be chosen as needed.
Because no multispectral image exists whose spatial resolution matches that of the panchromatic image, the original multispectral image can be used as the ground truth, and the downsampled panchromatic and multispectral images are used as the input for training the network. Taking WorldView imagery as an example, the original fusion task is to fuse a 0.3-meter panchromatic image and a 1.2-meter multispectral image into a 0.3-meter multispectral image. Training directly on this task is impossible, because the network would need a 0.3-meter multispectral image as the ground truth, which does not actually exist. To alleviate this problem, the 0.3-meter panchromatic image and the 1.2-meter multispectral image can each be downsampled by a factor of 4 to obtain a 1.2-meter panchromatic image and a 4.8-meter multispectral image. The fusion task then becomes fusing the 1.2-meter panchromatic image and the 4.8-meter multispectral image into a 1.2-meter multispectral image, for which the ground truth exists, so the whole training process can proceed normally, as sketched below.
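For illustration, a minimal sketch of this sample preparation under stated assumptions: PyTorch is used, bicubic filtering stands in for the low-pass downsampling, and the function name make_training_pair is hypothetical; none of these choices is mandated by the patent.

```python
# Hypothetical sketch of Wald-criterion sample preparation; the 4x factor
# matches the WorldView example above, the bicubic filter is an assumption.
import numpy as np
import torch
import torch.nn.functional as F

def make_training_pair(pan: np.ndarray, ms: np.ndarray, scale: int = 4):
    """pan: (H, W) panchromatic image; ms: (C, H//scale, W//scale) multispectral image.

    Returns the downsampled (pan_lr, ms_lr) as network input and the
    original multispectral image as the ground-truth label.
    """
    pan_t = torch.from_numpy(pan).float()[None, None]  # (1, 1, H, W)
    ms_t = torch.from_numpy(ms).float()[None]          # (1, C, H//scale, W//scale)
    # Downsample both inputs by the same preset factor.
    pan_lr = F.interpolate(pan_t, scale_factor=1.0 / scale, mode="bicubic",
                           align_corners=False)
    ms_lr = F.interpolate(ms_t, scale_factor=1.0 / scale, mode="bicubic",
                          align_corners=False)
    return pan_lr, ms_lr, ms_t  # ms_t serves as the label (ground truth)
```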
S120, preprocessing the training sample according to the Wald criterion to generate labels representing the panchromatic and multispectral images.
Illustratively, since no real image with both high spatial resolution and high spectral resolution is available to serve as a label when training the network, the Wald criterion is adopted to prepare the panchromatic images, multispectral images, and labels required for training.
S130, extracting features of the training sample with the preset deep residual neural network model to obtain convolution features, and adding the convolution features to the multispectral image to obtain a residual result.
Illustratively, to extract deep image features, a convolutional neural network containing multiple convolution modules is constructed to extract the features of the panchromatic and multispectral images. The two images are then fused on the basis of the extracted features, yielding a better fusion result.
Each convolution module comprises two parts: a convolution and a nonlinear ReLU activation. Together these can be expressed as:

$$y^{(j)} = \max\left(0,\ \sum_{i} k^{(i)(j)} * x^{(i)} + b^{(j)}\right)$$

where y^{(j)} denotes the j-th output feature map, x^{(i)} the i-th input feature map, k^{(i)(j)} the convolution kernel applied to the i-th input feature map to obtain the j-th output feature map, and b^{(j)} the bias.
By constructing a neural network containing multiple convolution modules, features are extracted from the input training sample, and the extracted features are finally added to the input low-spatial-resolution multispectral image to form the residual result, i.e., the fused image; a sketch of such a network follows.
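A minimal sketch of a deep residual fusion network of this kind, assuming PyTorch; the layer count, channel width, kernel size, and class name ResidualFusionNet are illustrative assumptions, not taken from the patent.

```python
# Hypothetical residual fusion network: stacked conv+ReLU modules whose
# output is added to the upsampled multispectral input (the residual result).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualFusionNet(nn.Module):
    def __init__(self, ms_bands: int = 4, width: int = 64, depth: int = 5):
        super().__init__()
        # Input: the panchromatic band stacked with the upsampled multispectral bands.
        layers = [nn.Conv2d(1 + ms_bands, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(width, ms_bands, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, pan: torch.Tensor, ms: torch.Tensor) -> torch.Tensor:
        # Bring the multispectral image to the panchromatic resolution.
        ms_up = F.interpolate(ms, size=pan.shape[-2:], mode="bicubic",
                              align_corners=False)
        feats = self.body(torch.cat([pan, ms_up], dim=1))  # convolution features
        return feats + ms_up  # residual result: features + multispectral image
```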
S140, determining the loss function of the preset deep residual neural network model from the residual result and the label of the training sample.
For example, the loss may be an exponential error loss, an absolute error loss, or a squared error loss computed from the residual result and the label of the training sample. This embodiment does not limit how the loss function is obtained; it can be chosen as needed.
S150, adjusting the preset deep residual neural network model by gradient descent according to the loss function to obtain the remote sensing image fusion model.
Illustratively, the internal parameters of the deep residual neural network model are updated according to the loss function so that the loss decreases along the direction of the negative gradient.
This embodiment provides a training method for a remote sensing image fusion model that obtains an ideal fusion model by training a deep residual neural network, makes full use of the spectral and spatial information provided by the multispectral and panchromatic images in a remote sensing image, and yields an image with both high spatial and high spectral resolution.
As an optional embodiment of the present application, step S150 comprises: determining that the remote sensing image fusion model is obtained when the loss function reaches its minimum value.
For example, whether the loss function has reached its minimum can be determined by evaluating the loss under different parameters; if the loss computed under a certain parameter setting is smaller than the losses computed under all other settings, the current loss is the minimum.
According to the training method provided by this embodiment, when the loss function reaches its minimum value, the resulting model is determined to be a remote sensing image fusion model that meets the image fusion requirement, which provides a criterion for judging whether training has reached the required standard.
As an optional embodiment of the present application, the loss of the preset deep residual neural network model is calculated from the residual result and the label of the training sample by the following formula:

$$\mathrm{Loss} = \frac{1}{T}\sum_{t=1}^{T}\left\|f\left(x^{(t)}\right)-y^{(t)}\right\|^{2}$$

where Loss denotes the loss, x^{(t)} denotes the t-th input image, y^{(t)} denotes the t-th high-spatial-resolution multispectral image, f(x^{(t)}) denotes the result fused by the deep residual neural network, and T is the number of training samples; a training sketch combining this loss with gradient descent follows.
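A minimal training-loop sketch under stated assumptions: plain stochastic gradient descent with a squared-error (MSE) loss in PyTorch; the optimizer choice, learning rate, epoch count, and the loader protocol are illustrative, not specified by the patent.

```python
# Hypothetical training loop: minimize the squared-error loss between the
# fused result f(x) and the label y by gradient descent.
import torch
import torch.nn as nn

def train(model: nn.Module, loader, epochs: int = 100, lr: float = 1e-3) -> nn.Module:
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)  # gradient descent
    criterion = nn.MSELoss()  # mean squared error over the batch
    for _ in range(epochs):
        for pan_lr, ms_lr, label in loader:  # samples built per the Wald criterion
            fused = model(pan_lr, ms_lr)     # f(x^(t)): the residual fusion result
            loss = criterion(fused, label)   # compare against the label y^(t)
            optimizer.zero_grad()
            loss.backward()                  # backpropagate the loss
            optimizer.step()                 # step along the negative gradient
    return model
```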
This embodiment provides a remote sensing image fusion method. As shown in FIG. 2, the method comprises the following steps:
S210, acquiring and preprocessing the panchromatic image and the multispectral image to be fused.
For example, the preprocessing is performed as in step S110 above and is not repeated here. The images may be obtained by capturing multispectral and panchromatic images of the same region with Earth observation satellites such as the Gaofen series, GeoEye-1, or QuickBird, preprocessing them, reading them into computer memory, and feeding them to the network. This embodiment does not limit how the panchromatic and multispectral images to be fused are acquired; the mode can be chosen as needed.
S220, inputting the panchromatic image and the multispectral image into the remote sensing image fusion model of any embodiment above to obtain a fused image.
This embodiment provides a remote sensing image fusion method in which the images to be fused are fused by the remote sensing image fusion model, so that the spectral and spatial information provided by the multispectral and panchromatic images in the remote sensing image is fully exploited and an image with both high spatial and high spectral resolution can be obtained.
As an optional embodiment of the present application, the remote sensing image fusion method further comprises: regarding the spectrum of each pixel in the fused image as a high-dimensional vector, and measuring the spectral distortion from the multispectral image to the fused image by computing the angle between the two vectors; the smaller the result, the better the spectral quality.
Illustratively, the spectral curve of a pixel in the fused remote sensing image, i.e., its brightness values over several bands, is mapped into an N-dimensional space; each spectral curve is regarded as a vector with a direction and a length, and the angle between the two vectors is computed as:

$$\theta = \arccos\left(\frac{\sum_{i=1}^{N} X_i Y_i}{\sqrt{\sum_{i=1}^{N} X_i^{2}}\ \sqrt{\sum_{i=1}^{N} Y_i^{2}}}\right)$$

where X is the spectral vector of a ground feature in the original multispectral image and Y is the spectral vector of the same ground feature in the fused image.
According to the remote sensing image fusion method provided by this embodiment, computing the angle between spectral vectors determines how well the fused image produced by the remote sensing image fusion model preserves the spectral information of the original image; a sketch of the computation follows.
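A minimal sketch of this spectral-angle measurement, assuming NumPy; averaging the per-pixel angles over the whole image is an assumption, since the text only defines the angle for a single pair of spectral vectors.

```python
# Hypothetical spectral-angle evaluation: treat each pixel's spectrum as a
# vector and compute the angle between original and fused spectra.
import numpy as np

def mean_spectral_angle(ms: np.ndarray, fused: np.ndarray) -> float:
    """ms, fused: (C, H, W) images; returns the mean angle in radians."""
    X = ms.reshape(ms.shape[0], -1).astype(np.float64)      # (C, H*W)
    Y = fused.reshape(fused.shape[0], -1).astype(np.float64)
    dot = (X * Y).sum(axis=0)
    norms = np.linalg.norm(X, axis=0) * np.linalg.norm(Y, axis=0)
    cos = np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0)
    return float(np.arccos(cos).mean())  # smaller angle = better spectral quality
```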
As an optional embodiment of the present application, the remote sensing image fusion method further comprises: computing the mean difference between the multispectral image and the fused image; the smaller the result, the better the spectral quality.
Illustratively, the calculation formula may be:

$$\mathrm{RMSE} = \sqrt{\frac{1}{N \times M}\sum_{i=1}^{N}\sum_{j=1}^{M}\left(R(i,j)-F(i,j)\right)^{2}}$$

where RMSE denotes the root-mean-square difference between the multispectral image and the fused image, i and j are pixel indices, N and M are the image dimensions, R denotes the standard reference image, and F denotes the fused image produced by the remote sensing image fusion model. The remote sensing image fusion method provided by this embodiment judges the quality of the fusion result by computing this mean difference between the multispectral image and the fused image, as sketched below.
This embodiment provides a training device for a remote sensing image fusion model. As shown in FIG. 3, the device comprises:
a training sample acquisition module 310, configured to acquire a training sample, wherein the training sample is obtained by downsampling the panchromatic image and the multispectral image by a preset factor respectively; the specific implementation is described in the part of this embodiment corresponding to S110 and is not repeated here;
a label generation module 320, configured to preprocess the training sample according to the Wald criterion to generate labels representing the panchromatic and multispectral images; the specific implementation is described in the part of this embodiment corresponding to S120 and is not repeated here;
a residual result calculation module 330, configured to extract features of the training sample with a preset deep residual neural network model to obtain convolution features, and to add the convolution features to the multispectral image to obtain a residual result; the specific implementation is described in the part of this embodiment corresponding to S130 and is not repeated here;
a loss calculation module 340, configured to determine a loss function of the preset deep residual neural network model from the residual result and the label of the training sample; the specific implementation is described in the part of this embodiment corresponding to S140 and is not repeated here;
and a remote sensing image fusion model acquisition module 350, configured to adjust the preset deep residual neural network model by gradient descent according to the loss function to obtain the remote sensing image fusion model; the specific implementation is described in the part of this embodiment corresponding to S150 and is not repeated here.
This embodiment provides a training device for a remote sensing image fusion model that obtains an ideal fusion model by training a deep residual neural network, makes full use of the spectral and spatial information provided by the multispectral and panchromatic images in a remote sensing image, and can obtain an image with both high spatial and high spectral resolution.
As an optional embodiment of the present application, the remote sensing image fusion model acquisition module 350 comprises: a remote sensing image fusion model determining module, configured to determine that the remote sensing image fusion model is obtained when the loss function reaches its minimum value.
As an optional embodiment of the present application, the loss calculation module 340 may calculate the loss of the preset deep residual neural network model from the residual result and the label of the training sample by the following formula:

$$\mathrm{Loss} = \frac{1}{T}\sum_{t=1}^{T}\left\|f\left(x^{(t)}\right)-y^{(t)}\right\|^{2}$$

where Loss denotes the loss, x^{(t)} denotes the t-th input image, y^{(t)} denotes the t-th high-spatial-resolution multispectral image, f(x^{(t)}) denotes the result fused by the deep residual neural network, and T is the number of training samples. The specific implementation is described in the corresponding part of the method of this embodiment and is not repeated here.
This embodiment provides a remote sensing image fusion apparatus. As shown in FIG. 4, the apparatus comprises:
an image acquisition module 410, configured to acquire and preprocess the panchromatic image and the multispectral image to be fused; the specific implementation is described in the part of this embodiment corresponding to S210 and is not repeated here;
and a fused image output module 420, configured to input the panchromatic image and the multispectral image into the remote sensing image fusion model of any embodiment above to obtain a fused image; the specific implementation is described in the part of this embodiment corresponding to S220 and is not repeated here.
This embodiment provides a remote sensing image fusion apparatus that fuses the acquired images to be fused with the remote sensing image fusion model, so that the spectral and spatial information provided by the multispectral and panchromatic images in the remote sensing image is fully exploited and an image with both high spatial and high spectral resolution can be obtained.
As an optional embodiment of the present application, the remote sensing image fusion apparatus further comprises:
a spectral evaluation module, configured to regard the spectrum of each pixel in the fused image as a high-dimensional vector and to measure the spectral distortion from the multispectral image to the fused image by computing the angle between the two vectors, a smaller result indicating better spectral quality. The specific implementation is described in the corresponding part of the method of this embodiment and is not repeated here.
As an optional embodiment of the present application, the remote sensing image fusion apparatus further comprises:
an evaluation module, configured to compute the mean difference between the multispectral image and the fused image, a smaller result indicating better spectral quality. The specific implementation is described in the corresponding part of the method of this embodiment and is not repeated here.
An embodiment of the present application further provides an electronic device. As shown in FIG. 5, the device comprises a processor 510 and a memory 520, which may be connected by a bus or by other means.
The processor 510 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or a combination thereof.
The memory 520, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the remote sensing image fusion model training method or the remote sensing image fusion method in the embodiments of the present invention. The processor 510 performs its various functional applications and data processing by running the non-transitory software programs, instructions, and modules stored in the memory 520.
The memory 520 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created by the processor, and the like. Further, the memory may include high-speed random access memory, and may also include non-transitory memory such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some embodiments, the memory 520 may optionally include memory located remotely from the processor, connected to the processor via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 520 and, when executed by the processor 510, perform the remote sensing image fusion model training method or the remote sensing image fusion method of the embodiments shown in FIG. 1 or FIG. 2.
The details of the electronic device may be understood with reference to the corresponding descriptions and effects in the embodiments shown in FIG. 1 or FIG. 2 and are not repeated here.
This embodiment further provides a computer storage medium storing computer-executable instructions that can execute the remote sensing image fusion model training method or the remote sensing image fusion method of any of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the storage medium may also comprise a combination of the above types of memory.
It should be understood that the above examples are given only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments here, and obvious variations or modifications derived therefrom remain within the protection scope of the invention.

Claims (10)

1. A training method for a remote sensing image fusion model, characterized by comprising the following steps:
acquiring a training sample, wherein the training sample is obtained by downsampling a panchromatic image and a multispectral image by a preset factor respectively;
preprocessing the training sample according to the Wald criterion to generate a label representing the panchromatic image and the multispectral image;
extracting features of the training sample with a preset deep residual neural network model to obtain convolution features, and adding the convolution features to the multispectral image to obtain a residual result;
determining a loss function of the preset deep residual neural network model from the residual result and the label of the training sample;
and adjusting the preset deep residual neural network model by gradient descent according to the loss function to obtain the remote sensing image fusion model.
2. The method according to claim 1, wherein adjusting the preset deep residual neural network model by gradient descent according to the loss function to obtain the remote sensing image fusion model comprises:
determining that the remote sensing image fusion model is obtained when the loss function reaches its minimum value.
3. The method according to claim 1, wherein the loss function of the preset deep residual neural network model is determined from the residual result and the label of the training sample by the following formula:

$$\mathrm{Loss} = \frac{1}{T}\sum_{t=1}^{T}\left\|f\left(x^{(t)}\right)-y^{(t)}\right\|^{2}$$

where Loss denotes the loss, x^{(t)} denotes the t-th input image, y^{(t)} denotes the t-th high-spatial-resolution multispectral image, f(x^{(t)}) denotes the result fused by the deep residual neural network, and T is the number of training samples.
4. A remote sensing image fusion method, characterized by comprising the following steps:
acquiring and preprocessing a panchromatic image and a multispectral image to be fused;
inputting the panchromatic image and the multispectral image into the remote sensing image fusion model according to any one of claims 1 to 3 to obtain a fused image.
5. The method according to claim 4, further comprising: regarding the spectrum of each pixel in the fused image as a high-dimensional vector, and measuring the spectral distortion from the multispectral image to the fused image by computing the angle between the two vectors, wherein the smaller the result, the better the spectral quality.
6. The method according to claim 4, further comprising: computing the mean difference between the multispectral image and the fused image, wherein the smaller the result, the better the spectral quality.
7. A training device for a remote sensing image fusion model, characterized by comprising:
a training sample acquisition module, configured to acquire a training sample, wherein the training sample is obtained by downsampling the panchromatic image and the multispectral image by a preset factor respectively;
a label generation module, configured to preprocess the training sample according to the Wald criterion to generate a label representing the panchromatic image and the multispectral image;
a residual result calculation module, configured to extract features of the training sample with a preset deep residual neural network model to obtain convolution features, and to add the convolution features to the multispectral image to obtain a residual result;
a loss calculation module, configured to determine a loss function of the preset deep residual neural network model from the residual result and the label of the training sample;
and a remote sensing image fusion model acquisition module, configured to adjust the preset deep residual neural network model by gradient descent according to the loss function to obtain the remote sensing image fusion model.
8. A remote sensing image fusion apparatus, comprising:
an image acquisition module, configured to acquire and preprocess a panchromatic image and a multispectral image to be fused;
a fused image output module, configured to input the panchromatic image and the multispectral image into the remote sensing image fusion model according to claim 1 or 2 to obtain a fused image.
9. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the training method for a remote sensing image fusion model according to any one of claims 1 to 3 or of the remote sensing image fusion method according to any one of claims 4 to 6.
10. A storage medium having computer instructions stored thereon, wherein the instructions, when executed by a processor, implement the steps of the training method for a remote sensing image fusion model according to any one of claims 1 to 3 or of the remote sensing image fusion method according to any one of claims 4 to 6.
CN201911347962.3A 2019-12-24 2019-12-24 Remote sensing image fusion method and device Pending CN111524063A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911347962.3A CN111524063A (en) 2019-12-24 2019-12-24 Remote sensing image fusion method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911347962.3A CN111524063A (en) 2019-12-24 2019-12-24 Remote sensing image fusion method and device

Publications (1)

Publication Number Publication Date
CN111524063A true CN111524063A (en) 2020-08-11

Family

ID=71900405

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911347962.3A Pending CN111524063A (en) 2019-12-24 2019-12-24 Remote sensing image fusion method and device

Country Status (1)

Country Link
CN (1) CN111524063A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112328584A (en) * 2020-11-10 2021-02-05 中国科学院空天信息创新研究院 Multi-dimensional space-time spectrum data fusion method and device, electronic equipment and storage medium
CN112508082A (en) * 2020-12-02 2021-03-16 武汉大学 Unsupervised learning remote sensing image space spectrum fusion method and system
CN112529827A (en) * 2020-12-14 2021-03-19 珠海大横琴科技发展有限公司 Training method and device for remote sensing image fusion model
CN112991249A (en) * 2021-03-18 2021-06-18 国网经济技术研究院有限公司 Remote sensing image fusion method based on depth separable CNN model
CN113222835A (en) * 2021-04-22 2021-08-06 海南大学 Remote sensing full-color and multi-spectral image distributed fusion method based on residual error network
WO2023093281A1 (en) * 2021-11-25 2023-06-01 华为技术有限公司 Image processing method, model training method and electronic device
WO2023164929A1 (en) * 2022-03-01 2023-09-07 中国科学院深圳先进技术研究院 Multi-source remote sensing image fusion method and apparatus, device and storage medium
CN117726915A (en) * 2024-02-07 2024-03-19 南方海洋科学与工程广东省实验室(广州) Remote sensing data spatial spectrum fusion method and device, storage medium and terminal

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1588447A (en) * 2004-08-19 2005-03-02 复旦大学 Remote sensitive image fusing method based on residual error
US8761506B1 (en) * 2011-04-22 2014-06-24 DigitalGlobe, Incorporated Pan sharpening digital imagery
CN108960345A (en) * 2018-08-08 2018-12-07 广东工业大学 A kind of fusion method of remote sensing images, system and associated component
CN109146831A (en) * 2018-08-01 2019-01-04 武汉大学 Remote sensing image fusion method and system based on double branch deep learning networks
CN110415199A (en) * 2019-07-26 2019-11-05 河海大学 Multi-spectral remote sensing image fusion method and device based on residual error study

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1588447A (en) * 2004-08-19 2005-03-02 复旦大学 Remote sensitive image fusing method based on residual error
US8761506B1 (en) * 2011-04-22 2014-06-24 DigitalGlobe, Incorporated Pan sharpening digital imagery
CN109146831A (en) * 2018-08-01 2019-01-04 武汉大学 Remote sensing image fusion method and system based on double branch deep learning networks
CN108960345A (en) * 2018-08-08 2018-12-07 广东工业大学 A kind of fusion method of remote sensing images, system and associated component
CN110415199A (en) * 2019-07-26 2019-11-05 河海大学 Multi-spectral remote sensing image fusion method and device based on residual error study

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
马旭东 (Ma Xudong): "Research on quality evaluation of optical remote sensing image compression and fusion", China Doctoral Dissertations Full-text Database *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112328584A (en) * 2020-11-10 2021-02-05 中国科学院空天信息创新研究院 Multi-dimensional space-time spectrum data fusion method and device, electronic equipment and storage medium
CN112328584B (en) * 2020-11-10 2022-07-01 中国科学院空天信息创新研究院 Multi-dimensional space-time spectrum data fusion method and device, electronic equipment and storage medium
CN112508082A (en) * 2020-12-02 2021-03-16 武汉大学 Unsupervised learning remote sensing image space spectrum fusion method and system
CN112529827A (en) * 2020-12-14 2021-03-19 珠海大横琴科技发展有限公司 Training method and device for remote sensing image fusion model
CN112991249A (en) * 2021-03-18 2021-06-18 国网经济技术研究院有限公司 Remote sensing image fusion method based on depth separable CNN model
CN112991249B (en) * 2021-03-18 2023-11-24 国网经济技术研究院有限公司 Remote sensing image fusion method based on depth separable CNN model
WO2022222352A1 (en) * 2021-04-22 2022-10-27 海南大学 Remote-sensing panchromatic and multispectral image distributed fusion method based on residual network
CN113222835B (en) * 2021-04-22 2023-04-14 海南大学 Remote sensing full-color and multi-spectral image distributed fusion method based on residual error network
CN113222835A (en) * 2021-04-22 2021-08-06 海南大学 Remote sensing full-color and multi-spectral image distributed fusion method based on residual error network
WO2023093281A1 (en) * 2021-11-25 2023-06-01 华为技术有限公司 Image processing method, model training method and electronic device
WO2023164929A1 (en) * 2022-03-01 2023-09-07 中国科学院深圳先进技术研究院 Multi-source remote sensing image fusion method and apparatus, device and storage medium
CN117726915A (en) * 2024-02-07 2024-03-19 南方海洋科学与工程广东省实验室(广州) Remote sensing data spatial spectrum fusion method and device, storage medium and terminal
CN117726915B (en) * 2024-02-07 2024-05-28 南方海洋科学与工程广东省实验室(广州) Remote sensing data spatial spectrum fusion method and device, storage medium and terminal

Similar Documents

Publication Publication Date Title
CN111524063A (en) Remote sensing image fusion method and device
US10839211B2 (en) Systems, methods and computer program products for multi-resolution multi-spectral deep learning based change detection for satellite images
US20220245792A1 (en) Systems and methods for image quality detection
Qin et al. Spatiotemporal inferences for use in building detection using series of very-high-resolution space-borne stereo images
CN112562093A (en) Object detection method, electronic medium, and computer storage medium
Rangzan et al. Supervised cross-fusion method: a new triplet approach to fuse thermal, radar, and optical satellite data for land use classification
CN109167998A (en) Detect method and device, the electronic equipment, storage medium of camera status
CN117407710A (en) Hyperspectral remote sensing water quality parameter inversion method and device, electronic equipment and storage medium
CN111583166A (en) Image fusion network model construction and training method and device
CN112149707B (en) Image acquisition control method, device, medium and equipment
CN106651803B (en) Method and device for identifying house type data
CN111582013A (en) Ship retrieval method and device based on gray level co-occurrence matrix characteristics
CN114611635A (en) Object identification method and device, storage medium and electronic device
CN111695572A (en) Ship retrieval method and device based on convolutional layer feature extraction
CN113256493A (en) Thermal infrared remote sensing image reconstruction method and device
CN108229271B (en) Method and device for interpreting remote sensing image and electronic equipment
CN112215304A (en) Gray level image matching method and device for geographic image splicing
CN116612430A (en) Method for estimating water level by utilizing video monitoring system based on deep learning
CN111652034A (en) Ship retrieval method and device based on SIFT algorithm
Hashim et al. Geometric and radiometric evaluation of RazakSAT medium-sized aperture camera data
CN113487580B (en) Unmanned aerial vehicle image overlapping degree calculation method and system based on polygon analysis
CN118247352A (en) Binocular stereo matching method, device, equipment and medium based on deep learning
CN114862682A (en) Image processing method, image processing device, electronic equipment and storage medium
WO2020189241A1 (en) Information processing device, information processing method, and program
CN111950527A (en) Target detection method and device based on YOLO V2 neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200811)