CN109509160A - Hierarchical remote sensing image fusion method using layer-by-layer iterative super-resolution


Info

Publication number: CN109509160A
Application number: CN201811436115.XA
Authority: CN (China)
Prior art keywords: layer, image, resolution, low, dictionary
Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Other languages: Chinese (zh)
Inventors: 吴宏林, 赵淑珍
Current Assignee: Changsha University of Science and Technology (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Original Assignee: Changsha University of Science and Technology
Priority date: 2018-11-28 (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2018-11-28
Publication date: 2019-03-22
Application filed by: Changsha University of Science and Technology

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4053 Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T 5/70 Denoising; Smoothing
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G06T 2207/10036 Multispectral image; Hyperspectral image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the technical field of image processing, and in particular relates to a hierarchical remote sensing image fusion method using layer-by-layer iterative super-resolution. The method comprises the following steps: A1, performing super-resolution processing on the low-resolution multispectral image with a layer-by-layer iterative deep neural network to obtain a reconstructed multispectral image; and A2, hierarchically fusing the panchromatic image with the luminance component of the reconstructed multispectral image to obtain a high-resolution multispectral image. The method fully preserves the spectral information while enhancing the spatial detail information of the image to the maximum extent.

Description

Hierarchical remote sensing image fusion method using layer-by-layer iterative super-resolution
Technical Field
The invention belongs to the technical field of image processing, and in particular relates to a hierarchical remote sensing image fusion method using layer-by-layer iterative super-resolution.
Background
In the field of remote sensing, the images acquired by most Earth-observation satellites cannot offer both high spatial resolution and high spectral resolution because of sensor design limitations, so remote sensing image fusion technology has been valued by researchers and has developed rapidly.
In remote sensing image fusion, the most common task is fusion between a panchromatic image and a multispectral image, which essentially injects the detail information of the panchromatic image into the multispectral image while preserving the spectral information of the multispectral image. A panchromatic image has rich detail features and high spatial resolution but little spectral information, while a multispectral image has low spatial clarity but rich spectral information. By making full use of this complementary information, fusing a panchromatic image with a multispectral image can overcome the limitation that a single sensor cannot provide high spectral and spatial resolution at the same time, improve image quality, produce a high-resolution remote sensing image, and provide a more reliable basis for subsequent processing such as target detection and target recognition. Many algorithms with good performance have been proposed for panchromatic-multispectral fusion, and they can be roughly divided into three categories: methods based on component substitution, methods based on multi-resolution analysis, and methods based on sparse representation.
Component substitution methods transform the multispectral image into another space, assume that one of the transformed components is equivalent to the panchromatic image, and directly replace that component. The best-known method of this type is the IHS transform, which maps RGB image pixels into an IHS (Intensity-Hue-Saturation) color space that mimics the human visual perception system: the I component represents the energy intensity of the image and carries the main spatial-resolution information, while the H and S components represent the hue and saturation of the source image and together carry the spectral information. The IHS method separates the spatial and spectral information contained in the multispectral image, replaces the I component with the panchromatic image, and obtains the fused image by the inverse IHS transform. It is fast and easy to implement and can improve the spatial resolution of the image to some extent, but it introduces serious spectral distortion. In the IHS method the I component of the transformed multispectral image is replaced directly by the panchromatic image, which requires a high correlation between the two; in practice, however, the I component and the panchromatic image differ greatly in imaging mechanism and wavelength range, their correlation is poor, and serious spectral distortion results. Moreover, because the three components after the IHS transform are not completely independent, some spectral information of the multispectral image remains in the I component, and direct replacement discards it, which can cause spectral color distortion.
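For illustration only, a minimal numpy sketch of the component-substitution idea described above (the fast, linear IHS variant) might look as follows; it is not the method of the invention, and the function name and the mean/standard-deviation matching step are assumptions:

```python
import numpy as np

def ihs_pansharpen(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """Fast linear-IHS component substitution: ms is HxWx3, pan is HxW, both in [0, 1]."""
    intensity = ms.mean(axis=2)                       # linear I component
    # Match pan's mean/std to the intensity so the substitution does not
    # shift the overall brightness (a crude stand-in for histogram matching).
    pan_matched = (pan - pan.mean()) / (pan.std() + 1e-12) * intensity.std() + intensity.mean()
    detail = pan_matched - intensity                  # spatial detail to inject
    return np.clip(ms + detail[..., None], 0.0, 1.0)  # equivalent to replacing I
```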
Methods based on multi-resolution analysis generally decompose an image into high-frequency and low-frequency parts and obtain the desired fused image by applying a suitable fusion strategy in the different scale spaces. Typically, the difference between the panchromatic image and its down-sampled version is computed, the panchromatic image undergoes a multi-scale transform, and the transformed high-frequency information is combined with the multispectral image, yielding a multispectral image with higher spatial resolution. In the method proposed by Meng Y et al., for example, the multispectral image and the panchromatic image are decomposed into high-frequency and low-frequency subbands, the high-frequency subbands of the multispectral image are replaced with those of the high-resolution panchromatic image, the high-pass-filtered low-frequency subbands of the panchromatic image are added to the low-frequency subbands of the multispectral image, and the fused image is finally constructed by an inverse multi-wavelet transform. However, methods based on multi-resolution analysis often suffer from spatial distortion and are prone to ringing artifacts. In the method of Meng Y et al., the data are decomposed into different levels of information by the multi-wavelet transform, the fusion result depends on the number of wavelet decomposition levels, and as the decomposition scale increases the detailed parts of the image become noticeably and periodically distorted. In addition, high-pass filtering the panchromatic image may filter out some of its texture information.
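The subband-substitution idea can likewise be sketched with a single-level discrete wavelet transform; this is a toy illustration (PyWavelets assumed available, one band at a time, inputs of equal size), whereas the methods discussed above use multi-wavelets and several decomposition levels:

```python
import numpy as np
import pywt

def dwt_fuse_band(ms_band: np.ndarray, pan: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    """Fuse one multispectral band with the panchromatic image; same-size inputs."""
    ms_low, _ms_high = pywt.dwt2(ms_band, wavelet)   # MS: keep low-frequency subband
    _pan_low, pan_high = pywt.dwt2(pan, wavelet)     # PAN: take high-frequency subbands
    # Spectral information from the MS approximation, spatial detail from PAN.
    return pywt.idwt2((ms_low, pan_high), wavelet)
```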
In recent years, with the wide attention paid to compressed sensing, sparse representation has also been introduced into the field of remote sensing image fusion. Li S et al. applied sparse representation to multispectral-panchromatic fusion: the degradation from the high-resolution multispectral image to the observed multispectral and panchromatic images is modeled as a linear sampling process written in matrix form, the model matrix is regarded as the measurement matrix of compressed sensing, the fusion problem is converted into a sparsity-regularized signal recovery problem, and the recovery problem is solved by a basis pursuit algorithm, effectively recovering the high-resolution multispectral image. However, most existing methods based on sparse representation have complex models and high time complexity, and they do not preserve spectral information well. In the method of Li S et al., the source images are processed block by block rather than as whole images, which produces a noticeable blocking effect in the fused image. In addition, solving the sparse coefficients with a basis pursuit algorithm is complex and time-consuming.
In the fusion of a multispectral image with a panchromatic image, the low-resolution multispectral image must first be brought to the same size as the panchromatic image before the subsequent fusion. Image super-resolution can reconstruct an input low-resolution image, raising its resolution while changing its size, so applying super-resolution to the low-resolution multispectral image is an effective approach. Among the many super-resolution algorithms, most existing remote sensing fusion methods use simple interpolation, which yields only a smooth pseudo-high-resolution image with blurred spatial details and ignores the spatial detail information of the multispectral image. In the method proposed by Zhong J et al., a super-resolution convolutional neural network (SRCNN) reconstructs the low-resolution multispectral image, enhancing its spatial details, and the Gram-Schmidt (GS) transform is then used for fusion. Although this enhances the spatial information of the multispectral image to some extent, the SRCNN network is not stable and is difficult to train; and although the subsequent GS fusion has certain advantages in spectral fidelity, the sharpness of the fused image easily degrades.
Disclosure of Invention
(I) Technical problem to be solved
In view of the above technical problems, the invention provides a hierarchical remote sensing image fusion method using layer-by-layer iterative super-resolution, which fully preserves the spectral information while enhancing the spatial detail information of the image to the maximum extent.
(II) Technical scheme
In order to achieve the purpose, the invention adopts the main technical scheme that:
A hierarchical remote sensing image fusion method using layer-by-layer iterative super-resolution comprises the following steps:
A1, performing super-resolution processing on the low-resolution multispectral image with a layer-by-layer iterative deep neural network to obtain a reconstructed multispectral image;
A2, hierarchically fusing the degraded panchromatic image corresponding to the low-resolution multispectral image with the luminance component of the reconstructed multispectral image to obtain the high-resolution multispectral image.
Further, step A1 comprises the following steps:
A11, training a high-resolution dictionary and a low-resolution dictionary from the high-resolution and low-resolution image block sample sets by the joint dictionary training method;
A12, inputting the low-resolution multispectral image into the first convolutional layer of the deep neural network, and extracting the low-resolution image block features corresponding to the low-resolution multispectral image;
A13, inputting the extracted low-resolution image block features into the LISTA (Learned Iterative Shrinkage-Thresholding Algorithm) network in the deep neural network, and solving the sparse coefficients through layer-by-layer iteration;
A14, multiplying the solved sparse coefficients by the high-resolution dictionary to obtain reconstructed multispectral image blocks;
A15, in the second convolutional layer of the deep neural network, aggregating the reconstructed multispectral image blocks to obtain the reconstructed multispectral image.
Further, step A2 comprises the following steps:
A21, performing the YUV transform on the reconstructed multispectral image to obtain the Y, U and V components, where the Y component is the luminance component;
A22, performing histogram matching between the panchromatic image and the luminance component to obtain a matched panchromatic image, the panchromatic image being a degraded image of the same scene as the low-resolution multispectral image but imaged by a different sensor;
A23, decomposing the luminance component and the matched panchromatic image to obtain the detail layer and base layer of the luminance component and the detail layer and base layer of the matched panchromatic image;
A24, fusing the detail layer of the luminance component with the detail layer of the matched panchromatic image by the convolutional sparse representation method to obtain a fused detail layer, and fusing the base layer of the luminance component with the base layer of the matched panchromatic image to obtain a fused base layer;
A25, combining the fused detail layer and the fused base layer to obtain a new multispectral luminance component;
A26, performing the inverse YUV transform on the new multispectral luminance component together with the U and V components to obtain the final high-resolution multispectral image.
Further, the specific procedure of the joint dictionary training is as follows:
1) selecting a group of high-resolution images from an external set of high-resolution natural images rich in detail information, and performing blurring and down-sampling to obtain the corresponding low-resolution images, which are then up-sampled back to the size of the high-resolution images;
2) randomly extracting low-resolution image features, in blocks, on the first-order and second-order derivative images of the low-resolution images to form the low-resolution image block sample set Y = {y_1, y_2, ..., y_n}; and extracting high-resolution image blocks at the corresponding positions of the high-resolution images to form the high-resolution image block sample set X = {x_1, x_2, ..., x_n};
3) training the high-resolution dictionary and the low-resolution dictionary, as shown in formula (1):

$$\min_{D_x,D_y,Z}\ \frac{1}{N}\|X-D_xZ\|_2^2+\frac{1}{M}\|Y-D_yZ\|_2^2+\lambda\Big(\frac{1}{N}+\frac{1}{M}\Big)\|Z\|_1 \tag{1}$$

where D_x and D_y denote the high-resolution and low-resolution dictionaries respectively, N and M denote the dimensions of the vectorized high- and low-resolution blocks, λ is a regularization parameter, Z is the shared sparse code, the l1 term ||Z||_1 enhances sparsity, and the l2 norm is used to constrain the columns of the dictionaries and remove the scaling ambiguity.
Further, in step A13, given the low-resolution dictionary, the sparse coefficients α are solved by layer-by-layer iterative shrinkage through a j-layer loop, as shown in formula (2):

$$\alpha=\arg\min_{\alpha}\ \|y-D_y\alpha\|_2^2+\lambda\|\alpha\|_1 \tag{2}$$

The layer-by-layer iterative shrinkage process is shown in formula (3):

$$u^{j+1}=h_\theta(Wy+Su^j) \tag{3}$$

where W and S are the weights of the two linear layers in the LISTA network, u^j is the result of the j-th iteration, and h_θ is the activation function: for an input a, [h_θ(a)]_i = sign(a_i)(|a_i| - θ_i)_+, where a_i is the i-th variable, θ is the parameter vector of the activation function, and the subscript i denotes the i-th parameter of the layer.
Further, in step A23, the base layer of the luminance component and the base layer of the matched panchromatic image are obtained by formula (4):

$$I_k^b=\arg\min_{I_k^b}\ \|I_k-I_k^b\|_2^2+\eta\left(\|g_x*I_k^b\|_2^2+\|g_y*I_k^b\|_2^2\right) \tag{4}$$

where g_x and g_y are the horizontal and vertical gradient operators, g_x = [-1, 1], g_y = [-1, 1]^T, η is a regularization parameter with η = 9, k = 1, 2, I_1 denotes the luminance component, I_2 denotes the matched panchromatic image, I_1^b denotes the base layer of the luminance component, and I_2^b denotes the base layer of the matched panchromatic image;

the detail layer of the luminance component and the detail layer of the matched panchromatic image are obtained by formula (5):

$$I_k^d=I_k-I_k^b,\qquad k=1,2 \tag{5}$$

where I_1^d denotes the detail layer of the luminance component and I_2^d denotes the detail layer of the matched panchromatic image.
Further, in step A24, the detail layer of the luminance component and the detail layer of the matched panchromatic image are fused by the convolutional sparse representation method to obtain the fused detail layer, as follows:
1) the sparse coefficient maps corresponding to the detail layers, namely the detail layer of the luminance component and the detail layer of the matched panchromatic image, are obtained by formula (6):

$$\arg\min_{\{C_{k,m}\}}\ \frac{1}{2}\Big\|I_k^d-\sum_{m=1}^{M}d_m*C_{k,m}\Big\|_2^2+\zeta\sum_{m=1}^{M}\|C_{k,m}\|_1 \tag{6}$$

where C_{k,m} are the sparse coefficient maps, d_m are the dictionary filters, ζ is a regularization parameter, and * denotes the convolution operation; C_{k,m}(x, y) denotes the value of the sparse coefficient map C_{k,m} at spatial coordinates (x, y);
2) the activity level map A_k(x, y) of each detail layer is obtained by formula (7):

$$A_k(x,y)=\|C_{k,1:M}(x,y)\|_1 \tag{7}$$

3) the activity level map used for fusion is obtained by formula (8):

$$\bar{A}_k(x,y)=\frac{\sum_{p=-r}^{r}\sum_{q=-r}^{r}A_k(x+p,y+q)}{(2r+1)^2} \tag{8}$$

where r determines the size of the sliding window and p, q ∈ [-r, r] index the spatial positions within the window;
4) the fused sparse coefficient maps are obtained by the "choose-max" method, as shown in formula (9):

$$C_{F,1:M}(x,y)=C_{k^*,1:M}(x,y),\qquad k^*=\arg\max_{k}\bar{A}_k(x,y) \tag{9}$$

where C_{F,1:M}(x, y) are the fused sparse coefficient maps;
5) the fused detail layer is obtained from the dictionary filters and the fused sparse coefficient maps, as shown in formula (10):

$$I_F^d=\sum_{m=1}^{M}d_m*C_{F,m} \tag{10}$$

where I_F^d is the fused detail layer.
Further, the base layer of the luminance component and the base layer of the matched panchromatic image are fused by the "choose-max" method to obtain the fused base layer, as shown in formula (11):

$$I_F^b(x,y)=I_{k^*}^b(x,y),\qquad k^*=\arg\max_{k}\bar{A}_k^b(x,y) \tag{11}$$

where I_F^b is the fused base layer and the base-layer activity level maps are obtained in the same way as in formula (8).
(III) Advantageous effects
The invention has the beneficial effects that:
1. The invention provides a hierarchical remote sensing image fusion method using layer-by-layer iterative super-resolution, which fuses a low-resolution multispectral image with a panchromatic image. First, super-resolution processing of the low-resolution multispectral image by a deep network model enhances the spatial details of the multispectral image; second, hierarchically fusing the luminance component of the spatially enhanced multispectral image with the panchromatic image effectively reduces spectral distortion while further exploiting the spatial detail information of both images. The detail preservation, spectral preservation, and information richness of the fused high-resolution multispectral image are all markedly improved.
2. In the proposed method, the low-resolution multispectral image is super-resolved by a layer-by-layer iterative deep neural network: the sparse coefficients are solved quickly through the layer-by-layer iteration of the LISTA network, the spatially enhanced multispectral image is reconstructed with the help of the sparsity induced by an external image dictionary, and the loss of energy and spatial information incurred when the high-resolution multispectral image degrades into the low-resolution one is fully taken into account. Compared with common remote sensing fusion methods that merely interpolate and magnify the low-resolution multispectral image into a smooth pseudo-high-resolution image, the spatial details are much clearer.
3. The proposed method hierarchically fuses the luminance component of the spatially enhanced multispectral image with the panchromatic image, fully considering the correlation between the multispectral and panchromatic images and effectively preserving the spectral information of the multispectral image; meanwhile, the convolutional sparse representation method used in the detail-layer fusion further exploits the spatial detail information of both images, so that spectral distortion is reduced and spatial information is enhanced.
Drawings
FIG. 1 is a flow chart of the SCN model-based multispectral image super-resolution reconstruction process in the present invention;
FIG. 2 is a flow chart of an implementation of the present invention;
FIGS. 3(a) to 3(h) compare the fusion results of the present invention with those of other methods.
Detailed Description
For the purpose of better explaining the present invention and to facilitate understanding, the present invention will be described in detail by way of specific embodiments with reference to the accompanying drawings.
The invention provides a hierarchical remote sensing image fusion method using layer-by-layer iterative super-resolution, comprising the following two steps:
A1, performing super-resolution processing on the low-resolution multispectral image with a layer-by-layer iterative deep neural network to obtain a reconstructed multispectral image;
A2, hierarchically fusing the panchromatic image with the luminance component of the reconstructed multispectral image to obtain the high-resolution multispectral image; the panchromatic image is a degraded image of the same scene as the low-resolution multispectral image, but imaged by a different sensor.
In step A1, super-resolution processing is performed on the low-resolution multispectral image MS with a layer-by-layer iterative deep neural network model (SCN) to enhance its spatial detail information. The main idea is to exploit the sparsity of the image under a suitable dictionary: the sparse coefficients α are solved quickly, in a layer-by-layer iterative manner, by a learned iterative shrinkage-thresholding algorithm (LISTA); the sparse coefficients are combined with the high-resolution dictionary D_x to obtain the reconstructed multispectral image blocks x; and the blocks are assembled into the spatially enhanced reconstructed multispectral image I_E.
The specific process is shown in fig. 1, and comprises the following steps:
A11, training the high-resolution dictionary D_x and the low-resolution dictionary D_y from the high-resolution and low-resolution image block sample sets by the joint dictionary training method.
The specific training process of the joint dictionary training is as follows:
1) A group of high-resolution images is selected from an external set of high-resolution natural images rich in detail information, and blurring and down-sampling are applied to obtain the corresponding low-resolution images. Meanwhile, to avoid the inconvenience caused by the different pixel counts of the high- and low-resolution samples during dictionary training, the low-resolution images are up-sampled back to the same size as the high-resolution images.
2) Low-resolution image features are randomly extracted, in blocks, on the first-order and second-order derivative images of the low-resolution images to form the low-resolution image block sample set Y = {y_1, y_2, ..., y_n}; high-resolution image blocks are extracted at the corresponding positions of the high-resolution images to form the high-resolution image block sample set X = {x_1, x_2, ..., x_n}.
Preferably, the 4 filters used to extract the features are f_1 = [-1, 0, 1], f_2 = f_1^T, f_3 = [1, 0, -2, 0, 1] and f_4 = f_3^T, where T denotes transposition.
3) The high-resolution dictionary D_x and the low-resolution dictionary D_y are trained. To ensure that each pair of high- and low-resolution image blocks has a consistent sparse representation under the learned dictionaries D_x and D_y, the dictionary pair must be trained jointly so that the sparse codes are shared. The dictionaries D_x and D_y are obtained by formula (1):

$$\min_{D_x,D_y,Z}\ \frac{1}{N}\|X-D_xZ\|_2^2+\frac{1}{M}\|Y-D_yZ\|_2^2+\lambda\Big(\frac{1}{N}+\frac{1}{M}\Big)\|Z\|_1 \tag{1}$$

where N and M denote the dimensions of the vectorized high- and low-resolution blocks, λ is a regularization parameter, the l1 term ||Z||_1 enhances sparsity, and the l2 norm constrains the columns of the dictionaries and removes the scaling ambiguity.
In this step, 100000 pairs of high-/low-resolution image blocks are used for training, the image block size is 5 × 5, the dictionary size is 128, and λ = 0.15. A code sketch of this training procedure follows.
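The sketch below walks through steps 1) to 3) above under stated assumptions: Gaussian blurring for the degradation, bicubic resampling, the four derivative filters for the low-resolution features, and the patch-stacking trick of Yang et al. so that a single dictionary-learning run (here scikit-learn's MiniBatchDictionaryLearning) yields the coupled pair (D_x, D_y). All names and any parameters not stated in the text are illustrative:

```python
import numpy as np
from scipy import ndimage
from sklearn.decomposition import MiniBatchDictionaryLearning

def lr_feature_image(img: np.ndarray) -> np.ndarray:
    """First- and second-order derivative features (filters f1..f4)."""
    f1, f3 = [-1, 0, 1], [1, 0, -2, 0, 1]
    return np.stack([ndimage.correlate1d(img, f1, axis=1),   # f1, horizontal
                     ndimage.correlate1d(img, f1, axis=0),   # f2 = f1^T, vertical
                     ndimage.correlate1d(img, f3, axis=1),   # f3, horizontal
                     ndimage.correlate1d(img, f3, axis=0)],  # f4 = f3^T, vertical
                    axis=-1)

def train_joint_dictionary(hr_images, scale=2, patch=5, n_atoms=128, lam=0.15,
                           per_image=1000, sigma=1.0):
    X, Y = [], []
    rng = np.random.default_rng(0)
    for hr in hr_images:                                           # step 1)
        lr = ndimage.gaussian_filter(hr, sigma)[::scale, ::scale]  # blur + downsample
        lr_up = ndimage.zoom(lr, scale, order=3)                   # bicubic up-sampling
        feat = lr_feature_image(lr_up)
        for _ in range(per_image):                                 # step 2)
            r = rng.integers(0, hr.shape[0] - patch)
            c = rng.integers(0, hr.shape[1] - patch)
            hp = hr[r:r + patch, c:c + patch]
            X.append((hp - hp.mean()).ravel())                     # HR block, mean removed
            Y.append(feat[r:r + patch, c:c + patch].ravel())       # LR feature block
    X, Y = np.asarray(X), np.asarray(Y)
    N, M = X.shape[1], Y.shape[1]
    # Step 3): stack the scaled HR/LR vectors so both share one sparse code Z,
    # which realizes the joint objective of formula (1) as a single problem.
    learner = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=lam)
    learner.fit(np.hstack([X / np.sqrt(N), Y / np.sqrt(M)]))
    D = learner.components_.T                                      # (N + M) x n_atoms
    return D[:N] * np.sqrt(N), D[N:] * np.sqrt(M)                  # Dx, Dy
```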
A12, the low-resolution multispectral image MS is input into the convolutional layer H, and the low-resolution image block features y corresponding to MS are extracted.
The convolutional layer H has m_y filters of size s_y × s_y with stride t_H; each extracted feature y corresponds to an image patch of size s_y × s_y and is m_y-dimensional.
In the present embodiment, m_y = 100, s_y = 9, t_H = 1.
A13, the extracted low-resolution image block features y are input into the LISTA network, and the sparse coefficients α are solved through layer-by-layer iteration.
The LISTA network is part of the SCN model, as shown in Fig. 1, and is a feed-forward neural network with a j-layer loop. Given the low-resolution dictionary D_y, layer-by-layer iterative shrinkage through the j-layer loop solves the sparse coefficients α quickly, as shown in formula (2):

$$\alpha=\arg\min_{\alpha}\ \|y-D_y\alpha\|_2^2+\lambda\|\alpha\|_1 \tag{2}$$

Further, the layer-by-layer iterative shrinkage process is shown in formula (3):

$$u^{j+1}=h_\theta(Wy+Su^j) \tag{3}$$

where W and S are the weights of the two linear layers in the LISTA network, u^j is the result of the j-th iteration, and h_θ is the element-wise shrinkage (activation) function: for an input a, [h_θ(a)]_i = sign(a_i)(|a_i| - θ_i)_+, where a_i is the i-th variable and θ_i is the i-th learnable parameter of the layer; n denotes the size of the dictionary and takes the value 128.
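A numpy sketch of the recurrence in formulas (2) and (3) is given below. In the SCN the weights W, S and the thresholds θ are learned end to end; here, as an assumption, they are initialized from the dictionary D_y as in the original LISTA derivation:

```python
import numpy as np

def soft_threshold(a: np.ndarray, theta: np.ndarray) -> np.ndarray:
    """[h_theta(a)]_i = sign(a_i) * max(|a_i| - theta_i, 0)."""
    return np.sign(a) * np.maximum(np.abs(a) - theta, 0.0)

def lista(y: np.ndarray, Dy: np.ndarray, lam: float = 0.15, n_layers: int = 5) -> np.ndarray:
    L = np.linalg.norm(Dy, 2) ** 2              # Lipschitz constant of Dy^T Dy
    W = Dy.T / L                                # first linear layer
    S = np.eye(Dy.shape[1]) - (Dy.T @ Dy) / L   # recurrent linear layer
    theta = np.full(Dy.shape[1], lam / L)       # per-atom shrinkage thresholds
    u = soft_threshold(W @ y, theta)
    for _ in range(n_layers - 1):               # u_{j+1} = h_theta(W y + S u_j)
        u = soft_threshold(W @ y + S @ u, theta)
    return u                                    # sparse coefficients alpha
```

The reconstructed block of step A14 is then simply x = Dx @ alpha.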
A14, the sparse coefficients α are multiplied by the high-resolution dictionary D_x to obtain the reconstructed multispectral image block x of size s_x × s_x (that is, m_x = s_x^2 dimensions), where s_x = 5 and m_x = 25.
A15, in the convolutional layer G, the reconstructed multispectral image blocks x are aggregated to obtain the reconstructed multispectral image I_E.
The convolutional layer G has m_x filters of size s_g × s_g with stride t_G; the filters assign appropriate weights to the overlapping reconstructed blocks and take the weighted average as the final output, the spatially enhanced reconstructed multispectral image I_E.
In the present embodiment, s_g = 9 and t_G = 1.
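The aggregation of overlapping blocks can be sketched as follows, under the simplifying assumption of uniform averaging weights (in the SCN, layer G learns these weights); the function and argument names are illustrative:

```python
import numpy as np

def aggregate_patches(patches: np.ndarray, positions, shape) -> np.ndarray:
    """patches: (num, s, s); positions: iterable of top-left (row, col) offsets."""
    acc = np.zeros(shape)
    weight = np.zeros(shape)
    s = patches.shape[1]
    for patch, (r, c) in zip(patches, positions):
        acc[r:r + s, c:c + s] += patch       # accumulate overlapping blocks
        weight[r:r + s, c:c + s] += 1.0      # count contributions per pixel
    return acc / np.maximum(weight, 1.0)     # uniform weighted average
```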
In step A2, the luminance component of the reconstructed multispectral image I_E is hierarchically fused with the panchromatic image PAN to obtain the final high-resolution multispectral image I_M. The high-resolution multispectral image I_M obtained by this fusion effectively preserves the spectral information of the reconstructed multispectral image I_E while further exploiting the spatial detail information of the panchromatic image PAN.
The specific process is shown in fig. 2, and comprises the following steps:
A21, to facilitate the subsequent fusion, the YUV transform is applied to the reconstructed multispectral image I_E to obtain the Y, U and V components, where the Y component is the luminance component, denoted I_1.
A22, the panchromatic image PAN is input, and histogram matching is performed between PAN and the luminance component I_1 so that the histograms of the two images are consistent, which enhances their correlation and reduces spectral distortion, yielding the matched panchromatic image I_2.
The panchromatic image PAN is a degraded image of the same scene as the low-resolution multispectral image MS, but imaged by a different sensor.
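Steps A21 and A22 can be sketched with standard building blocks, assuming scikit-image is available and, for illustration, treating the reconstructed multispectral image as a 3-band RGB image:

```python
import numpy as np
from skimage.color import rgb2yuv, yuv2rgb
from skimage.exposure import match_histograms

def prepare_luminance_and_pan(ms_rec: np.ndarray, pan: np.ndarray):
    yuv = rgb2yuv(ms_rec)               # A21: YUV transform of I_E
    I1 = yuv[..., 0]                    # luminance component Y
    I2 = match_histograms(pan, I1)      # A22: matched panchromatic image
    return I1, I2, yuv[..., 1], yuv[..., 2]

def recombine(I_F: np.ndarray, U: np.ndarray, V: np.ndarray) -> np.ndarray:
    # A26: inverse YUV transform with the fused luminance component
    return yuv2rgb(np.stack([I_F, U, V], axis=-1))
```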
A23, the luminance component I_1 and the matched panchromatic image I_2 are decomposed to obtain the detail layer I_1^d and base layer I_1^b of the luminance component and the detail layer I_2^d and base layer I_2^b of the matched panchromatic image.
The base layer I_k^b is obtained by formula (4):

$$I_k^b=\arg\min_{I_k^b}\ \|I_k-I_k^b\|_2^2+\eta\left(\|g_x*I_k^b\|_2^2+\|g_y*I_k^b\|_2^2\right) \tag{4}$$

where g_x and g_y are the horizontal and vertical gradient operators, g_x = [-1, 1], g_y = [-1, 1]^T, η is a regularization parameter with η = 9, and k = 1, 2.
The detail layer I_k^d is obtained by formula (5):

$$I_k^d=I_k-I_k^b \tag{5}$$
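Because formula (4) is a Tikhonov-regularized least-squares problem with convolution operators, it has a closed-form solution in the Fourier domain (assuming periodic boundary conditions); a sketch:

```python
import numpy as np

def base_detail(I: np.ndarray, eta: float = 9.0):
    """Return (base, detail) layers of I per formulas (4) and (5)."""
    H, W = I.shape
    # Frequency responses of gx = [-1, 1] and gy = [-1, 1]^T, zero-padded to HxW.
    Gx = np.fft.fft2(np.array([[-1.0, 1.0]]), s=(H, W))
    Gy = np.fft.fft2(np.array([[-1.0], [1.0]]), s=(H, W))
    denom = 1.0 + eta * (np.abs(Gx) ** 2 + np.abs(Gy) ** 2)
    base = np.real(np.fft.ifft2(np.fft.fft2(I) / denom))  # formula (4)
    return base, I - base                                  # formula (5)
```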
A24, the detail layer I_1^d of the luminance component and the detail layer I_2^d of the matched panchromatic image are fused by the convolutional sparse representation method to obtain the fused detail layer I_F^d.
The main idea of convolutional sparse representation is to regard an image as a sum of convolutions between sparse coefficient maps and dictionary filters, and to operate on the whole image as a single entity, which exploits the detail information of the image more completely. The detail layers are fused as follows:
1) The sparse coefficient maps C_{k,m}, m ∈ {1, ..., M}, corresponding to the detail layers I_k^d are obtained by formula (6):

$$\arg\min_{\{C_{k,m}\}}\ \frac{1}{2}\Big\|I_k^d-\sum_{m=1}^{M}d_m*C_{k,m}\Big\|_2^2+\zeta\sum_{m=1}^{M}\|C_{k,m}\|_1 \tag{6}$$

where d_m are the dictionary filters, ζ is a regularization parameter, and * denotes the convolution operation; C_{k,m}(x, y) denotes the value of the sparse coefficient map C_{k,m} at spatial coordinates (x, y).
2) The activity level map A_k(x, y) of each detail layer I_k^d is obtained by formula (7):

$$A_k(x,y)=\|C_{k,1:M}(x,y)\|_1 \tag{7}$$
3) Further, to reduce the interference of image mis-registration with the fusion result, a window-based averaging strategy is applied to A_k(x, y), and the activity level map used for fusion is obtained by formula (8):

$$\bar{A}_k(x,y)=\frac{\sum_{p=-r}^{r}\sum_{q=-r}^{r}A_k(x+p,y+q)}{(2r+1)^2} \tag{8}$$

where r determines the size of the sliding window and p, q ∈ [-r, r] index the spatial positions within the window.
4) The fused sparse coefficient maps C_{F,1:M}(x, y) are obtained by the "choose-max" method, as shown in formula (9):

$$C_{F,1:M}(x,y)=C_{k^*,1:M}(x,y),\qquad k^*=\arg\max_{k}\bar{A}_k(x,y) \tag{9}$$

The "choose-max" method is a common way of enriching the high-frequency components in the image fusion field and preserves the characteristics of the detail layers I_k^d well.
5) The fused detail layer I_F^d is obtained from the dictionary filters d_m and the fused sparse coefficient maps C_{F,m}, as shown in formula (10):

$$I_F^d=\sum_{m=1}^{M}d_m*C_{F,m} \tag{10}$$

Preferably, the dictionary filters d_m have a spatial size of 8 × 8 and number 128, ζ = 0.01, and r = 9.
Since some spatial detail information still remains in the base layers I_k^b, the base layer I_1^b of the luminance component and the base layer I_2^b of the matched panchromatic image are fused with the same "choose-max" method used for the detail layers, giving the fused base layer I_F^b, as shown in formula (11):

$$I_F^b(x,y)=I_{k^*}^b(x,y),\qquad k^*=\arg\max_{k}\bar{A}_k^b(x,y) \tag{11}$$

where the activity level maps of the base layers are obtained in the same way as in formula (8).
A25, detail layer I after fusionF dAnd a fused base layer IF bFusing to obtain new multispectral image brightness component IFNamely:
a26, adding the new brightness component IFYUV inverse transformation is carried out on the U component and the V component to obtain a final high-resolution multispectral image IM
The experiments use remote sensing images captured by the QuickBird satellite and are implemented in Matlab R2015b on a PC with a 4.0 GHz CPU, 16 GB of memory, and the Windows 7 operating system. The spatial resolution of the multispectral images in the QuickBird dataset is 2.88 m and that of the panchromatic images is 0.72 m. An original multispectral image of size 256 × 256 is selected as the reference image, and the multispectral and panchromatic images used in the experiments are obtained by 4× down-sampling and degradation of the original images in the dataset. The input low-resolution multispectral image is of size 128 × 128, and the input panchromatic image is of size 256 × 256. The comparison methods are: the IHS transform, the discrete wavelet transform (DWT), the high-pass-filter-based method (HPF), and the method combining the wavelet transform with sparse representation (DWTSR).
Fig. 3 is a graph comparing the fusion effect of the present invention and other methods, where fig. 3(a) is an input low-resolution multispectral image, fig. 3(b) is an input panchromatic image, fig. 3(c) is a reference image, fig. 3(d) is the fusion result of IHS transform method, fig. 3(e) is the fusion result of Discrete Wavelet Transform (DWT), fig. 3(f) is the fusion result of high-pass filter-based method (HPF), fig. 3(g) is the fusion result of wavelet transform and sparse representation combined method (DWTSR), and fig. 3(h) is the fusion result of the method provided by the present invention.
As can be seen from the figures, the IHS transform method shows severe color differences and produces serious spectral distortion; the image produced by the discrete wavelet transform method lacks sharpness; the high-pass-filter-based method preserves the spectral information well but loses much spatial information; and the method combining wavelet transform and sparse representation shows blocking artifacts in some regions. The method provided by the invention, in contrast, produces a clear image, reconstructs the spatial information well, and also preserves the spectral information well.
Table 1 compares the performance indices of the present invention with those of the other methods. Four evaluation indices are used: the root mean square error (RMSE), the correlation coefficient (CC), the spectral angle (SAM), and the peak signal-to-noise ratio (PSNR). The root mean square error reflects the degree of spectral difference between the fused image and the reference image; the smaller its value, the smaller the spectral difference. The correlation coefficient reflects the degree of correlation between the fused image and the panchromatic image; the larger its value, the more spatial detail information the fused image contains and the better its overall quality. The spectral angle is the angle between the fused-image and reference-image vectors; the smaller the angle, the smaller the spectral distortion. The peak signal-to-noise ratio reflects the overall distortion of the image; the higher its value, the better the quality of the fused image. As seen from Table 1, all indices of the proposed method are superior to those of the other methods, and its fusion effect is the best.
Table 1. Comparison of the performance indices of the present invention and the other methods
In summary, the hierarchical remote sensing image fusion method using layer-by-layer iterative super-resolution provided by the invention first applies a layer-by-layer iterative deep neural network (SCN) to super-resolve the low-resolution multispectral image, enhancing its spatial detail information and recovering more spatial detail, and doing so faster, than traditional methods; it then hierarchically fuses the panchromatic image with the luminance component of the reconstructed multispectral image, further exploiting the spatial detail information while preserving the spectral information.
The technical principles of the present invention have been described above in connection with specific embodiments, which are intended to explain the principles of the present invention and should not be construed as limiting the scope of the present invention in any way. Based on the explanations herein, those skilled in the art will be able to conceive of other embodiments of the present invention without inventive efforts, which shall fall within the scope of the present invention.

Claims (7)

1. A hierarchical remote sensing image fusion method using layer-by-layer iterative super-resolution, characterized by comprising the following steps:
A1, performing super-resolution processing on the low-resolution multispectral image with a layer-by-layer iterative deep neural network to obtain a reconstructed multispectral image;
A2, hierarchically fusing the degraded panchromatic image corresponding to the low-resolution multispectral image with the luminance component of the reconstructed multispectral image to obtain the high-resolution multispectral image.
2. The hierarchical remote sensing image fusion method according to claim 1, wherein step A1 comprises the following steps:
A11, training a high-resolution dictionary and a low-resolution dictionary from the high-resolution and low-resolution image block sample sets by the joint dictionary training method;
A12, inputting the low-resolution multispectral image into the first convolutional layer of the deep neural network, and extracting the low-resolution image block features corresponding to the low-resolution multispectral image;
A13, inputting the extracted low-resolution image block features into the LISTA (Learned Iterative Shrinkage-Thresholding Algorithm) network in the deep neural network, and solving the sparse coefficients through layer-by-layer iteration;
A14, multiplying the solved sparse coefficients by the high-resolution dictionary to obtain reconstructed multispectral image blocks;
A15, in the second convolutional layer of the deep neural network, aggregating the reconstructed multispectral image blocks to obtain the reconstructed multispectral image.
3. The hierarchical remote sensing image fusion method according to claim 2, wherein step A2 comprises the following steps:
A21, performing the YUV transform on the reconstructed multispectral image to obtain the Y, U and V components, where the Y component is the luminance component;
A22, performing histogram matching between the panchromatic image and the luminance component to obtain a matched panchromatic image, the panchromatic image being a degraded image of the same scene as the low-resolution multispectral image but imaged by a different sensor;
A23, decomposing the luminance component and the matched panchromatic image to obtain the detail layer and base layer of the luminance component and the detail layer and base layer of the matched panchromatic image;
A24, fusing the detail layer of the luminance component with the detail layer of the matched panchromatic image by the convolutional sparse representation method to obtain a fused detail layer, and fusing the base layer of the luminance component with the base layer of the matched panchromatic image to obtain a fused base layer;
A25, combining the fused detail layer and the fused base layer to obtain a new multispectral luminance component;
A26, performing the inverse YUV transform on the new multispectral luminance component together with the U and V components to obtain the final high-resolution multispectral image.
4. The hierarchical remote sensing image fusion method according to claim 2, wherein the specific procedure of the joint dictionary training is as follows:
1) selecting a group of high-resolution images from an external set of high-resolution natural images rich in detail information, and performing blurring and down-sampling to obtain the corresponding low-resolution images, which are then up-sampled back to the size of the high-resolution images;
2) randomly extracting low-resolution image features, in blocks, on the first-order and second-order derivative images of the low-resolution images to form the low-resolution image block sample set Y = {y_1, y_2, ..., y_n}; and extracting high-resolution image blocks at the corresponding positions of the high-resolution images to form the high-resolution image block sample set X = {x_1, x_2, ..., x_n};
3) training the high-resolution dictionary and the low-resolution dictionary, as shown in formula (1):

$$\min_{D_x,D_y,Z}\ \frac{1}{N}\|X-D_xZ\|_2^2+\frac{1}{M}\|Y-D_yZ\|_2^2+\lambda\Big(\frac{1}{N}+\frac{1}{M}\Big)\|Z\|_1 \tag{1}$$

where D_x and D_y denote the high-resolution and low-resolution dictionaries respectively, N and M denote the dimensions of the vectorized high- and low-resolution blocks, λ is a regularization parameter, Z is the shared sparse code, the l1 term ||Z||_1 enhances sparsity, and the l2 norm constrains the columns of the dictionaries and removes the scaling ambiguity.
5. The hierarchical remote sensing image fusion method according to claim 4, wherein in step A13, given the low-resolution dictionary, the sparse coefficients α are solved by layer-by-layer iterative shrinkage through a j-layer loop, as shown in formula (2):

$$\alpha=\arg\min_{\alpha}\ \|y-D_y\alpha\|_2^2+\lambda\|\alpha\|_1 \tag{2}$$

The layer-by-layer iterative shrinkage process is shown in formula (3):

$$u^{j+1}=h_\theta(Wy+Su^j) \tag{3}$$

where W and S are the weights of the two linear layers in the LISTA network, u^j is the result of the j-th iteration, and h_θ is the activation function: for an input a, [h_θ(a)]_i = sign(a_i)(|a_i| - θ_i)_+, where a_i is the i-th variable, θ is the parameter vector of the activation function, and the subscript i denotes the i-th parameter of the layer.
6. The hierarchical remote sensing image fusion method according to claim 3, wherein in step A23 the base layer of the luminance component and the base layer of the matched panchromatic image are obtained by formula (4):

$$I_k^b=\arg\min_{I_k^b}\ \|I_k-I_k^b\|_2^2+\eta\left(\|g_x*I_k^b\|_2^2+\|g_y*I_k^b\|_2^2\right) \tag{4}$$

where g_x and g_y are the horizontal and vertical gradient operators, g_x = [-1, 1], g_y = [-1, 1]^T, η is a regularization parameter with η = 9, k = 1, 2, I_1 denotes the luminance component, I_2 denotes the matched panchromatic image, I_1^b denotes the base layer of the luminance component, and I_2^b denotes the base layer of the matched panchromatic image;

the detail layer of the luminance component and the detail layer of the matched panchromatic image are obtained by formula (5):

$$I_k^d=I_k-I_k^b,\qquad k=1,2 \tag{5}$$

where I_1^d denotes the detail layer of the luminance component and I_2^d denotes the detail layer of the matched panchromatic image.
7. The hierarchical remote sensing image fusion method according to claim 3, wherein in step A24 the detail layer of the luminance component and the detail layer of the matched panchromatic image are fused by the convolutional sparse representation method to obtain the fused detail layer, as follows:
1) obtaining the sparse coefficient maps corresponding to the detail layers, namely the detail layer of the luminance component and the detail layer of the matched panchromatic image, by formula (6):

$$\arg\min_{\{C_{k,m}\}}\ \frac{1}{2}\Big\|I_k^d-\sum_{m=1}^{M}d_m*C_{k,m}\Big\|_2^2+\zeta\sum_{m=1}^{M}\|C_{k,m}\|_1 \tag{6}$$

where I_1 denotes the luminance component, I_2 denotes the matched panchromatic image, C_{k,m} are the sparse coefficient maps, d_m are the dictionary filters, ζ is a regularization parameter, and * denotes the convolution operation; C_{k,m}(x, y) denotes the value of the sparse coefficient map C_{k,m} at spatial coordinates (x, y);
2) obtaining the activity level map A_k(x, y) of each detail layer by formula (7):

$$A_k(x,y)=\|C_{k,1:M}(x,y)\|_1 \tag{7}$$

3) obtaining the activity level map used for fusion by formula (8):

$$\bar{A}_k(x,y)=\frac{\sum_{p=-r}^{r}\sum_{q=-r}^{r}A_k(x+p,y+q)}{(2r+1)^2} \tag{8}$$

where r determines the size of the sliding window and p, q ∈ [-r, r] index the spatial positions within the window;
4) obtaining the fused sparse coefficient maps by the "choose-max" method, as shown in formula (9):

$$C_{F,1:M}(x,y)=C_{k^*,1:M}(x,y),\qquad k^*=\arg\max_{k}\bar{A}_k(x,y) \tag{9}$$

where C_{F,1:M}(x, y) are the fused sparse coefficient maps;
5) obtaining the fused detail layer from the dictionary filters and the fused sparse coefficient maps, as shown in formula (10):

$$I_F^d=\sum_{m=1}^{M}d_m*C_{F,m} \tag{10}$$

where I_F^d is the fused detail layer;
and fusing the base layer of the luminance component with the base layer of the matched panchromatic image by the "choose-max" method to obtain the fused base layer, as shown in formula (11):

$$I_F^b(x,y)=I_{k^*}^b(x,y),\qquad k^*=\arg\max_{k}\bar{A}_k^b(x,y) \tag{11}$$

where I_F^b is the fused base layer and the base-layer activity level maps are obtained in the same way as in formula (8).
CN201811436115.XA 2018-11-28 2018-11-28 Hierarchical remote sensing image fusion method using layer-by-layer iterative super-resolution Pending CN109509160A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811436115.XA CN109509160A (en) 2018-11-28 2018-11-28 Hierarchical remote sensing image fusion method using layer-by-layer iterative super-resolution

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811436115.XA CN109509160A (en) 2018-11-28 2018-11-28 Hierarchical remote sensing image fusion method using layer-by-layer iterative super-resolution

Publications (1)

Publication Number Publication Date
CN109509160A true CN109509160A (en) 2019-03-22

Family

ID=65751068

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811436115.XA Pending CN109509160A (en) 2018-11-28 Hierarchical remote sensing image fusion method using layer-by-layer iterative super-resolution

Country Status (1)

Country Link
CN (1) CN109509160A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102542549A (en) * 2012-01-04 2012-07-04 西安电子科技大学 Multi-spectral and panchromatic image super-resolution fusion method based on compressive sensing
CN103208102A (en) * 2013-03-29 2013-07-17 上海交通大学 Remote sensing image fusion method based on sparse representation
WO2014183259A1 (en) * 2013-05-14 2014-11-20 中国科学院自动化研究所 Full-color and multi-spectral remote sensing image fusion method
CN105761234A (en) * 2016-01-28 2016-07-13 华南农业大学 Structure sparse representation-based remote sensing image fusion method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JIANCHAO YANG ET AL.: "Image Super-Resolution Via Sparse Representation", IEEE Transactions on Image Processing *
YU LIU ET AL.: "Image Fusion With Convolutional Sparse Representation", IEEE Signal Processing Letters *
ZHAOWEN WANG ET AL.: "Deep Networks for Image Super-Resolution with Sparse Prior", 2015 IEEE International Conference on Computer Vision (ICCV) *
杨超 (YANG CHAO): "Research on image fusion algorithms based on compressed sensing theory", China Master's Theses Full-text Database (Information Science and Technology) *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110211046B (en) * 2019-06-03 2023-07-14 重庆邮电大学 Remote sensing image fusion method, system and terminal based on generation countermeasure network
CN110211046A (en) * 2019-06-03 2019-09-06 重庆邮电大学 A kind of remote sensing image fusion method, system and terminal based on generation confrontation network
CN110443775A (en) * 2019-06-20 2019-11-12 吉林大学 Wavelet transform domain multi-focus image fusing method based on convolutional neural networks
CN110443775B (en) * 2019-06-20 2022-12-16 吉林大学 Discrete wavelet transform domain multi-focus image fusion method based on convolutional neural network
CN110533600A (en) * 2019-07-10 2019-12-03 宁波大学 A kind of same/heterogeneous remote sensing image high-fidelity broad sense sky-spectrum fusion method
CN110533600B (en) * 2019-07-10 2022-07-19 宁波大学 Same/heterogeneous remote sensing image high-fidelity generalized space-spectrum fusion method
CN110490799B (en) * 2019-07-25 2021-09-24 西安理工大学 Hyperspectral remote sensing image super-resolution method based on self-fusion convolutional neural network
CN110490799A (en) * 2019-07-25 2019-11-22 西安理工大学 Based on the target in hyperspectral remotely sensed image super-resolution method from fusion convolutional neural networks
CN110751036A (en) * 2019-09-17 2020-02-04 宁波大学 High spectrum/multi-spectrum image fast fusion method based on sub-band and blocking strategy
CN110751036B (en) * 2019-09-17 2020-06-30 宁波大学 High spectrum/multi-spectrum image fast fusion method based on sub-band and blocking strategy
CN110852950A (en) * 2019-11-08 2020-02-28 中国科学院微小卫星创新研究院 Hyperspectral image super-resolution reconstruction method based on sparse representation and image fusion
CN110852950B (en) * 2019-11-08 2023-04-07 中国科学院微小卫星创新研究院 Hyperspectral image super-resolution reconstruction method based on sparse representation and image fusion
CN111382867B (en) * 2020-02-20 2024-04-16 华为技术有限公司 Neural network compression method, data processing method and related devices
CN111382867A (en) * 2020-02-20 2020-07-07 华为技术有限公司 Neural network compression method, data processing method and related device
CN111742329B (en) * 2020-05-15 2023-09-12 安徽中科智能感知科技股份有限公司 Mining typical feature dynamic monitoring method and platform based on multi-source remote sensing data fusion and deep neural network
CN111742329A (en) * 2020-05-15 2020-10-02 安徽中科智能感知产业技术研究院有限责任公司 Mining typical ground object dynamic monitoring method and platform based on multi-source remote sensing data fusion and deep neural network
US20210383505A1 (en) * 2020-09-03 2021-12-09 Nvidia Corporation Image enhancement using one or more neural networks
CN112528914A (en) * 2020-12-19 2021-03-19 东南数字经济发展研究院 Satellite image full-color enhancement method for gradually integrating detail information
CN112907449B (en) * 2021-02-22 2023-06-09 西南大学 Image super-resolution reconstruction method based on depth convolution sparse coding
CN112907449A (en) * 2021-02-22 2021-06-04 西南大学 Image super-resolution reconstruction method based on deep convolution sparse coding
CN113920431A (en) * 2021-10-12 2022-01-11 长光卫星技术有限公司 Fusion method suitable for high-resolution remote sensing image
CN113920431B (en) * 2021-10-12 2024-08-09 长光卫星技术股份有限公司 Fusion method suitable for high-resolution remote sensing image
CN114792287A (en) * 2022-03-25 2022-07-26 南京航空航天大学 Medical ultrasonic image super-resolution reconstruction method based on multi-image fusion
CN114792287B (en) * 2022-03-25 2024-10-15 南京航空航天大学 Medical ultrasonic image super-resolution reconstruction method based on multi-image fusion
CN114820741A (en) * 2022-04-29 2022-07-29 辽宁工程技术大学 Hyperspectral image full-waveband hyper-resolution reconstruction method
CN115861081A (en) * 2023-02-27 2023-03-28 耕宇牧星(北京)空间科技有限公司 Image super-resolution reconstruction method based on stepped multi-level wavelet network

Similar Documents

Publication Publication Date Title
CN109509160A (en) Hierarchical remote sensing image fusion method using layer-by-layer iterative super-resolution
CN110533620B (en) Hyperspectral and full-color image fusion method based on AAE extraction spatial features
Shao et al. Remote sensing image fusion with deep convolutional neural network
CN112634137B (en) Hyperspectral and panchromatic image fusion method for extracting multiscale spatial spectrum features based on AE
CN106952228B (en) Super-resolution reconstruction method of single image based on image non-local self-similarity
CN112507997B (en) Face super-resolution system based on multi-scale convolution and receptive field feature fusion
CN108830796B (en) Hyperspectral image super-resolution reconstruction method based on spectral-spatial combination and gradient domain loss
Huang et al. Deep hyperspectral image fusion network with iterative spatio-spectral regularization
CN106920214B (en) Super-resolution reconstruction method for space target image
CN114119444B (en) Multi-source remote sensing image fusion method based on deep neural network
CN116051428B (en) Deep learning-based combined denoising and superdivision low-illumination image enhancement method
CN113191325B (en) Image fusion method, system and application thereof
CN116152120A (en) Low-light image enhancement method and device integrating high-low frequency characteristic information
CN113793289A (en) Multi-spectral image and panchromatic image fuzzy fusion method based on CNN and NSCT
CN114511470B (en) Attention mechanism-based double-branch panchromatic sharpening method
CN114549361B (en) Image motion blur removing method based on improved U-Net model
Zhou et al. PAN-guided band-aware multi-spectral feature enhancement for pan-sharpening
CN117576483B (en) Multisource data fusion ground object classification method based on multiscale convolution self-encoder
CN113284067A (en) Hyperspectral panchromatic sharpening method based on depth detail injection network
CN117474764A (en) High-resolution reconstruction method for remote sensing image under complex degradation model
CN109615584B (en) SAR image sequence MAP super-resolution reconstruction method based on homography constraint
CN106846286B (en) Video super-resolution algorithm for reconstructing based on a variety of complementary priori
CN110807746B (en) Hyperspectral image sharpening method based on detail embedded injection convolutional neural network
Meenakshisundaram Quality assessment of IKONOS and Quickbird fused images for urban mapping
Karaca et al. MultiTempGAN: multitemporal multispectral image compression framework using generative adversarial networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190322